GenAI (GenAI Core v0.2.0)
Summary
Functions
Creates a new chat context.
Execute command.
Shorthand for the execute report command.
Run inference. Returns the chat completion and the updated thread state.
Run inference in streaming mode. Interstitial (dynamic) messages, if any, will be sent to the stream handler using the interstitial handle.
Set API key or API key constraint for inference. @todo we will need per-model keys for Ollama and Hugging Face.
Set API Org or API Org constraint for inference.
Append message to thread. @note Message may be dynamic/generated.
Append messages to thread. @note Messages may be dynamic/generated.
Set model or model selector constraint for inference.
Set safety setting for inference. @note Only fully supported by Gemini; backwards compatibility can be enabled via prompting but will be less reliable.
Set setting or setting selector constraint for inference.
Set inference setting. See GenAI.Session.
Set settings or setting selector constraints for inference.
Override streaming handler module.
Set tool for inference.
Set tools for inference.
Functions
chat(context_type \\ :default, options \\ nil)
Creates a new chat context.
execute(thread_context, command, context, options \\ nil)
Execute command.
# Notes
Used, for example, to retrieve a full report of a thread that ran an optimization-loop or data-loop command. Under usual processing, grid-search loops that are not final/accepted are not returned in the response, and a linear thread is returned. Execute mode, however, returns a graph of all runs (or metadata, depending on options) together with the grid-search configuration.
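As a hedged illustration only (the :report command atom, the context value, and the {:ok, ...} return shape are assumptions, not confirmed by this reference), retrieving such a report might look like:

```elixir
# Sketch only: the :report command atom and the {:ok, report} return
# shape are assumptions; consult the execute/4 typespecs for the real API.
{:ok, report} = GenAI.execute(thread_context, :report, context)

# Equivalent shorthand via report/3:
{:ok, report} = GenAI.report(thread_context, context)
```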
report(thread_context, context, options \\ nil)
Shorthand for the execute report command.
run(thread_context)
Run inference. Returns the chat completion and the updated thread state.
run(thread_context, context, options \\ nil)
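A minimal usage sketch, assuming the with_* functions compose with Elixir's pipe operator. The `model` and `message` values are placeholders for structs built elsewhere in GenAI Core, and the {:ok, completion, thread} return shape is an assumption:

```elixir
# Sketch only: `model` and `message` are placeholder values, and the
# {:ok, completion, thread} return shape is an assumption.
{:ok, completion, thread} =
  GenAI.chat()
  |> GenAI.with_model(model)
  |> GenAI.with_setting(:temperature, 0.7)
  |> GenAI.with_message(message)
  |> GenAI.run()
```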
stream(thread_context, context, options \\ nil)
Run inference in streaming mode. Interstitial (dynamic) messages, if any, will be sent to the stream handler using the interstitial handle.
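A streaming variant of the same sketch. MyStreamHandler is a hypothetical module standing in for an implementation of the library's stream-handler behaviour; it, `model`, `message`, and `context` are placeholders:

```elixir
# Sketch only: MyStreamHandler is hypothetical; interstitial (dynamic)
# messages, if any, are delivered to it during streaming.
GenAI.chat()
|> GenAI.with_model(model)
|> GenAI.with_message(message)
|> GenAI.with_stream_handler(MyStreamHandler)
|> GenAI.stream(context)
```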
with_api_key(thread_context, provider, api_key)
Set API key or API key constraint for inference. @todo we will need per-model keys for Ollama and Hugging Face.
with_api_org(thread_context, provider, api_org)
Set API Org or API Org constraint for inference.
with_message(thread_context, message, options \\ nil)
Append message to thread. @note Message may be dynamic/generated.
with_messages(thread_context, messages, options \\ nil)
Append messages to thread. @note Messages may be dynamic/generated.
with_model(thread_context, model)
Set model or model selector constraint for inference.
with_model_setting(thread_context, model_setting)
with_model_setting(thread_context, model, setting, value)
with_provider_setting(thread_context, provider_setting)
with_provider_setting(thread_context, provider, setting, value)
with_provider_settings(thread_context, provider_settings)
with_provider_settings(thread_context, provider, provider_settings)
with_safety_setting(thread_context, safety_setting_object)
with_safety_setting(thread_context, safety_setting, threshold)
Set safety setting for inference. @note Only fully supported by Gemini; backwards compatibility can be enabled via prompting but will be less reliable.
with_setting(thread_context, setting_object)
Set setting or setting selector constraint for inference.
with_setting(thread_context, setting, value)
Set inference setting. See GenAI.Session.
with_settings(thread_context, setting_object)
Set settings or setting selector constraints for inference.
with_stream_handler(context, handler, options \\ nil)
Override streaming handler module.
with_tool(thread_context, tool)
Set tool for inference.
with_tools(thread_context, tools)
Set tools for inference.