# LazyDoc.Providers.GithubAi (LazyDoc v0.5.0)
## Main functionality
The module LazyDoc.Providers.GithubAi provides a way of interacting with the GitHub AI API for prompt-based communication and response generation.
## Description
It implements the Provider behaviour, offering a standardized way to request and retrieve responses from AI models hosted on the GitHub AI platform. Key operations include sending prompts, constructing API requests, and processing responses.
## Functions
**Parameters**

- `params` - a keyword list containing parameters to check for validity.

**Description**

Verifies whether all provided parameters are among the allowed list of parameters.

**Returns**

`true` if all parameters are valid, `false` otherwise.
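A check like this is typically a one-liner over the keyword list. The function name and the allowed list below are illustrative sketches (the extracted docs do not show LazyDoc's actual names), not the library's real code:

```elixir
defmodule ParamCheckSketch do
  # Hypothetical allowed list; LazyDoc's actual list is not shown in these docs.
  @allowed_params [:temperature, :top_p, :max_tokens]

  # Returns true only if every key in the keyword list is an allowed parameter.
  def valid_params?(params) when is_list(params) do
    Enum.all?(params, fn {key, _val} -> key in @allowed_params end)
  end
end

ParamCheckSketch.valid_params?(temperature: 0.2, max_tokens: 256)
#=> true
ParamCheckSketch.valid_params?(stream: true)
#=> false
```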
---

```elixir
@spec get_docs_from_response(Req.Response.t()) :: binary()
```

**Parameters**

- `response` - a `%Req.Response{}` struct containing the response data.

**Description**

Extracts the message content from the first choice in the response body.

**Returns**

The content of the message as a string.
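Assuming the GitHub AI API returns an OpenAI-style chat-completion body (an assumption based on the description above, not confirmed by these docs), the extraction amounts to:

```elixir
# A Req.Response shaped like an OpenAI-style chat completion (assumed shape).
response = %Req.Response{
  status: 200,
  body: %{"choices" => [%{"message" => %{"content" => "Generated docs..."}}]}
}

# Equivalent to what get_docs_from_response/1 is described as doing:
response.body["choices"]
|> List.first()
|> get_in(["message", "content"])
#=> "Generated docs..."
```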
---

**Parameters**

- `model` - an atom representing the model to retrieve. It can be one of the following: `:codestral`, `:gpt_4o`, or `:gpt_4o_mini`.

**Description**

Retrieves the name of the model based on the provided atom.

**Returns**

The name of the model as a string corresponding to the given atom.
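Such a lookup is usually a simple atom-to-string mapping. The strings below are guesses based on GitHub's model catalog naming, not LazyDoc's actual return values:

```elixir
# Hypothetical mapping; the exact strings are not shown in these docs.
model_name = fn
  :codestral -> "Codestral-2501"
  :gpt_4o -> "gpt-4o"
  :gpt_4o_mini -> "gpt-4o-mini"
end

model_name.(:gpt_4o)
#=> "gpt-4o"
```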
---

**Parameters**

- `prompt` - the input text that the model will respond to.
- `model` - the specific model to be used for generating a response.
- `token` - the authentication token required to access the model.
- `params` - additional parameters for controlling the generation, such as `temperature`, `top_p`, and `max_tokens`.

**Description**

Prepares a request to the specified model with the given prompt and parameters.

**Returns**

A configured request object ready to be sent to the API.
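Preparing such a request with Req typically looks like the sketch below. The endpoint URL and JSON body shape are assumptions (an OpenAI-compatible chat API), not necessarily LazyDoc's exact implementation:

```elixir
token = System.fetch_env!("GITHUB_TOKEN")
prompt = "Write a one-line doc for String.trim/1"

# Builds (but does not send) a POST request; assumed endpoint and body shape.
request =
  Req.new(
    method: :post,
    url: "https://models.inference.ai.azure.com/chat/completions",
    headers: [{"authorization", "Bearer " <> token}],
    json: %{
      model: "gpt-4o",
      messages: [%{role: "user", content: prompt}],
      temperature: 0.2,
      max_tokens: 1024
    }
  )
```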
---

```elixir
@spec request_prompt(binary(), binary(), binary(), keyword()) ::
        {:ok, Req.Response.t()} | {:error, Exception.t()}
```

**Parameters**

- `prompt` - the text input that guides the generated response.
- `model` - the AI model used to process the prompt.
- `token` - the authorization token required for making the request.
- `params` - optional additional parameters for the request.

**Description**

Sends a request to the AI model with the provided prompt and parameters.

**Returns**

The response from the AI model based on the provided prompt and parameters.
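Putting the spec above together with `get_docs_from_response/1`, an end-to-end call might look like this. The model string `"gpt-4o"` and the option names are assumptions; only the two function names and their specs come from the docs above:

```elixir
token = System.fetch_env!("GITHUB_TOKEN")

case LazyDoc.Providers.GithubAi.request_prompt(
       "Write a one-line doc for String.trim/1",
       "gpt-4o",
       token,
       max_tokens: 200
     ) do
  {:ok, response} ->
    # Pull the generated text out of the first choice.
    LazyDoc.Providers.GithubAi.get_docs_from_response(response)

  {:error, exception} ->
    raise exception
end
```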