LazyDoc.Providers.GithubAi (LazyDoc v0.5.3)


Main functionality

The LazyDoc.Providers.GithubAi module provides a way of interacting with the GitHub AI API for prompt-based communication and response generation.

Description

It implements the Provider behaviour, offering a standardized way to request and retrieve responses from AI models hosted on the GitHub AI platform. Key operations include sending prompts, constructing API requests, and processing responses.
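
For orientation, here is a minimal end-to-end sketch. The model atom, parameter keys, and environment variable name are illustrative assumptions rather than confirmed defaults.

    # Illustrative sketch; model atom, parameter keys and env var name are assumptions.
    alias LazyDoc.Providers.GithubAi

    token = System.fetch_env!("GITHUB_TOKEN")
    model = GithubAi.model(:gpt_4o_mini)

    {:ok, response} =
      GithubAi.request_prompt("Write docs for Enum.map/2.", model, token, max_tokens: 300)

    docs = GithubAi.get_docs_from_response(response)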

Summary

Functions

check_parameters?(params)

Returns a boolean indicating whether all provided parameters are valid.

get_docs_from_response(response)

Returns the content of the message from the response.

model(model)

Returns the corresponding model name based on the provided model atom.

req_query(prompt, model, token, params \\ [])

Returns a request configuration for querying a model with the specified parameters.

request_prompt(prompt, model, token, params \\ [])

Returns the result of a request made with a prompt using a specified model and token.

Functions

check_parameters?(params)

@spec check_parameters?(params :: keyword()) :: boolean()

Returns a boolean indicating whether all provided parameters are valid.

Parameters

  • params - a keyword list of parameters to be checked.

Description

Validates the parameters against a predefined list of acceptable keys.
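
A quick sketch of how this might be called; the keys shown (max_tokens, temperature, top_p) are assumptions based on the options listed for req_query/4, not a confirmed list.

    # Hypothetical keys; the real accepted list is defined by the module.
    LazyDoc.Providers.GithubAi.check_parameters?(max_tokens: 500, temperature: 0.7, top_p: 1.0)
    #=> true

    LazyDoc.Providers.GithubAi.check_parameters?(not_a_real_option: :oops)
    #=> false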

get_docs_from_response(response)

@spec get_docs_from_response(Req.Response.t()) :: binary()

Returns the content of the message from the response.

Parameters

  • response - a %Req.Response{} struct whose body contains the generated message.

Description

Extracts the message content from the response body.
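
Assuming the GitHub AI API returns an OpenAI-style chat completion body, extraction would look roughly like the following; the exact body shape is an assumption.

    # Assumed response shape (OpenAI-style chat completion); not confirmed by the module docs.
    response = %Req.Response{
      status: 200,
      body: %{"choices" => [%{"message" => %{"content" => "Generated documentation."}}]}
    }

    LazyDoc.Providers.GithubAi.get_docs_from_response(response)
    #=> "Generated documentation."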

model(model)

@spec model(atom()) :: binary()

Returns the corresponding model name based on the provided model atom.

Parameters

  • model - an atom identifying the model (:codestral, :gpt_4o, or :gpt_4o_mini).

Description

Returns the name of the model as a string based on the input atom.
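
For example (the exact strings returned are assumptions based on common GitHub AI model names, not verified against the source):

    # Return values shown are assumptions.
    LazyDoc.Providers.GithubAi.model(:gpt_4o)
    #=> "gpt-4o"

    LazyDoc.Providers.GithubAi.model(:gpt_4o_mini)
    #=> "gpt-4o-mini"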

req_query(prompt, model, token, params \\ [])

Returns a request configuration for querying a model with the specified parameters.

Parameters

  • prompt - The input text that will guide the model's response.
  • model - The identifier of the model to use for generating the output.
  • token - The authentication token for accessing the model's API.
  • params - Optional parameters such as temperature, top_p, and max_tokens to fine-tune the model's output.

Description

Constructs a request to the model API with specified settings and configurations.
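
A sketch of building and running the returned configuration, assuming it is a Req request that can be passed to Req.request/1; the option keys are assumptions.

    # Sketch; assumes the return value is a Req request usable with Req.request/1.
    request =
      LazyDoc.Providers.GithubAi.req_query(
        "Document this function.",
        LazyDoc.Providers.GithubAi.model(:gpt_4o_mini),
        System.fetch_env!("GITHUB_TOKEN"),
        temperature: 0.2,
        max_tokens: 512
      )

    {:ok, response} = Req.request(request)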

request_prompt(prompt, model, token, params \\ [])

@spec request_prompt(binary(), binary(), binary(), keyword()) ::
  {:ok, Req.Response.t()} | {:error, Exception.t()}

Returns the result of a request made with a prompt using a specified model and token.

Parameters

  • prompt - The input text or question to be sent in the request.
  • model - The model that will process the input prompt.
  • token - The authentication token required to access the API.
  • params - Optional parameters for customizing the request.

Description

Performs a request to the API with the provided prompt and model.
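
Typical usage, matching the spec above; the model string and option keys are illustrative assumptions.

    # Illustrative call; handle both tuples returned per the spec.
    case LazyDoc.Providers.GithubAi.request_prompt(
           "Summarize this module.",
           "gpt-4o-mini",
           System.fetch_env!("GITHUB_TOKEN"),
           max_tokens: 300
         ) do
      {:ok, %Req.Response{} = response} ->
        LazyDoc.Providers.GithubAi.get_docs_from_response(response)

      {:error, exception} ->
        {:error, Exception.message(exception)}
    end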