WiseGPTEx (WiseGPTEx v0.6.0)

Documentation for WiseGPTEx.

This module provides functions to obtain the best completion from OpenAI models (default: "gpt-3.5-turbo") via the OpenAI chat completions endpoint (https://api.openai.com/v1/chat/completions). The get_best_completion/2 and get_best_completion_with_resolver/2 functions take a question and an optional list of options to configure the API request. get_best_completion_with_resolver/2 adds a secondary step that resolves the best completion among the candidates, at the cost of one additional API call, in exchange for a more accurate response.

Examples

Basic usage:

iex> WiseGPTEx.get_best_completion("What is the capital of France?")
{:ok, "Paris"}

iex> WiseGPTEx.get_best_completion_with_resolver("What is the capital of France?")
{:ok, "Paris"}

Using a raw completion:

iex> messages = [
...>   %{"role" => "system", "content" => "You are a High School Geography Teacher"},
...>   %{"role" => "user", "content" => "What was the capital of France in the 15th century?"}
...> ]
...> WiseGPTEx.openai_completion(messages)
{:ok, "The capital of France in the 15th century was Paris."}

Using all available options:

iex> opts = [model: "gpt-4", temperature: 0.7, num_completions: 5, timeout: 3_600_000]
iex> WiseGPTEx.get_best_completion("What is the capital of France?", opts)
{:ok, "Paris"}

iex> WiseGPTEx.get_best_completion_with_resolver("What is the capital of France?", opts)
{:ok, "Paris"}

Note that the examples for get_best_completion_with_resolver/2 look identical to those for get_best_completion/2. This is because the two functions differ in how they select the best completion, not in their usage or in the shape of their inputs and outputs. get_best_completion_with_resolver/2 performs an additional API call to resolve the best completion, which can be beneficial for complex or ambiguous queries.

Anthropic API usage:

iex> WiseGPTEx.anthropic_completion("Why is the sky blue?")
{:ok, "The sky is blue because... [detailed explanation]"}

Options

The following options can be passed to the get_best_completion/2 and get_best_completion_with_resolver/2 functions:

  • :model - The name of the model to use (default: "gpt-3.5-turbo"). All OpenAI models are supported.
  • :temperature - Controls the randomness of the model's output. Higher values result in more diverse responses (default: 0.5).
  • :num_completions - The number of completions to generate (default: 3).
  • :timeout - The maximum time in milliseconds to wait for a response from the OpenAI API (default: 3_600_000 ms, or 60 minutes).

For anthropic_completion/2, the following options can be passed:

  • :model - The version of the Claude model to use (default: "claude-2").
  • :temperature - Controls the randomness of the model's output (default: 0.1).
  • :max_tokens_to_sample - Maximum number of tokens to generate (default: 100,000).
  • :timeout - Maximum time in milliseconds to wait for a response (default: 3,600,000 ms, or 60 minutes).
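All of these options can be passed together in a single call. The option values and the response below are illustrative, not literal API output:

iex> opts = [model: "claude-2", temperature: 0.1, max_tokens_to_sample: 1_000, timeout: 60_000]
iex> WiseGPTEx.anthropic_completion("Why is the sky blue?", opts)
{:ok, "The sky is blue because... [detailed explanation]"}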

Summary

Functions

Retrieves a completion from the Anthropic API using the Claude model.

get_best_completion/2 attempts to answer a given question using OpenAI's completion endpoint.

get_best_completion_with_resolver/2 is similar to get_best_completion/2 but uses a secondary step to resolve the best completion among the options.

Gets a raw completion from the OpenAI API without additional prompting.

Functions

anthropic_completion(message, opts \\ [])

@spec anthropic_completion(binary(), Keyword.t()) :: {:ok, binary()} | {:error, any()}

Retrieves a completion from the Anthropic API using the Claude model.

Params:

  • message: A string containing the prompt or question.
  • opts: A keyword list of options to configure the API request.

Returns:

  • {:ok, binary()}: The completion for the given prompt.
  • {:error, any()}: An error message in case of failure.

This function provides a direct way to interact with the Anthropic API, allowing for detailed control over the completion request.

Example:

iex> WiseGPTEx.anthropic_completion("Why is the sky blue?")
{:ok, "The sky is blue because... [detailed explanation]"}

get_best_completion(question, opts \\ [])

@spec get_best_completion(binary(), Keyword.t()) :: {:ok, binary()} | {:error, any()}

get_best_completion/2 attempts to answer a given question using OpenAI's completion endpoint.

Params:

  • question: a binary string containing the question to be answered
  • opts: a keyword list of options to configure the API request

Returns:

  • {:ok, binary()}: the best completion for the given question
  • {:error, any()}: an error message in the case of failure

Example:

iex> WiseGPTEx.get_best_completion("What is the capital of France?")
{:ok, "Paris"}

Options:

The function accepts the following options:

  • :model - The name of the model to use (default: "gpt-3.5-turbo"). All OpenAI models are supported.
  • :temperature - Controls the randomness of the model's output. Higher values result in more diverse responses (default: 0.5).
  • :num_completions - The number of completions to generate (default: 3).
  • :timeout - The maximum time in milliseconds to wait for a response from the OpenAI API (default: 3_600_000 ms, or 60 minutes).
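As shown in the module overview, all of these options can be combined in one call (the response below is illustrative):

iex> opts = [model: "gpt-4", temperature: 0.7, num_completions: 5, timeout: 3_600_000]
iex> WiseGPTEx.get_best_completion("What is the capital of France?", opts)
{:ok, "Paris"}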

get_best_completion_with_resolver(question, opts \\ [])

@spec get_best_completion_with_resolver(binary(), Keyword.t()) ::
  {:ok, binary()} | {:error, any()}

get_best_completion_with_resolver/2 is similar to get_best_completion/2 but uses a secondary step to resolve the best completion among the options.

This function will perform an additional API call to get a more accurate completion, which can be beneficial for complex or ambiguous queries.

Params:

  • question: a binary string containing the question to be answered
  • opts: a keyword list of options to configure the API request

Returns:

  • {:ok, binary()}: the best completion for the given question
  • {:error, any()}: an error message in the case of failure

Example:

iex> WiseGPTEx.get_best_completion_with_resolver("What is the capital of France?")
{:ok, "Paris"}

Options:

The function accepts the following options:

  • :model - The name of the model to use (default: "gpt-3.5-turbo"). All OpenAI models are supported.
  • :temperature - Controls the randomness of the model's output. Higher values result in more diverse responses (default: 0.5).
  • :num_completions - The number of completions to generate (default: 3).
  • :timeout - The maximum time in milliseconds to wait for a response from the OpenAI API (default: 3_600_000 ms, or 60 minutes).
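As with get_best_completion/2, all options can be combined in one call (the response below is illustrative):

iex> opts = [model: "gpt-4", temperature: 0.7, num_completions: 5, timeout: 3_600_000]
iex> WiseGPTEx.get_best_completion_with_resolver("What is the capital of France?", opts)
{:ok, "Paris"}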

openai_completion(messages, opts \\ [])

@spec openai_completion([map()], Keyword.t()) :: {:ok, binary()} | {:error, any()}

Gets a raw completion from the OpenAI API without additional prompting.

Params:

  • messages: A list of messages to send to the API. Each message should be a map with keys "role" and "content", for example:
      iex> messages = [
      ...>   %{"role" => "system", "content" => "You are a High School Geography Teacher"},
      ...>   %{"role" => "user", "content" => "What was the capital of France in the 15th century?"}
      ...> ]
  • opts: a keyword list of options to configure the API request

Returns:

  • {:ok, binary()}: the completion for the given question
  • {:error, any()}: an error message in the case of failure

This allows you to customize the conversation sent to the API without any additional prompting added.

Example:

    iex> messages = [
    ...>   %{"role" => "system", "content" => "You are a High School Geography Teacher"},
    ...>   %{"role" => "user", "content" => "What was the capital of France in the 15th century?"}
    ...> ]
    ...> WiseGPTEx.openai_completion(messages)
    {:ok, "The capital of France in the 15th century was Paris."}