AWS.BedrockRuntime (aws-elixir v1.0.2)

Describes the API operations for running inference using Amazon Bedrock models.

Summary

Functions

converse(client, model_id, input, options \\ [])

Sends messages to the specified Amazon Bedrock model.

converse_stream(client, model_id, input, options \\ [])

Sends messages to the specified Amazon Bedrock model and returns the response in a stream.

invoke_model(client, model_id, input, options \\ [])

Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body.

invoke_model_with_response_stream(client, model_id, input, options \\ [])

Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is returned in a stream.

Functions

converse(client, model_id, input, options \\ [])

Sends messages to the specified Amazon Bedrock model.

Converse provides a consistent interface that works with all models that support messages. This allows you to write code once and use it with different models. If a model has unique inference parameters, you can also pass those parameters to the model. For more information, see Run inference in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModel action.
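A minimal sketch of calling converse/4. The credentials, region, model ID, and response-key access below are illustrative assumptions, not part of this documentation; the input map follows the Amazon Bedrock Converse request shape:

```elixir
# Build a client (placeholder credentials and region).
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# Request body in the Converse API shape: a list of messages,
# each with a role and a list of content blocks.
input = %{
  "messages" => [
    %{"role" => "user", "content" => [%{"text" => "Summarize Elixir in one sentence."}]}
  ]
}

# "anthropic.claude-3-haiku-20240307-v1:0" is an example model ID.
case AWS.BedrockRuntime.converse(client, "anthropic.claude-3-haiku-20240307-v1:0", input) do
  {:ok, result, _http_response} ->
    # The assistant's reply is expected under "output" -> "message".
    IO.inspect(result["output"]["message"])

  {:error, reason} ->
    IO.inspect(reason)
end
```

Because Converse uses one request shape for every message-capable model, swapping models is a matter of changing the model ID argument.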

converse_stream(client, model_id, input, options \\ [])

Sends messages to the specified Amazon Bedrock model and returns the response in a stream.

ConverseStream provides a consistent API that works with all Amazon Bedrock models that support messages. This allows you to write code once and use it with different models. If a model has unique inference parameters, you can also pass those parameters to the model. For more information, see Run inference in the Amazon Bedrock User Guide.

To find out if a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response.

For example code, see Invoke model with streaming code example in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModelWithResponseStream action.
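A sketch of calling converse_stream/4, assuming a client from AWS.Client.create/3 and the Converse request shape; the credentials and model ID are placeholders:

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "messages" => [
    %{"role" => "user", "content" => [%{"text" => "Write a haiku about streams."}]}
  ]
}

# Unlike converse/4, the response body is an event stream of message
# deltas rather than a single JSON document; how the chunks are
# surfaced depends on the HTTP client aws-elixir is configured with.
{:ok, _result, http_response} =
  AWS.BedrockRuntime.converse_stream(client, "anthropic.claude-3-haiku-20240307-v1:0", input)

IO.inspect(http_response)
```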

invoke_model(client, model_id, input, options \\ [])

Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body.

You use model inference to generate text, images, and embeddings.

For example code, see Invoke model code examples in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModel action.
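A sketch of calling invoke_model/4. Unlike Converse, the payload here is model-specific; the "body"/"contentType" keys, the Titan-style request fields, the Jason dependency for JSON encoding, and the credentials are all assumptions for illustration:

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# InvokeModel takes a raw, model-specific payload. The key below follows
# the Amazon Titan text request format; other model families expect
# different fields.
payload = %{"inputText" => "Explain pattern matching briefly."}

input = %{
  "body" => Jason.encode!(payload),
  "contentType" => "application/json"
}

# "amazon.titan-text-express-v1" is an example model ID.
{:ok, result, _http_response} =
  AWS.BedrockRuntime.invoke_model(client, "amazon.titan-text-express-v1", input)

IO.inspect(result)
```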

invoke_model_with_response_stream(client, model_id, input, options \\ [])

Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body.

The response is returned in a stream.

To see if a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response.

The AWS CLI doesn't support InvokeModelWithResponseStream.

For example code, see Invoke model with streaming code example in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModelWithResponseStream action.
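A sketch of calling invoke_model_with_response_stream/4, under the same kind of assumptions as the other examples (placeholder credentials, an example model ID, a Titan-style JSON body encoded with Jason):

```elixir
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "body" => Jason.encode!(%{"inputText" => "Tell me a short story."}),
  "contentType" => "application/json"
}

# As with ConverseStream, the response arrives as an event stream; the
# chunks are consumed through whichever HTTP client aws-elixir uses.
{:ok, _result, http_response} =
  AWS.BedrockRuntime.invoke_model_with_response_stream(
    client,
    "amazon.titan-text-express-v1",
    input
  )

IO.inspect(http_response)
```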