AWS.BedrockRuntime (aws-elixir v1.0.3)
Describes the API operations for running inference using Amazon Bedrock models.
Summary
Functions
The action to apply a guardrail.
Sends messages to the specified Amazon Bedrock model.
Sends messages to the specified Amazon Bedrock model and returns the response in a stream.
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body.
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body, returning the response in a stream.
Functions
apply_guardrail(client, guardrail_identifier, guardrail_version, input, options \\ [])
The action to apply a guardrail.
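A minimal sketch of calling this function with aws-elixir. The credentials, region, guardrail identifier, and version below are placeholders, and the {:ok, body, http_response} return shape is assumed from the library's usual convention; verify both against your aws-elixir version.

```elixir
# Sketch only: placeholders throughout; running this requires valid AWS
# credentials and an existing guardrail.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  # Assess user input ("INPUT") or model output ("OUTPUT").
  "source" => "INPUT",
  "content" => [
    %{"text" => %{"text" => "Some user-provided text to assess."}}
  ]
}

case AWS.BedrockRuntime.apply_guardrail(client, "gr-abc123", "1", input) do
  {:ok, %{"action" => action} = body, _http_response} ->
    # action is "GUARDRAIL_INTERVENED" or "NONE"; "outputs" holds any
    # replacement text the guardrail produced.
    IO.inspect({action, body["outputs"]})

  {:error, reason} ->
    IO.inspect(reason, label: "apply_guardrail failed")
end
```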
converse(client, model_id, input, options \\ [])
Sends messages to the specified Amazon Bedrock model. Converse provides a consistent interface that works with all models that support messages, which allows you to write code once and use it with different models. If a model has unique inference parameters, you can also pass those parameters to the model.
Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.
You can submit a prompt by including it in the messages field, specifying the modelId of a foundation model or inference profile to run inference on it, and including any other fields that are relevant to your use case.

You can also submit a prompt from Prompt management by specifying the ARN of the prompt version and including a map of variables to values in the promptVariables field. You can append more messages to the prompt by using the messages field. If you use a prompt from Prompt management, you can't include the following fields in the request: additionalModelRequestFields, inferenceConfig, system, or toolConfig. Instead, these fields must be defined through Prompt management. For more information, see Use a prompt from Prompt management.

For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide.

For example code, see Converse API examples in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModel action.
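A hedged sketch of a Converse call with aws-elixir: the credentials, region, and model ID are placeholders, and the {:ok, body, http_response} return shape is assumed from the library's usual convention.

```elixir
# Sketch only: placeholders for credentials and model ID; requires valid
# AWS credentials and access to the chosen model.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "messages" => [
    %{"role" => "user", "content" => [%{"text" => "Explain OTP supervisors in one sentence."}]}
  ],
  # Common inference parameters work across models; model-specific ones
  # can go in "additionalModelRequestFields".
  "inferenceConfig" => %{"maxTokens" => 256, "temperature" => 0.5}
}

case AWS.BedrockRuntime.converse(client, "anthropic.claude-3-haiku-20240307-v1:0", input) do
  {:ok, %{"output" => %{"message" => message}}, _http_response} ->
    # The assistant reply is a list of content blocks.
    IO.inspect(message["content"])

  {:error, reason} ->
    IO.inspect(reason, label: "converse failed")
end
```

Because the interface is model-agnostic, switching models is just a matter of changing the model_id argument, as long as the target model supports messages.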
converse_stream(client, model_id, input, options \\ [])
Sends messages to the specified Amazon Bedrock model and returns the response in a stream. ConverseStream provides a consistent API that works with all Amazon Bedrock models that support messages, which allows you to write code once and use it with different models. Should a model have unique inference parameters, you can also pass those parameters to the model.

To find out if a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response.

The CLI doesn't support streaming operations in Amazon Bedrock, including ConverseStream.
Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.
You can submit a prompt by including it in the messages field, specifying the modelId of a foundation model or inference profile to run inference on it, and including any other fields that are relevant to your use case.

You can also submit a prompt from Prompt management by specifying the ARN of the prompt version and including a map of variables to values in the promptVariables field. You can append more messages to the prompt by using the messages field. If you use a prompt from Prompt management, you can't include the following fields in the request: additionalModelRequestFields, inferenceConfig, system, or toolConfig. Instead, these fields must be defined through Prompt management. For more information, see Use a prompt from Prompt management.

For information about the Converse API, see Use the Converse API in the Amazon Bedrock User Guide. To use a guardrail, see Use a guardrail with the Converse API in the Amazon Bedrock User Guide. To use a tool with a model, see Tool use (Function calling) in the Amazon Bedrock User Guide.

For example code, see Conversation streaming example in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModelWithResponseStream action.
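The calling convention mirrors converse/4, but the HTTP response body is an AWS event stream (application/vnd.amazon.eventstream), and how much decoding aws-elixir does for you varies by version. A hedged sketch, with placeholders for credentials and model ID:

```elixir
# Sketch only: placeholders throughout. Depending on the aws-elixir version,
# you may need to decode the raw eventstream frames from the HTTP response
# yourself rather than receiving parsed events.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

input = %{
  "messages" => [
    %{"role" => "user", "content" => [%{"text" => "Stream me a haiku."}]}
  ]
}

case AWS.BedrockRuntime.converse_stream(client, "anthropic.claude-3-haiku-20240307-v1:0", input) do
  {:ok, body, _http_response} ->
    # Inspect the streamed events (contentBlockDelta, messageStop, ...).
    IO.inspect(body)

  {:error, reason} ->
    IO.inspect(reason, label: "converse_stream failed")
end
```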
invoke_model(client, model_id, input, options \\ [])
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. You use model inference to generate text, images, and embeddings.

For example code, see Invoke model code examples in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModel action.
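Unlike Converse, InvokeModel sends a model-specific JSON body. A hedged sketch: the credentials and model ID are placeholders, the "body"/"contentType"/"accept" input keys are an assumption based on the operation's request shape, and Jason is assumed as the JSON encoder; verify all of these against your aws-elixir version.

```elixir
# Sketch only: placeholders throughout; the input keys and JSON codec are
# assumptions, not confirmed library behavior.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

# The payload format is model-specific; this one follows the Amazon Titan
# text shape as an illustration.
payload = Jason.encode!(%{"inputText" => "Write a one-line greeting."})

input = %{
  "body" => payload,
  "contentType" => "application/json",
  "accept" => "application/json"
}

case AWS.BedrockRuntime.invoke_model(client, "amazon.titan-text-express-v1", input) do
  {:ok, body, _http_response} ->
    # The response body is also model-specific.
    IO.inspect(body)

  {:error, reason} ->
    IO.inspect(reason, label: "invoke_model failed")
end
```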
invoke_model_with_response_stream(client, model_id, input, options \\ [])
Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is returned in a stream.

To see if a model supports streaming, call GetFoundationModel and check the responseStreamingSupported field in the response.

The CLI doesn't support streaming operations in Amazon Bedrock, including InvokeModelWithResponseStream.

For example code, see Invoke model with streaming code example in the Amazon Bedrock User Guide.

This operation requires permission for the bedrock:InvokeModelWithResponseStream action.
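A hedged sketch combining the model-specific body of invoke_model with a streamed response. As with ConverseStream, the response is an AWS event stream, so check how your aws-elixir version surfaces it; the credentials, model ID, input keys, and Jason codec below are all assumptions.

```elixir
# Sketch only: placeholders throughout. The streamed body may arrive as raw
# eventstream frames depending on the aws-elixir version.
client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")

payload = Jason.encode!(%{"inputText" => "Stream a short story."})
input = %{"body" => payload, "contentType" => "application/json"}

case AWS.BedrockRuntime.invoke_model_with_response_stream(
       client,
       "amazon.titan-text-express-v1",
       input
     ) do
  {:ok, body, _http_response} ->
    IO.inspect(body)

  {:error, reason} ->
    IO.inspect(reason, label: "stream invocation failed")
end
```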