google_api_dialogflow v0.43.0 API Reference

Modules

API client metadata for GoogleApi.Dialogflow.V2.

API calls for all endpoints tagged Projects.

Handle Tesla connections for GoogleApi.Dialogflow.V2.
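A minimal sketch of opening such a connection, assuming OAuth2 tokens come from Goth (not part of this package) and its classic Goth.Token.for_scope/1 API; adapt to however your application fetches credentials.

```elixir
# Assumed token setup: Goth is installed and configured separately.
{:ok, %{token: token}} =
  Goth.Token.for_scope("https://www.googleapis.com/auth/cloud-platform")

# Build the Tesla-backed connection used by the Api.* modules below.
conn = GoogleApi.Dialogflow.V2.Connection.new(token)
```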

Metadata associated with the long running operation for Versions.CreateVersion.

Represents page information communicated to and from the webhook.

Represents a response message that can be returned by a conversational agent. Response messages are also used for output audio synthesis. The approach is as follows: (1) If at least one OutputAudioText response is present, then all OutputAudioText responses are linearly concatenated, and the result is used for output audio synthesis. If the OutputAudioText responses are a mixture of text and SSML, then the concatenated result is treated as SSML; otherwise, the result is treated as either text or SSML as appropriate. The agent designer should ideally use either text or SSML consistently throughout the bot design. (2) Otherwise, all Text responses are linearly concatenated, and the result is used for output audio synthesis. This approach allows for more sophisticated user experience scenarios, where the text displayed to the user may differ from what is heard.

Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about. Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess. You may set this, for example: in the entry_fulfillment of a Page if entering the page indicates that the conversation succeeded, or in a webhook response when you determine that you handled the customer issue.

Indicates that interaction with the Dialogflow agent has ended. This message is generated by Dialogflow only and not supposed to be defined by the user.

Indicates that the conversation should be handed off to a live agent. Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures. You may set this, for example: in the entry_fulfillment of a Page if entering the page indicates something went extremely wrong in the conversation, or in a webhook response when you determine that the customer issue can only be handled by a human.

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs. The external URIs are specified via play_audio. This message is generated by Dialogflow only and not supposed to be defined by the user.

A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.

Specifies an audio clip to be played by the client as part of the response.

Represents session information communicated to and from the webhook.

Represents fulfillment information communicated to the webhook.

Represents intent information communicated to the webhook.

A Dialogflow agent is a virtual agent that handles conversations with your end-users. It is a natural language understanding module that understands the nuances of human language. Dialogflow translates end-user text or audio during a conversation to structured data that your apps and services can understand. You design and build a Dialogflow agent to handle the types of conversations required for your system. For more information about agents, see the Agent guide.

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

The request message for EntityTypes.BatchCreateEntities.

The request message for EntityTypes.BatchDeleteEntities.

The request message for EntityTypes.BatchDeleteEntityTypes.

The request message for Intents.BatchDeleteIntents.

The request message for EntityTypes.BatchUpdateEntities.

The request message for EntityTypes.BatchUpdateEntityTypes.

The response message for EntityTypes.BatchUpdateEntityTypes.

The request message for Intents.BatchUpdateIntents.

Attributes

  • intentBatchInline (type: GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2IntentBatch.t, default: nil) - The collection of intents to update or create.
  • intentBatchUri (type: String.t, default: nil) - The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://".
  • intentView (type: String.t, default: nil) - Optional. The resource view to apply to the returned intent.
  • languageCode (type: String.t, default: nil) - Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.
  • updateMask (type: String.t, default: nil) - Optional. The mask to control which fields get updated.
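As a rough sketch, the request struct above can be built directly from Elixir; the bucket path and update mask are placeholders, and only one of intentBatchUri or intentBatchInline should be set.

```elixir
alias GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2BatchUpdateIntentsRequest

# Placeholder GCS object; the file must live under a gs:// URI.
request = %GoogleCloudDialogflowV2BatchUpdateIntentsRequest{
  intentBatchUri: "gs://my-bucket/intents.json",
  intentView: "INTENT_VIEW_FULL",
  languageCode: "en",
  # Illustrative field mask controlling which intent fields get updated.
  updateMask: "displayName,trainingPhrases"
}
```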

The response message for Intents.BatchUpdateIntents.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

The message returned from the DetectIntent method.
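A hedged sketch of obtaining that response through this package. The project and session IDs are placeholders, `conn` is a connection as above, and the generated function name and arity follow the library's path-based naming convention, so verify them against GoogleApi.Dialogflow.V2.Api.Projects.

```elixir
alias GoogleApi.Dialogflow.V2.Api.Projects
alias GoogleApi.Dialogflow.V2.Model.{
  GoogleCloudDialogflowV2DetectIntentRequest,
  GoogleCloudDialogflowV2QueryInput,
  GoogleCloudDialogflowV2TextInput
}

body = %GoogleCloudDialogflowV2DetectIntentRequest{
  queryInput: %GoogleCloudDialogflowV2QueryInput{
    text: %GoogleCloudDialogflowV2TextInput{text: "book a haircut on Friday", languageCode: "en"}
  }
}

# Assumed signature: the session path is split into its projects/sessions segments.
{:ok, response} =
  Projects.dialogflow_projects_agent_sessions_detect_intent(
    conn,
    "my-project",     # placeholder project ID
    "my-session-id",  # placeholder session ID
    body: body
  )

# response should be a %GoogleCloudDialogflowV2DetectIntentResponse{} with queryResult populated.
```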

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

This message is a wrapper around a collection of entity types.

An entity entry for an associated entity type.

You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, and so on. For more information, see the versions and environments guide.

Events allow for matching intents by event name instead of the natural language input. For instance, an input event named welcome_event can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".

Attributes

  • languageCode (type: String.t, default: nil) - Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • name (type: String.t, default: nil) - Required. The unique identifier of the event.
  • parameters (type: map(), default: nil) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs. MapKey type: string. MapKey value: parameter name. MapValue type: if the parameter's entity type is a composite entity, a map; otherwise a string or number, depending on the parameter value type. MapValue value: if the parameter's entity type is a composite entity, a map from composite entity property names to property values; otherwise, the parameter value.
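A sketch of building such an event input and wrapping it in a QueryInput; the event name and parameter mirror the welcome_event example above and are illustrative.

```elixir
alias GoogleApi.Dialogflow.V2.Model.{
  GoogleCloudDialogflowV2EventInput,
  GoogleCloudDialogflowV2QueryInput
}

# Trigger the agent's welcome intent via an event rather than user text.
query_input = %GoogleCloudDialogflowV2QueryInput{
  event: %GoogleCloudDialogflowV2EventInput{
    name: "welcome_event",
    languageCode: "en",
    # Illustrative parameter, consumed as #welcome_event.name in the response.
    parameters: %{"name" => "Sam"}
  }
}
```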

By default, your agent responds to a matched intent with a static response. As an alternative, you can provide a more dynamic response by using fulfillment. When you enable fulfillment for an intent, Dialogflow responds to that intent by calling a service that you define. For example, if an end-user wants to schedule a haircut on Friday, your service can check your database and respond to the end-user with availability information for Friday. For more information, see the fulfillment guide.

Whether fulfillment is enabled for the specific feature.

Represents configuration for a generic web service. Dialogflow supports two mechanisms for authentication: - Basic authentication with username and password. - Authentication with additional authentication headers. More information can be found at: https://cloud.google.com/dialogflow/docs/fulfillment-configure.
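A sketch of wiring such a service into a Fulfillment using the additional-headers mechanism; the endpoint URL and header value are placeholders, and the field names should be verified against the Fulfillment model modules.

```elixir
alias GoogleApi.Dialogflow.V2.Model.{
  GoogleCloudDialogflowV2Fulfillment,
  GoogleCloudDialogflowV2FulfillmentGenericWebService
}

# Placeholder webhook endpoint secured by an API-key request header.
fulfillment = %GoogleCloudDialogflowV2Fulfillment{
  enabled: true,
  genericWebService: %GoogleCloudDialogflowV2FulfillmentGenericWebService{
    uri: "https://example.com/dialogflow-webhook",
    requestHeaders: %{"x-api-key" => "REPLACE_ME"}
  }
}
```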

Instructs the speech recognizer how to process the audio content.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

This message is a wrapper around a collection of intents.

Represents a single followup intent in the chain.

A rich response message. Corresponds to the intent Response field in the Dialogflow console. For more information, see Rich response messages.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Represents an example that the agent is trained on.

The response message for Contexts.ListContexts.

The response message for EntityTypes.ListEntityTypes.

The response message for Environments.ListEnvironments.

The response message for Intents.ListIntents.

The response message for SessionEntityTypes.ListSessionEntityTypes.

Represents a message posted into a conversation.

Represents the result of annotation for the message.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Instructs the speech synthesizer on how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.

Represents the query input. It can contain one of: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger.

Represents the parameters of the conversational query.

Represents the result of conversational query or event processing.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

Configures the types of sentiment analysis to perform.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.
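As a sketch, per-request sentiment analysis can be enabled through the query parameters; the field names below reflect the V2 models as understood here and should be verified against the model docs.

```elixir
alias GoogleApi.Dialogflow.V2.Model.{
  GoogleCloudDialogflowV2QueryParameters,
  GoogleCloudDialogflowV2SentimentAnalysisRequestConfig
}

# Ask Dialogflow to analyze the sentiment of the query text for one request;
# pass this struct as the queryParams of a DetectIntentRequest.
params = %GoogleCloudDialogflowV2QueryParameters{
  sentimentAnalysisRequestConfig: %GoogleCloudDialogflowV2SentimentAnalysisRequestConfig{
    analyzeQueryTextSentiment: true
  }
}
```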

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

Hints for the speech recognizer to help with recognition in a specific conversation state.

Configuration of how speech should be synthesized.

Represents the natural language text to be processed.

Description of which voice to use for speech synthesis.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: - Use "" for empty strings - Use {} or null for empty objects - Use [] or null for empty arrays For more information, see the Protocol Buffers Language Guide.
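To make the empty-value guidance concrete, here is a minimal sketch of a webhook reply built as an Elixir map before JSON encoding; the field names come from WebhookResponse, and the fulfillment text is illustrative.

```elixir
# "" for empty strings, [] for empty arrays, %{} (an empty JSON object) for
# empty objects -- never an empty value of the wrong type.
webhook_reply = %{
  "fulfillmentText" => "Your haircut is booked for Friday.",
  "fulfillmentMessages" => [],
  "outputContexts" => [],
  "payload" => %{},
  "source" => ""
}

# e.g. Jason.encode!(webhook_reply) inside a Plug/Phoenix webhook endpoint.
```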

Represents an annotated conversation dataset. A ConversationDataset can have multiple AnnotatedConversationDatasets, each of which represents one result from one annotation task. An AnnotatedConversationDataset can only be generated from an annotation task, which is triggered by LabelConversation.

Response message for [Documents.AutoApproveSmartMessagingEntries].

The response message for EntityTypes.BatchUpdateEntityTypes.

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide.

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide.

Events allow for matching intents by event name instead of the natural language input. For instance, an input event named welcome_event can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".

Attributes

  • languageCode (type: String.t, default: nil) - Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
  • name (type: String.t, default: nil) - Required. The unique identifier of the event.
  • parameters (type: map(), default: nil) - The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs. MapKey type: string. MapKey value: parameter name. MapValue type: if the parameter's entity type is a composite entity, a map; otherwise a string or number, depending on the parameter value type. MapValue value: if the parameter's entity type is a composite entity, a map from composite entity property names to property values; otherwise, the parameter value.

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide.

Corresponds to the Response field in the Dialogflow console.

The basic card message. Useful for displaying information.

The button object that appears at the bottom of a card.

The card for presenting a carousel of options to select from.

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

The card for presenting a list of options to select from.

Rich Business Messaging (RBM) media displayed in cards. The following media types are currently supported: Image types: image/jpeg, image/jpg, image/gif, image/png. Video types: video/h263, video/m4v, video/mp4, video/mpeg, video/mpeg4, video/webm.

Carousel Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. If you want to show a single card with more control over the layout, please use RbmStandaloneCard instead.

Standalone Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. You can group multiple rich cards into one using RbmCarouselCard but carousel cards will give you less control over the card layout.

Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.

Opens the user's default dialer app with the specified phone number but does not dial automatically.

Opens the user's default web browser app to the specified URI. If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.

Opens the device's location chooser so the user can pick a location to send back to the agent.

Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.

Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select/click a predefined response or perform an action (like opening a web URI).

Rich Business Messaging (RBM) text response with suggestions.

Additional info about the select item for when it is triggered in a dialog.

The simple response message containing speech or text.

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway. Telephony Gateway takes the synthesizer settings from DetectIntentResponse.output_audio_config which can either be set at request-level or can come from the agent-level synthesizer config.

Represents an example that the agent is trained on.

Represents the result of querying a Knowledge base.

Metadata in google::longrunning::Operation for Knowledge operations.

The response for ConversationDatasets.LabelConversation.

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Represents the result of conversational query or event processing.

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide.

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error: - Use "" for empty strings - Use {} or null for empty objects - Use [] or null for empty arrays For more information, see the Protocol Buffers Language Guide.

Metadata associated with the long running operation for Versions.CreateVersion.

The response message for Operations.ListOperations.

This resource represents a long-running operation that is the result of a network API call.

A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for Empty is an empty JSON object, {}.

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.

An object representing a latitude/longitude pair. This is expressed as a pair of doubles representing degrees latitude and degrees longitude. Unless specified otherwise, this must conform to the WGS84 standard. Values must be within normalized ranges.