google_api_dialogflow v0.16.0
GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2InputAudioConfig
Instructs the speech recognizer how to process the audio content.
Attributes

- `audioEncoding` (*type:* `String.t`, *default:* `nil`) - Required. Audio encoding of the audio content to process.
- `languageCode` (*type:* `String.t`, *default:* `nil`) - Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
- `modelVariant` (*type:* `String.t`, *default:* `nil`) - Optional. Which variant of the Speech model to use.
- `phraseHints` (*type:* `list(String.t)`, *default:* `nil`) - Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.
- `sampleRateHertz` (*type:* `integer()`, *default:* `nil`) - Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
- `singleUtterance` (*type:* `boolean()`, *default:* `nil`) - Optional. If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects that the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, `InputAudioConfig.single_utterance` takes precedence over `StreamingDetectIntentRequest.single_utterance`.
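Putting the attributes above together, a config struct for a streaming detect-intent request might look like the sketch below. The encoding and language values are illustrative, assuming 16-bit linear PCM audio at 16 kHz; consult the Dialogflow enum documentation for the full set of supported encodings.

```elixir
# A minimal sketch: building an InputAudioConfig struct.
# Field names come from the attribute list above; the specific
# values (LINEAR_16 at 16 kHz, US English) are assumptions.
config = %GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2InputAudioConfig{
  audioEncoding: "AUDIO_ENCODING_LINEAR_16",
  languageCode: "en-US",
  sampleRateHertz: 16_000,
  # Stop recognition after the first detected utterance (streaming only).
  singleUtterance: true
}
```

Because `singleUtterance` is set here, it would take precedence over any `StreamingDetectIntentRequest.single_utterance` value, per the note above.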
Summary

Functions

Unwrap a decoded JSON object into its complex fields.

Types

Functions

Unwrap a decoded JSON object into its complex fields.
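As a usage sketch, these generated model modules implement the Poison decoder protocol, so a JSON payload can be decoded directly into the struct; this assumes the `google_api_dialogflow` and `poison` dependencies are available.

```elixir
# Sketch, assuming the poison dependency: decode a JSON payload
# into the model struct via the `as:` option.
json = ~s({"audioEncoding": "AUDIO_ENCODING_LINEAR_16", "sampleRateHertz": 16000})

config =
  Poison.decode!(json,
    as: %GoogleApi.Dialogflow.V2.Model.GoogleCloudDialogflowV2InputAudioConfig{}
  )
```

Fields absent from the JSON keep their `nil` defaults, matching the attribute list above.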