google_api_speech v0.2.0 GoogleApi.Speech.V1.Model.RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
Attributes
- enableAutomaticPunctuation (boolean()): Optional. If `true`, adds punctuation to recognition result hypotheses. This feature is only available in select languages; setting this for requests in other languages has no effect at all. The default `false` value does not add punctuation to result hypotheses. Note: this is currently offered as an experimental service, complimentary to all users. In the future it may be exclusively available as a premium feature. Defaults to: `null`.
- enableWordTimeOffsets (boolean()): Optional. If `true`, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If `false`, no word-level time offset information is returned. The default is `false`. Defaults to: `null`.
- encoding (String.t): Encoding of audio data sent in all `RecognitionAudio` messages. This field is optional for `FLAC` and `WAV` audio files and required for all other audio formats. For details, see AudioEncoding. Defaults to: `null`. Enum - one of [ENCODING_UNSPECIFIED, LINEAR16, FLAC, MULAW, AMR, AMR_WB, OGG_OPUS, SPEEX_WITH_HEADER_BYTE].
- languageCode (String.t): Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes. Defaults to: `null`.
- maxAlternatives (integer()): Optional. Maximum number of recognition hypotheses to be returned; specifically, the maximum number of `SpeechRecognitionAlternative` messages within each `SpeechRecognitionResult`. The server may return fewer than `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of one; if omitted, a maximum of one is returned. Defaults to: `null`.
- model (String.t): Optional. Which model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, one is auto-selected based on the other parameters in the RecognitionConfig. Defaults to: `null`.

  | Model | Description |
  | --- | --- |
  | `command_and_search` | Best for short queries such as voice commands or voice search. |
  | `phone_call` | Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate). |
  | `video` | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate. |
  | `default` | Best for audio that does not fit one of the specific audio models, for example long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate. |

- profanityFilter (boolean()): Optional. If set to `true`, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to `false` or omitted, profanities won't be filtered out. Defaults to: `null`.
- sampleRateHertz (integer()): Sample rate in hertz of the audio data sent in all `RecognitionAudio` messages. Valid values are 8000-48000; 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for `FLAC` and `WAV` audio files and required for all other audio formats. For details, see AudioEncoding. Defaults to: `null`.
- speechContexts ([SpeechContext]): Optional. An array of SpeechContext: a means to provide context to assist the speech recognition. For more information, see Phrase Hints. Defaults to: `null`.
- useEnhanced (boolean()): Optional. Set to `true` to use an enhanced model for speech recognition. If `use_enhanced` is set to `true` and the `model` field is not set, then an appropriate enhanced model is chosen if: 1. the project is eligible for requesting enhanced models, and 2. an enhanced model exists for the audio. If `use_enhanced` is `true` and an enhanced version of the specified model does not exist, the speech is recognized using the standard version of the specified model. Enhanced speech models require that you opt in to data logging using the instructions in the documentation; if you set `use_enhanced` to `true` and you have not enabled audio logging, you will receive an error. Defaults to: `null`.
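For illustration, here is a minimal sketch of building a config for a 16 kHz FLAC recording of US English speech. The field names come from the `t()` struct below; the values and the phrase hint are illustrative assumptions, not defaults:

```elixir
# Minimal sketch (illustrative values): US English speech in a 16 kHz FLAC file,
# requesting word-level timestamps and a single top hypothesis.
config = %GoogleApi.Speech.V1.Model.RecognitionConfig{
  encoding: "FLAC",
  sampleRateHertz: 16_000,
  languageCode: "en-US",
  enableWordTimeOffsets: true,
  maxAlternatives: 1,
  # Phrase hints, assuming SpeechContext exposes a `phrases` field as in the
  # underlying Speech API.
  speechContexts: [%GoogleApi.Speech.V1.Model.SpeechContext{phrases: ["weather"]}]
}
```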
Summary
Functions
decode(value, options)
Unwrap a decoded JSON object into its complex fields
Types
t()
t() :: %GoogleApi.Speech.V1.Model.RecognitionConfig{
  enableAutomaticPunctuation: any(),
  enableWordTimeOffsets: any(),
  encoding: any(),
  languageCode: any(),
  maxAlternatives: any(),
  model: any(),
  profanityFilter: any(),
  sampleRateHertz: any(),
  speechContexts: [GoogleApi.Speech.V1.Model.SpeechContext.t()],
  useEnhanced: any()
}
Functions
decode(value, options)
Unwrap a decoded JSON object into its complex fields.
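As a sketch of how this decoding is typically exercised (assuming the Poison-based JSON handling these generated clients use; `Poison.decode/2` and its `as:` option are standard Poison, and the JSON payload here is illustrative):

```elixir
# Hypothetical decoding sketch: Poison's `as:` option decodes JSON into the
# struct, and the model's decoder then unwraps complex fields such as
# speechContexts into their own structs.
json = ~s({"languageCode": "en-US", "speechContexts": [{"phrases": ["weather"]}]})

{:ok, config} =
  Poison.decode(json, as: %GoogleApi.Speech.V1.Model.RecognitionConfig{})
```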