google_api_text_to_speech v0.11.0 GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse
The message returned to the client by the SynthesizeSpeech method.
Attributes

- audioConfig (type: GoogleApi.TextToSpeech.V1beta1.Model.AudioConfig.t, default: nil) - The audio metadata of audio_content.
- audioContent (type: String.t, default: nil) - The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, we include the WAV header. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64 (see the handling sketch after this list).
- timepoints (type: list(GoogleApi.TextToSpeech.V1beta1.Model.Timepoint.t), default: nil) - A link between a position in the original request input and a corresponding time in the output audio. It's only supported via <mark> of SSML input.
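The sketch below shows one way a caller might handle these fields once a response struct is in hand. It is illustrative only: the SynthesisExample module and write_audio/2 helper are not part of this library, the audioContent value is assumed to still be the base64 string delivered by the JSON transport, and the Timepoint fields markName and timeSeconds are assumed from the model above.

defmodule SynthesisExample do
  alias GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse

  # Writes the synthesized audio to disk and prints any SSML <mark/> timepoints.
  def write_audio(%SynthesizeSpeechResponse{audioContent: content, timepoints: timepoints}, path) do
    # Over the JSON transport, bytes fields arrive base64-encoded (see the note
    # on audioContent above), so decode before writing.
    audio = Base.decode64!(content)

    # For LINEAR16 the WAV header is already included, and containerized
    # encodings (MP3, OGG_OPUS) are complete as-is, so the bytes can be
    # written out directly.
    File.write!(path, audio)

    # Each timepoint (assumed fields: markName, timeSeconds) ties an SSML mark
    # back to an offset in the output audio.
    Enum.each(timepoints || [], fn tp ->
      IO.puts("mark #{tp.markName} at #{tp.timeSeconds}s")
    end)
  end
end

A caller that already holds a %SynthesizeSpeechResponse{} (for example, one returned by a V1beta1 synthesis call) could pass it straight to SynthesisExample.write_audio(response, "out.wav").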
Summary
Functions
decode(value, options)
Unwrap a decoded JSON object into its complex fields.
Types
Specs
t() :: %GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse{
  audioConfig: GoogleApi.TextToSpeech.V1beta1.Model.AudioConfig.t(),
  audioContent: String.t(),
  timepoints: [GoogleApi.TextToSpeech.V1beta1.Model.Timepoint.t()]
}
Functions
decode(value, options)
Specs
decode(struct(), keyword()) :: struct()
Unwrap a decoded JSON object into its complex fields.
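As an illustration of what that unwrapping means for this model (a sketch only, not this library's implementation): after a plain JSON decode, audioConfig and timepoints may still be bare string-keyed maps, and the hypothetical helper below rebuilds them as their model structs so the t() typespec above holds.

defmodule UnwrapSketch do
  alias GoogleApi.TextToSpeech.V1beta1.Model.{AudioConfig, SynthesizeSpeechResponse, Timepoint}

  # Hypothetical helper, not this library's API: convert bare string-keyed maps
  # (as produced by a plain JSON decode) into the nested model structs.
  def unwrap(%SynthesizeSpeechResponse{} = resp) do
    %SynthesizeSpeechResponse{
      resp
      | audioConfig: to_struct(AudioConfig, resp.audioConfig),
        timepoints: Enum.map(resp.timepoints || [], &to_struct(Timepoint, &1))
    }
  end

  defp to_struct(_mod, nil), do: nil

  defp to_struct(mod, map) do
    # The model modules define these field atoms, so to_existing_atom is safe here.
    struct(mod, Map.new(map, fn {k, v} -> {String.to_existing_atom(k), v} end))
  end
end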