GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse (google_api_text_to_speech v0.17.0)
The message returned to the client by the SynthesizeSpeech method.
Attributes

- `audioConfig` (*type:* `GoogleApi.TextToSpeech.V1beta1.Model.AudioConfig.t`, *default:* `nil`) - The audio metadata of `audio_content`.
- `audioContent` (*type:* `String.t`, *default:* `nil`) - The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, we include the WAV header. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64. A short sketch of handling this field follows this list.
- `timepoints` (*type:* `list(GoogleApi.TextToSpeech.V1beta1.Model.Timepoint.t)`, *default:* `nil`) - A link between a position in the original request input and a corresponding time in the output audio. It's only supported via `<mark>` of SSML input.
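Because `audioContent` is base64-encoded in JSON responses, a common step after a successful synthesize call is decoding it and writing the raw bytes (container header included) to a file. A minimal sketch, assuming `response` is a `%SynthesizeSpeechResponse{}` you already obtained; the `MyApp.Speech` module, the `write_audio/2` helper, and the output path are illustrative, not part of this library:

```elixir
defmodule MyApp.Speech do
  alias GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse

  # Decode the base64-encoded audio_content and write the raw bytes to disk.
  # The bytes already include any container header (e.g. the WAV header for LINEAR16).
  def write_audio(%SynthesizeSpeechResponse{audioContent: base64}, path)
      when is_binary(base64) do
    File.write!(path, Base.decode64!(base64))
  end
end

# Usage (illustrative): MyApp.Speech.write_audio(response, "output.mp3")
```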
Summary
Functions
decode(value, options)
Unwrap a decoded JSON object into its complex fields.
Types
@type t() :: %GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse{
        audioConfig: GoogleApi.TextToSpeech.V1beta1.Model.AudioConfig.t() | nil,
        audioContent: String.t() | nil,
        timepoints: [GoogleApi.TextToSpeech.V1beta1.Model.Timepoint.t()] | nil
      }
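Every field defaults to `nil`, so callers typically guard before use. A short sketch of reading the timepoints field named in the type above; the `Timepoint` field names (`markName`, `timeSeconds`) come from the wider Text-to-Speech API and are an assumption here, as is the `MyApp.Speech.Timepoints` module:

```elixir
defmodule MyApp.Speech.Timepoints do
  alias GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse

  # Return {mark_name, seconds} pairs for each SSML <mark> that was timed.
  # timepoints is nil unless timepointing was requested, so fall back to an empty list.
  def mark_offsets(%SynthesizeSpeechResponse{timepoints: timepoints}) do
    for tp <- timepoints || [], do: {tp.markName, tp.timeSeconds}
  end
end
```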
Functions
decode(value, options)

Unwrap a decoded JSON object into its complex fields.
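In normal use the generated API functions decode the response body for you, but the flow can be sketched directly: the JSON is decoded with Poison into this struct, and `decode/2` then unwraps nested fields such as `audioConfig` and `timepoints` into their own model structs. A minimal sketch, assuming the library's Poison-based decoding; the JSON payload is illustrative:

```elixir
json =
  ~s({"audioContent": "UklGRg==", "timepoints": [{"markName": "m1", "timeSeconds": 0.5}]})

response =
  Poison.decode!(json,
    as: %GoogleApi.TextToSpeech.V1beta1.Model.SynthesizeSpeechResponse{}
  )

# audioContent stays a base64-encoded string; timepoints comes back as a list of
# %GoogleApi.TextToSpeech.V1beta1.Model.Timepoint{} structs once decode/2 unwraps it.
```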