API Reference ex_azure_speech v0.1.0

Modules

Client to authenticate with the Azure Cognitive Services API.

Configuration required to authenticate with the Azure Cognitive Services Speech Pronunciation Assessment API.

Represents an unexpected authentication error.

Returned when the provided API key is not authorized to access the Azure Cognitive Services API.

Provides utility functions for working with binary data.

Defines the state of a websocket connection.

Defines the error types for the Azure Cognitive Services Speech SDK.

Defines the error type for a failed dispatch of a command to a websocket client.

Defines the error type for not being authorized.

Defines the error type for internal SDK errors.

Defines the error type for invalid requests to Azure Services.

Defines the error type for invalid responses from Azure Services.

Defines the error type for a timeout.

Defines the error type for an unknown error.

Defines the error type for a failed websocket connection.

Globally unique identifier (GUID) generator.

Common header names used in the Azure Cognitive Services API.

Represents an audio message to be sent over a socket. The audio itself can be streamed in chunks.

Defines the type of message to be sent.

Message used to inform the Speech service about which SDK is being used and the type of audio it should expect.

Protocol for deserializing JSON responses into valid message structs.

Defines a protocol for building a message to be sent over a socket.

Implements a way to read audio streams with the capability to rewind and restart from a specific offset point.
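
As a rough illustration of the idea only (not this module's actual API), a reader that keeps the chunks it has already consumed can replay them from an arbitrary byte offset, for example after a dropped connection:

```elixir
# Generic sketch of a rewindable chunk buffer; names and structure are illustrative.
defmodule RewindableSketch do
  @doc "Returns the chunks again, starting at the given byte offset."
  def replay_from(chunks, offset) do
    chunks
    |> Enum.reduce({0, []}, fn chunk, {pos, acc} ->
      chunk_end = pos + byte_size(chunk)

      cond do
        # chunk lies entirely before the offset: skip it
        chunk_end <= offset -> {chunk_end, acc}
        # chunk lies entirely after the offset: keep it whole
        pos >= offset -> {chunk_end, [chunk | acc]}
        # chunk straddles the offset: keep only the tail
        true ->
          skip = offset - pos
          <<_::binary-size(skip), rest::binary>> = chunk
          {chunk_end, [rest | acc]}
      end
    end)
    |> elem(1)
    |> Enum.reverse()
  end
end

# RewindableSketch.replay_from(["chunk-1", "chunk-2"], 7) #=> ["chunk-2"]
```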

Represents a message to be sent through the WebSocket.

Configuration module for the Azure Cognitive Services Speech SDK.
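
A minimal configuration sketch; the option names below (`:region`, `:auth_key`, `:language`) are assumptions used for illustration and may not match the keys the SDK actually accepts:

```elixir
# Illustration only: option names are assumed, not confirmed against v0.1.0.
opts = [
  region: "westeurope",                          # Azure region of the Speech resource
  auth_key: System.get_env("AZURE_SPEECH_KEY"),  # Cognitive Services API key
  language: "en-US"                              # recognition language
]
```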

The dimension of the details returned by the speech assessment from the Speech-to-Text service.

This error fires when the Speech Service fails to recognize a speech input.

This error fires when the websocket connection with the Cognitive Services fails to be established.

The grading system to be used for pronunciation assessment.

Defines the granularity of the recognition results.

Message used to set the context of the Speech-to-Text service, such as whether prosody assessment is enabled.

Speech-to-Text Recognizer module, which provides the functionality to recognize speech from audio input.
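
A hypothetical usage sketch: the module path, `recognize_once/2`, and the option names are assumptions made to convey the overall flow (stream audio in, receive a recognition result or an error), not the confirmed v0.1.0 API:

```elixir
# Hypothetical sketch; function and option names are assumptions.
alias ExAzureSpeech.SpeechToText.Recognizer

# Stream a WAV file in 32 KiB chunks.
audio_stream = File.stream!("sample.wav", [], 32_768)

opts = [
  region: "westeurope",
  auth_key: System.get_env("AZURE_SPEECH_KEY"),
  language: "en-US"
]

case Recognizer.recognize_once(audio_stream, opts) do
  {:ok, result} -> IO.inspect(result, label: "recognized")
  {:error, reason} -> IO.inspect(reason, label: "recognition failed")
end
```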

Represents the end of a speech recognition session.

Phrase details from the Speech-To-Text API.

Represents the primary language of the speech.

Overall pronunciation assessment for the Evaluated Speech.

Represents a valid Speech-To-Text API response.

Represents a word in a phrase.

Configurations required to establish a WebSocket connection with the Azure Cognitive Speech Service.

Configures the Speech-to-Text context. The objective of the Speech Context is to provide more data to the Speech-to-Text service so it can better understand the user's speech. It also provides configurations for speech assessment and detailed output analysis.
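
For orientation, a `speech.context` payload with pronunciation assessment enabled looks roughly like the sketch below; the field names are taken from the Azure Speech WebSocket protocol, and whether the SDK exposes exactly these fields is an assumption:

```elixir
# Rough shape of a speech.context payload (field names assumed, not confirmed
# against this SDK's structs).
speech_context = %{
  "phraseDetection" => %{
    "enrichment" => %{
      "pronunciationAssessment" => %{
        "referenceText" => "Hello world",
        "gradingSystem" => "HundredMark",   # see the grading-system module above
        "granularity" => "Phoneme",         # see the granularity module above
        "dimension" => "Comprehensive",     # see the dimension module above
        "enableProsodyAssessment" => true
      }
    }
  },
  "phraseOutput" => %{"format" => "Detailed"}
}
```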

WebSocket connection with the Azure Cognitive Services Speech-to-Text service.