google_api_video_intelligence v0.3.0 API Reference
Modules
API calls for all endpoints tagged Operations
API calls for all endpoints tagged Videos
Handle Tesla connections for GoogleApi.VideoIntelligence.V1
Helper functions for deserializing responses into models
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Video annotation request
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Detected entity from video analysis
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame
Config for EXPLICIT_CONTENT_DETECTION
Video frame level annotation results for explicit content
Config for LABEL_DETECTION
Video frame level annotation results for label detection
Video segment level annotation results for label detection
Config for SHOT_CHANGE_DETECTION
Provides "hints" to the speech recognizer to favor specific words and phrases in the results
Alternative hypotheses (a.k.a. n-best list)
A speech recognition result corresponding to a portion of the audio
Config for SPEECH_TRANSCRIPTION
Annotation progress for a single video
Annotation results for a single video
Video context and/or feature-specific parameters
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`
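Taken together, the request, context, and config entries above describe the JSON body of an annotate-video call. A minimal sketch of that shape as a plain Elixir map (field names follow the REST API's JSON representation; the input URI and the specific config values are illustrative assumptions, not part of this library):

```elixir
# Illustrative annotate-video request body. Field names mirror the REST
# API's JSON representation; "gs://my-bucket/my-video.mp4" is a
# placeholder input URI, not a real resource.
request = %{
  "inputUri" => "gs://my-bucket/my-video.mp4",
  "features" => ["LABEL_DETECTION", "SPEECH_TRANSCRIPTION"],
  "videoContext" => %{
    # Config for LABEL_DETECTION
    "labelDetectionConfig" => %{"labelDetectionMode" => "SHOT_MODE"},
    # Config for SPEECH_TRANSCRIPTION, with "hints" (a speech context)
    # that favor specific words and phrases in the results
    "speechTranscriptionConfig" => %{
      "languageCode" => "en-US",
      "speechContexts" => [%{"phrases" => ["video intelligence"]}]
    }
  }
}
```

When calling through this client, the same shape is carried by the generated request and context model structs and passed as the request body.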
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Detected entity from video analysis
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame
Video frame level annotation results for explicit content
Label annotation
Video frame level annotation results for label detection
Video segment level annotation results for label detection
Alternative hypotheses (a.k.a. n-best list)
A speech recognition result corresponding to a portion of the audio
Annotation progress for a single video
Annotation results for a single video
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Detected entity from video analysis
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame
Video frame level annotation results for explicit content
Label annotation
Video frame level annotation results for label detection
Video segment level annotation results for label detection
Alternative hypotheses (a.k.a. n-best list)
A speech recognition result corresponding to a portion of the audio
Annotation progress for a single video
Annotation results for a single video
Video segment
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service
Detected entity from video analysis
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame
Video frame level annotation results for explicit content
Label annotation
Video frame level annotation results for label detection
Video segment level annotation results for label detection
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1]
Normalized bounding polygon for text (that might not be aligned with the axes). Contains the list of corner points in clockwise order starting from the top-left corner. For example, for a rectangular bounding box, when the text is horizontal it might look like:

    0----1
    |    |
    3----2

When it is rotated clockwise by 180 degrees around the top-left corner, it becomes:

    2----3
    |    |
    1----0

and the vertex order will still be (0, 1, 2, 3). Note that values can be less than 0 or greater than 1 due to trigonometric calculations for the location of the box
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1
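The normalized-vertex convention above can be made concrete with a small sketch: each pixel-space vertex divides its x by the image width and its y by the image height. The `normalize` function and the 640x480 image size below are illustrative assumptions, not part of this library:

```elixir
# Convert pixel-space vertices to the normalized [0, 1] coordinates used
# by a normalized vertex: divide x by the image width, y by the height.
normalize = fn {x, y}, {width, height} ->
  %{"x" => x / width, "y" => y / height}
end

# A horizontal rectangular box, corner points in clockwise order from
# the top-left corner: 0 -> 1 -> 2 -> 3.
pixel_vertices = [{64, 48}, {320, 48}, {320, 240}, {64, 240}]

normalized = Enum.map(pixel_vertices, &normalize.(&1, {640, 480}))
# First vertex: %{"x" => 0.1, "y" => 0.1}
```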
Annotations corresponding to one tracked object
Video frame level annotations for object detection and tracking. This field stores per frame location, time offset, and confidence
Alternative hypotheses (a.k.a. n-best list)
A speech recognition result corresponding to a portion of the audio
Annotations related to one detected OCR text snippet. This will contain the corresponding text, confidence value, and frame level information for each detection
Video frame level annotation results for text annotation (OCR). Contains information regarding timestamp and bounding box locations for the frames containing detected OCR text snippets
Video segment level annotation results for text detection
Annotation progress for a single video
Annotation results for a single video
Video segment
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`
The request message for Operations.CancelOperation
The response message for Operations.ListOperations
This resource represents a long-running operation that is the result of a network API call
A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance:

    service Foo {
      rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
    }

The JSON representation for `Empty` is the empty JSON object `{}`
The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. The error model is designed to be:

- Simple to use and understand for most users
- Flexible enough to meet unexpected needs

# Overview

The `Status` message contains three pieces of data: error code, error message, and error details. The error code should be an enum value of google.rpc.Code, but it may accept additional error codes if needed. The error message should be a developer-facing English message that helps developers understand and resolve the error. If a localized user-facing error message is needed, put the localized message in the error details or localize it in the client. The optional error details may contain arbitrary information about the error. There is a predefined set of error detail types in the package `google.rpc` that can be used for common error conditions.

# Language mapping

The `Status` message is the logical representation of the error model, but it is not necessarily the actual wire format. When the `Status` message is exposed in different client libraries and different wire protocols, it can be mapped differently. For example, it will likely be mapped to some exceptions in Java, but more likely mapped to some error codes in C.

# Other uses

The error model and the `Status` message can be used in a variety of environments, either with or without APIs, to provide a consistent developer experience across different environments. Example uses of this error model include:

- Partial errors. If a service needs to return partial errors to the client, it may embed the `Status` in the normal response to indicate the partial errors.
- Workflow errors. A typical workflow has multiple steps. Each step may have a `Status` message for error reporting.
- Batch operations. If a client uses batch request and batch response, the `Status` message should be used directly inside batch response, one for each error sub-response.
- Asynchronous operations. If an API call embeds asynchronous operation results in its response, the status of those operations should be represented directly using the `Status` message.
- Logging. If some API errors are stored in logs, the message `Status` could be used directly after any stripping needed for security/privacy reasons
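As a concrete sketch of the model: a `Status` carries an integer error code, a developer-facing message, and optional details. The map and the `describe` helper below are illustrative, not part of this library; code 3 is `INVALID_ARGUMENT` in google.rpc.Code, and the message text is made up:

```elixir
# JSON-shaped representation of a google.rpc.Status: an integer error
# code, a developer-facing English message, and optional details.
status = %{
  "code" => 3,
  "message" => "Invalid input URI.",
  "details" => []
}

# A hypothetical helper that renders a Status for a log line.
describe = fn
  %{"code" => 0} -> "OK"
  %{"code" => code, "message" => message} -> "error #{code}: #{message}"
end

describe.(status)
# => "error 3: Invalid input URI."
```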
Helper functions for building Tesla requests