google_api_video_intelligence v0.20.0 GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_VideoAnnotationResults

Annotation results for a single video.

Attributes

  • error (type: GoogleApi.VideoIntelligence.V1.Model.GoogleRpc_Status.t, default: nil) - If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail.
  • explicitAnnotation (type: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_ExplicitContentAnnotation.t, default: nil) - Explicit content annotation.
  • frameLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_LabelAnnotation.t), default: nil) - Label annotations on frame level. There is exactly one element for each unique label.
  • inputUri (type: String.t, default: nil) - Video file location in Google Cloud Storage.
  • objectAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_ObjectTrackingAnnotation.t), default: nil) - Annotations for the list of objects detected and tracked in the video.
  • segment (type: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_VideoSegment.t, default: nil) - Video segment on which the annotation is run.
  • segmentLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_LabelAnnotation.t), default: nil) - Topical label annotations on video level or user specified segment level. There is exactly one element for each unique label.
  • segmentPresenceLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_LabelAnnotation.t), default: nil) - Presence label annotations on video level or user specified segment level. There is exactly one element for each unique label. Compared to the existing topical segment_label_annotations, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
  • shotAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_VideoSegment.t), default: nil) - Shot annotations. Each shot is represented as a video segment.
  • shotLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_LabelAnnotation.t), default: nil) - Topical label annotations on shot level. There is exactly one element for each unique label.
  • shotPresenceLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_LabelAnnotation.t), default: nil) - Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical shot_label_annotations, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
  • speechTranscriptions (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_SpeechTranscription.t), default: nil) - Speech transcription.
  • textAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_TextAnnotation.t), default: nil) - OCR text detection and tracking. Annotations for the list of detected text snippets. Each snippet has a list of frame information associated with it.
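As a sketch of how these fields might be consumed once a response has been decoded — the helper module below is hypothetical, and it assumes `segmentLabelAnnotations` entries carry an `entity` with a `description`, as in the generated LabelAnnotation model:

```elixir
defmodule LabelSummary do
  alias GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_VideoAnnotationResults

  # Hypothetical helper: collect segment-level label descriptions from a
  # decoded VideoAnnotationResults struct. Every field defaults to nil,
  # so guard list fields with `|| []` before traversing them.
  def segment_labels(%GoogleCloudVideointelligenceV1_VideoAnnotationResults{} = results) do
    for label <- results.segmentLabelAnnotations || [],
        do: label.entity.description
  end
end
```

Because `error` may be set per-video even within a successful `AnnotateVideoRequest`, callers would typically check `results.error` before reading the annotation fields.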

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
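A minimal sketch of how `decode/2` is usually exercised indirectly, assuming Poison is the JSON library in use (as in the generated Google API clients, where models implement the `Poison.Decoder` protocol); the JSON payload and bucket path are illustrative:

```elixir
alias GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1_VideoAnnotationResults

# Illustrative payload; a real one comes from the operation response.
json = ~s({"inputUri": "gs://example-bucket/video.mp4"})

# Decoding with `as:` routes nested objects through the model's decode/2,
# unwrapping complex fields into their own structs.
results =
  Poison.decode!(json, as: %GoogleCloudVideointelligenceV1_VideoAnnotationResults{})

results.inputUri
# "gs://example-bucket/video.mp4"
```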