google_api_video_intelligence v0.23.0

GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults
Annotation results for a single video.
Attributes
- `error` (*type:* `GoogleApi.VideoIntelligence.V1.Model.GoogleRpc_Status.t`, *default:* `nil`) - If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail.
- `explicitAnnotation` (*type:* `GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation.t`, *default:* `nil`) - Explicit content annotation.
- `frameLabelAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t)`, *default:* `nil`) - Label annotations on frame level. There is exactly one element for each unique label.
- `inputUri` (*type:* `String.t`, *default:* `nil`) - Video file location in Cloud Storage.
- `logoRecognitionAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LogoRecognitionAnnotation.t)`, *default:* `nil`) - Annotations for the list of logos detected, tracked, and recognized in the video.
- `objectAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_ObjectTrackingAnnotation.t)`, *default:* `nil`) - Annotations for the list of objects detected and tracked in the video.
- `segment` (*type:* `GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_VideoSegment.t`, *default:* `nil`) - Video segment on which the annotation is run.
- `segmentLabelAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t)`, *default:* `nil`) - Topical label annotations on video level or user-specified segment level. There is exactly one element for each unique label.
- `segmentPresenceLabelAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t)`, *default:* `nil`) - Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical `segment_label_annotations`, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets `LabelDetectionConfig.model` to "builtin/latest" in the request.
- `shotAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_VideoSegment.t)`, *default:* `nil`) - Shot annotations. Each shot is represented as a video segment.
- `shotLabelAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t)`, *default:* `nil`) - Topical label annotations on shot level. There is exactly one element for each unique label.
- `shotPresenceLabelAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t)`, *default:* `nil`) - Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical `shot_label_annotations`, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets `LabelDetectionConfig.model` to "builtin/latest" in the request.
- `speechTranscriptions` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription.t)`, *default:* `nil`) - Speech transcription.
- `textAnnotations` (*type:* `list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_TextAnnotation.t)`, *default:* `nil`) - OCR text detection and tracking. Annotations for the list of detected text snippets. Each snippet will have a list of frame information associated with it.
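Because a single `AnnotateVideoRequest` can mix successes and failures across its videos, callers typically check the `error` field of each result before reading the annotation fields. A minimal sketch, using hypothetical sample data shaped like the fields above (plain maps stand in for the generated structs):

```elixir
# Hypothetical per-video results; field names match the attributes above.
results = [
  %{inputUri: "gs://bucket/ok.mp4", error: nil, segmentLabelAnnotations: []},
  %{inputUri: "gs://bucket/bad.mp4", error: %{code: 3, message: "Unsupported codec"}}
]

# Partition into failed and succeeded videos by the error field.
{failed, succeeded} = Enum.split_with(results, fn r -> r.error != nil end)

Enum.each(failed, fn r ->
  IO.puts("#{r.inputUri} failed: #{r.error.message}")
end)
```

Only the results in `succeeded` are safe to read annotation fields from; a `nil` `error` indicates that annotation completed for that video.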
Summary

Functions

decode(value, options) - Unwrap a decoded JSON object into its complex fields.
Types

t()
```elixir
@type t() :: %GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults{
        error: GoogleApi.VideoIntelligence.V1.Model.GoogleRpc_Status.t(),
        explicitAnnotation:
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation.t(),
        frameLabelAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t()
        ],
        inputUri: String.t(),
        logoRecognitionAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LogoRecognitionAnnotation.t()
        ],
        objectAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_ObjectTrackingAnnotation.t()
        ],
        segment: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_VideoSegment.t(),
        segmentLabelAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t()
        ],
        segmentPresenceLabelAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t()
        ],
        shotAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_VideoSegment.t()
        ],
        shotLabelAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t()
        ],
        shotPresenceLabelAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation.t()
        ],
        speechTranscriptions: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription.t()
        ],
        textAnnotations: [
          GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p1beta1_TextAnnotation.t()
        ]
      }
```
Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.
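To illustrate what "unwrap a decoded JSON object into its complex fields" means: after generic JSON decoding you have nested plain maps, and `decode/2` converts the known complex fields into their model structs. A minimal sketch of the idea, with stand-in modules (`Sketch.Status`, `Sketch.Results` are hypothetical, not the generated ones):

```elixir
defmodule Sketch.Status do
  # Stand-in for GoogleRpc_Status: an error code and a message.
  defstruct [:code, :message]
end

defmodule Sketch.Results do
  # Stand-in for VideoAnnotationResults with just two of its fields.
  defstruct [:inputUri, :error]

  # Hypothetical unwrap: turn the "error" sub-map into a Status struct.
  def decode(%{} = map) do
    %__MODULE__{
      inputUri: map["inputUri"],
      error:
        map["error"] &&
          struct(Sketch.Status, code: map["error"]["code"], message: map["error"]["message"])
    }
  end
end

decoded =
  Sketch.Results.decode(%{
    "inputUri" => "gs://bucket/video.mp4",
    "error" => %{"code" => 3, "message" => "Unsupported codec"}
  })
# decoded.error is now a %Sketch.Status{} struct rather than a plain map
```

The generated `decode/2` does the same kind of recursive unwrapping for every complex field listed in the attributes above.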