google_api_video_intelligence v0.13.0 GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoAnnotationResults

Annotation results for a single video.

Attributes

  • error (type: GoogleApi.VideoIntelligence.V1.Model.GoogleRpc_Status.t, default: nil) - If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail.
  • explicitAnnotation (type: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ExplicitContentAnnotation.t, default: nil) - Explicit content annotation.
  • frameLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t), default: nil) - Label annotations on frame level. There is exactly one element for each unique label.
  • inputUri (type: String.t, default: nil) - Video file location in Google Cloud Storage.
  • logoRecognitionAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LogoRecognitionAnnotation.t), default: nil) - Annotations for logos detected, tracked, and recognized in the video.
  • objectAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ObjectTrackingAnnotation.t), default: nil) - Annotations for objects detected and tracked in the video.
  • segment (type: GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoSegment.t, default: nil) - Video segment on which the annotation is run.
  • segmentLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t), default: nil) - Topical label annotations on the video level or user-specified segment level. There is exactly one element for each unique label.
  • segmentPresenceLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t), default: nil) - Presence label annotations on the video level or user-specified segment level. There is exactly one element for each unique label.
  • shotAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoSegment.t), default: nil) - Shot annotations. Each shot is represented as a video segment.
  • shotLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t), default: nil) - Topical label annotations on shot level. There is exactly one element for each unique label.
  • shotPresenceLabelAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t), default: nil) - Presence label annotations on shot level. There is exactly one element for each unique label.
  • speechTranscriptions (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_SpeechTranscription.t), default: nil) - Speech transcription.
  • textAnnotations (type: list(GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_TextAnnotation.t), default: nil) - OCR text detection and tracking. Annotations for detected text snippets; each snippet has a list of associated frame information.
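The fields above can be consumed with ordinary pattern matching. A minimal sketch: the `results` map below is a stand-in with made-up data (the URI and label values are hypothetical), using a plain map in place of a real decoded struct, but the field names mirror the attributes listed here.

```elixir
# Hypothetical stand-in for a decoded VideoAnnotationResults struct.
# Field names mirror the attributes above; the data is invented.
results = %{
  error: nil,
  inputUri: "gs://my-bucket/video.mp4",
  segmentLabelAnnotations: [
    %{entity: %{description: "dog"}},
    %{entity: %{description: "park"}}
  ]
}

# Per the `error` attribute, individual videos in one request may fail,
# so check it before reading the annotation fields.
labels =
  case results do
    %{error: nil, segmentLabelAnnotations: annotations} ->
      Enum.map(annotations, & &1.entity.description)

    %{error: error} ->
      raise "annotation failed: #{inspect(error)}"
  end

IO.inspect(labels)
```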

Summary

Functions

Unwrap a decoded JSON object into its complex fields.

Types

t()
t() ::
  %GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoAnnotationResults{
    error: GoogleApi.VideoIntelligence.V1.Model.GoogleRpc_Status.t(),
    explicitAnnotation:
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ExplicitContentAnnotation.t(),
    frameLabelAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t()
    ],
    inputUri: String.t(),
    logoRecognitionAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LogoRecognitionAnnotation.t()
    ],
    objectAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_ObjectTrackingAnnotation.t()
    ],
    segment:
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoSegment.t(),
    segmentLabelAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t()
    ],
    segmentPresenceLabelAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t()
    ],
    shotAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_VideoSegment.t()
    ],
    shotLabelAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t()
    ],
    shotPresenceLabelAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_LabelAnnotation.t()
    ],
    speechTranscriptions: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_SpeechTranscription.t()
    ],
    textAnnotations: [
      GoogleApi.VideoIntelligence.V1.Model.GoogleCloudVideointelligenceV1p3beta1_TextAnnotation.t()
    ]
  }

Functions

decode(value, options)
decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
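`decode/2` is called by the library's deserializer, not typically by application code. A toy sketch of the idea it implements, using hypothetical `Toy.*` structs (not part of this library) in place of the generated models: after raw JSON is decoded into a flat struct, nested plain maps are converted into their corresponding model structs.

```elixir
# Toy stand-ins for generated model structs; names are invented for illustration.
defmodule Toy.Status do
  defstruct [:code, :message]
end

defmodule Toy.Results do
  defstruct [:error, :inputUri]

  # Mirrors the role of decode/2: unwrap nested plain maps
  # into their complex (struct) fields.
  def decode(%__MODULE__{error: %{} = err} = value, _options) do
    %{value | error: struct(Toy.Status, err)}
  end

  def decode(value, _options), do: value
end

raw = %Toy.Results{
  error: %{code: 3, message: "invalid input"},
  inputUri: "gs://bucket/v.mp4"
}

decoded = Toy.Results.decode(raw, [])
IO.inspect(decoded.error)
```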