Class: Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1VideoAnnotationResults

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/videointelligence_v1p1beta1/classes.rb,
generated/google/apis/videointelligence_v1p1beta1/representations.rb

Overview

Annotation results for a single video.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudVideointelligenceV1VideoAnnotationResults

Returns a new instance of GoogleCloudVideointelligenceV1VideoAnnotationResults.



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1057

def initialize(**args)
   update!(**args)
end
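
The constructor forwards its keyword arguments to #update!, so an instance can be seeded with any of the attributes below. A minimal sketch (the Cloud Storage path is illustrative):

require 'google/apis/videointelligence_v1p1beta1'

vi = Google::Apis::VideointelligenceV1p1beta1
results = vi::GoogleCloudVideointelligenceV1VideoAnnotationResults.new(
  input_uri: 'gs://example-bucket/video.mp4' # illustrative path, not a real bucket
)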

Instance Attribute Details

#error ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleRpcStatus

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. Corresponds to the JSON property error



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 960

def error
  @error
end
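
A sketch of checking for a failed annotation, given a populated results object and assuming GoogleRpcStatus exposes the three pieces of data described above as code, message, and details:

if results.error
  warn "Annotation failed: #{results.error.code} #{results.error.message}"
  # results.error.details, when present, carries machine-readable error payloads
end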

#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1ExplicitContentAnnotation

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame. Corresponds to the JSON property explicitAnnotation



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 967

def explicit_annotation
  @explicit_annotation
end
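
A sketch of walking the per-frame results, assuming the annotation exposes a frames array whose elements carry time_offset and pornography_likelihood:

(results.explicit_annotation&.frames || []).each do |frame|
  puts "#{frame.time_offset}: #{frame.pornography_likelihood}" # assumed field names
end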

#face_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1FaceAnnotation&gt;

Deprecated. Please use face_detection_annotations instead. Corresponds to the JSON property faceAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 972

def face_annotations
  @face_annotations
end

#face_detection_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1FaceDetectionAnnotation&gt;

Face detection annotations. Corresponds to the JSON property faceDetectionAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 977

def face_detection_annotations
  @face_detection_annotations
end

#frame_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1LabelAnnotation&gt;

Label annotations on frame level. There is exactly one element for each unique label. Corresponds to the JSON property frameLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 983

def frame_label_annotations
  @frame_label_annotations
end

#input_uri ⇒ String

Video file location in Cloud Storage. Corresponds to the JSON property inputUri

Returns:

  • (String)


# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 988

def input_uri
  @input_uri
end

#logo_recognition_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1LogoRecognitionAnnotation&gt;

Annotations for the list of logos detected, tracked, and recognized in the video. Corresponds to the JSON property logoRecognitionAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 993

def logo_recognition_annotations
  @logo_recognition_annotations
end

#object_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation&gt;

Annotations for the list of objects detected and tracked in the video. Corresponds to the JSON property objectAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 998

def object_annotations
  @object_annotations
end

#person_detection_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1PersonDetectionAnnotation&gt;

Person detection annotations. Corresponds to the JSON property personDetectionAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1003

def person_detection_annotations
  @person_detection_annotations
end

#segment ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1VideoSegment

Video segment. Corresponds to the JSON property segment



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1008

def segment
  @segment
end

#segment_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1LabelAnnotation&gt;

Topical label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Corresponds to the JSON property segmentLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1014

def segment_label_annotations
  @segment_label_annotations
end
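
A sketch of reading video-level labels, assuming each LabelAnnotation carries an entity with a description and segments with confidence scores:

(results.segment_label_annotations || []).each do |label|
  top = label.segments&.map(&:confidence)&.compact&.max # assumed LabelSegment fields
  puts "#{label.entity&.description}: #{top}"
end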

#segment_presence_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1LabelAnnotation&gt;

Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical segment_label_annotations, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request. Corresponds to the JSON property segmentPresenceLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1024

def segment_presence_label_annotations
  @segment_presence_label_annotations
end
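
Because these labels are only returned when the request sets LabelDetectionConfig.model to "builtin/latest", a request-side sketch might look like the following (the v1p1beta1 request class names are assumed here; verify them against your client version):

vi = Google::Apis::VideointelligenceV1p1beta1
context = vi::GoogleCloudVideointelligenceV1p1beta1VideoContext.new(
  label_detection_config: vi::GoogleCloudVideointelligenceV1p1beta1LabelDetectionConfig.new(
    model: 'builtin/latest' # required for presence labels per the description above
  )
)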

#shot_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1VideoSegment&gt;

Shot annotations. Each shot is represented as a video segment. Corresponds to the JSON property shotAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1029

def shot_annotations
  @shot_annotations
end
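
A sketch of listing shot boundaries, assuming each VideoSegment exposes start_time_offset and end_time_offset:

(results.shot_annotations || []).each_with_index do |shot, i|
  puts "Shot #{i}: #{shot.start_time_offset} -> #{shot.end_time_offset}" # assumed field names
end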

#shot_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1LabelAnnotation&gt;

Topical label annotations on shot level. There is exactly one element for each unique label. Corresponds to the JSON property shotLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1035

def shot_label_annotations
  @shot_label_annotations
end

#shot_presence_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1LabelAnnotation&gt;

Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical shot_label_annotations, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request. Corresponds to the JSON property shotPresenceLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1044

def shot_presence_label_annotations
  @shot_presence_label_annotations
end

#speech_transcriptions ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1SpeechTranscription&gt;

Speech transcription. Corresponds to the JSON property speechTranscriptions



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1049

def speech_transcriptions
  @speech_transcriptions
end
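
A sketch of printing the top transcript per result, assuming each SpeechTranscription exposes alternatives whose elements carry transcript and confidence:

(results.speech_transcriptions || []).each do |transcription|
  best = transcription.alternatives&.first # assumed field name
  puts "#{best.transcript} (confidence: #{best.confidence})" if best
end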

#text_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1TextAnnotation&gt;

OCR text detection and tracking. Annotations for the list of detected text snippets. Each snippet has a list of frame information associated with it. Corresponds to the JSON property textAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1055

def text_annotations
  @text_annotations
end
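
A sketch of reading detected text snippets and counting their frames, assuming each TextAnnotation exposes text and segments whose elements carry frames:

(results.text_annotations || []).each do |text|
  frames = (text.segments || []).sum { |s| s.frames&.size.to_i } # assumed TextSegment fields
  puts "#{text.text}: #{frames} frame(s)"
end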

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 1062

def update!(**args)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @face_annotations = args[:face_annotations] if args.key?(:face_annotations)
  @face_detection_annotations = args[:face_detection_annotations] if args.key?(:face_detection_annotations)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @logo_recognition_annotations = args[:logo_recognition_annotations] if args.key?(:logo_recognition_annotations)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @person_detection_annotations = args[:person_detection_annotations] if args.key?(:person_detection_annotations)
  @segment = args[:segment] if args.key?(:segment)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @segment_presence_label_annotations = args[:segment_presence_label_annotations] if args.key?(:segment_presence_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @shot_presence_label_annotations = args[:shot_presence_label_annotations] if args.key?(:shot_presence_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
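
Only the keys present in args are applied, so attributes that are not mentioned keep their current values. For example (the URI is illustrative):

results.update!(input_uri: 'gs://example-bucket/video.mp4')
results.input_uri # => "gs://example-bucket/video.mp4"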