Class: Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2VideoAnnotationResults

Inherits: Object
Includes: Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/videointelligence_v1p3beta1/classes.rb,
lib/google/apis/videointelligence_v1p3beta1/representations.rb

Overview

Annotation results for a single video.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudVideointelligenceV1beta2VideoAnnotationResults

Returns a new instance of GoogleCloudVideointelligenceV1beta2VideoAnnotationResults.



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2201

def initialize(**args)
   update!(**args)
end
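
A minimal construction sketch, assuming the object is built by hand rather than deserialized from an API response by the client library (the attribute values below are placeholders):

results = Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2VideoAnnotationResults.new(
  input_uri: 'gs://example-bucket/example-video.mp4',  # hypothetical Cloud Storage path
  shot_annotations: []
)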

Instance Attribute Details

#error ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleRpcStatus

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide. Corresponds to the JSON property error



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2104

def error
  @error
end
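
A hedged sketch of checking whether annotation failed for this video, assuming results is an instance of this class and that GoogleRpcStatus exposes code and message readers (mirroring google.rpc.Status):

if results.error
  # Processing failed for this video; report the RPC status.
  warn "Annotation failed (code #{results.error.code}): #{results.error.message}"
end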

#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2ExplicitContentAnnotation

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame. Corresponds to the JSON property explicitAnnotation



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2111

def explicit_annotation
  @explicit_annotation
end

#face_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2FaceAnnotation&gt;

Deprecated. Please use face_detection_annotations instead. Corresponds to the JSON property faceAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2116

def face_annotations
  @face_annotations
end

#face_detection_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2FaceDetectionAnnotation&gt;

Face detection annotations. Corresponds to the JSON property faceDetectionAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2121

def face_detection_annotations
  @face_detection_annotations
end

#frame_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation&gt;

Label annotations on frame level. There is exactly one element for each unique label. Corresponds to the JSON property frameLabelAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2127

def frame_label_annotations
  @frame_label_annotations
end

#input_uri ⇒ String

Video file location in Cloud Storage. Corresponds to the JSON property inputUri

Returns:

  • (String)


# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2132

def input_uri
  @input_uri
end

#logo_recognition_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2LogoRecognitionAnnotation&gt;

Annotations for logos detected, tracked, and recognized in the video. Corresponds to the JSON property logoRecognitionAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2137

def logo_recognition_annotations
  @logo_recognition_annotations
end

#object_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2ObjectTrackingAnnotation&gt;

Annotations for objects detected and tracked in the video. Corresponds to the JSON property objectAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2142

def object_annotations
  @object_annotations
end
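
A sketch of listing tracked objects from results.object_annotations; the entity.description and confidence readers are assumptions about the companion ObjectTrackingAnnotation class, not documented on this page:

(results.object_annotations || []).each do |obj|
  # Each annotation describes one tracked object and its detection confidence.
  puts "#{obj.entity&.description}: #{obj.confidence}"
end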

#person_detection_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2PersonDetectionAnnotation&gt;

Person detection annotations. Corresponds to the JSON property personDetectionAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2147

def person_detection_annotations
  @person_detection_annotations
end

#segment ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2VideoSegment

Video segment. Corresponds to the JSON property segment



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2152

def segment
  @segment
end

#segment_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation&gt;

Topical label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Corresponds to the JSON property segmentLabelAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2158

def segment_label_annotations
  @segment_label_annotations
end
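
A sketch of reading video-level labels, assuming each LabelAnnotation exposes an entity (with description) and a segments array whose elements carry a confidence; these nested field names are assumptions about the companion classes:

(results.segment_label_annotations || []).each do |label|
  # Report the best confidence seen across the segments for this label.
  best = (label.segments || []).map(&:confidence).compact.max
  puts "#{label.entity&.description} (confidence: #{best})"
end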

#segment_presence_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation&gt;

Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical segment_label_annotations, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request. Corresponds to the JSON property segmentPresenceLabelAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2168

def segment_presence_label_annotations
  @segment_presence_label_annotations
end

#shot_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2VideoSegment&gt;

Shot annotations. Each shot is represented as a video segment. Corresponds to the JSON property shotAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2173

def shot_annotations
  @shot_annotations
end
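
A sketch of printing shot boundaries, assuming each VideoSegment exposes start_time_offset and end_time_offset readers (duration strings in the JSON representation):

(results.shot_annotations || []).each_with_index do |shot, i|
  # One entry per detected shot, bounded by start and end offsets.
  puts "Shot #{i}: #{shot.start_time_offset} .. #{shot.end_time_offset}"
end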

#shot_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation&gt;

Topical label annotations on shot level. There is exactly one element for each unique label. Corresponds to the JSON property shotLabelAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2179

def shot_label_annotations
  @shot_label_annotations
end

#shot_presence_label_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2LabelAnnotation&gt;

Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical shot_label_annotations, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request. Corresponds to the JSON property shotPresenceLabelAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2188

def shot_presence_label_annotations
  @shot_presence_label_annotations
end

#speech_transcriptions ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2SpeechTranscription&gt;

Speech transcription. Corresponds to the JSON property speechTranscriptions



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2193

def speech_transcriptions
  @speech_transcriptions
end
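
A sketch of collecting transcript text, assuming each SpeechTranscription exposes an alternatives array whose entries carry a transcript reader; these names mirror the v1beta2 message and are assumptions here:

transcript = (results.speech_transcriptions || [])
  .flat_map { |t| t.alternatives || [] }
  .map(&:transcript)
  .compact
  .join(' ')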

#text_annotations ⇒ Array&lt;Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1beta2TextAnnotation&gt;

OCR text detection and tracking. Annotations for the list of detected text snippets; each has a list of associated frame information. Corresponds to the JSON property textAnnotations



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2199

def text_annotations
  @text_annotations
end
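
A sketch of listing detected OCR snippets, assuming each TextAnnotation exposes a text reader for the recognized string:

(results.text_annotations || []).each { |t| puts t.text }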

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/videointelligence_v1p3beta1/classes.rb', line 2206

def update!(**args)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @face_annotations = args[:face_annotations] if args.key?(:face_annotations)
  @face_detection_annotations = args[:face_detection_annotations] if args.key?(:face_detection_annotations)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @logo_recognition_annotations = args[:logo_recognition_annotations] if args.key?(:logo_recognition_annotations)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @person_detection_annotations = args[:person_detection_annotations] if args.key?(:person_detection_annotations)
  @segment = args[:segment] if args.key?(:segment)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @segment_presence_label_annotations = args[:segment_presence_label_annotations] if args.key?(:segment_presence_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @shot_presence_label_annotations = args[:shot_presence_label_annotations] if args.key?(:shot_presence_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
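
A minimal usage sketch: update! merges only the keys present in args and leaves other attributes untouched, so it can patch an existing instance (the value below is a placeholder):

results.update!(input_uri: 'gs://example-bucket/another-video.mp4')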