Class: Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoAnnotationResults
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in: generated/google/apis/videointelligence_v1/classes.rb,
  generated/google/apis/videointelligence_v1/representations.rb
Overview
Annotation results for a single video.
Instance Attribute Summary
- #error ⇒ Google::Apis::VideointelligenceV1::GoogleRpcStatus
  The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs.
- #explicit_annotation ⇒ Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ExplicitContentAnnotation
  Explicit content annotation (based on per-frame visual signals only).
- #frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
  Label annotations on frame level.
- #input_uri ⇒ String
  Video file location in Cloud Storage.
- #logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LogoRecognitionAnnotation>
  Annotations for list of logos detected, tracked and recognized in video.
- #object_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation>
  Annotations for list of objects detected and tracked in video.
- #segment ⇒ Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment
  Video segment.
- #segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
  Topical label annotations on video level or user-specified segment level.
- #segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
  Presence label annotations on video level or user-specified segment level.
- #shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment>
  Shot annotations.
- #shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
  Topical label annotations on shot level.
- #shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
  Presence label annotations on shot level.
- #speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1SpeechTranscription>
  Speech transcription.
- #text_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextAnnotation>
  OCR text detection and tracking.
Instance Method Summary
- #initialize(**args) ⇒ GoogleCloudVideointelligenceV1VideoAnnotationResults (constructor)
  A new instance of GoogleCloudVideointelligenceV1VideoAnnotationResults.
- #update!(**args) ⇒ Object
  Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1VideoAnnotationResults
Returns a new instance of GoogleCloudVideointelligenceV1VideoAnnotationResults.
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1282

def initialize(**args)
  update!(**args)
end
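In normal use these objects are returned by the Video Intelligence API rather than built by hand, but for illustration an instance can be constructed from keyword arguments matching the attribute names below (the input_uri value here is a placeholder):

require 'google/apis/videointelligence_v1'

# Minimal sketch: construct the results object directly from keyword arguments.
results = Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoAnnotationResults.new(
  input_uri: '/my-bucket/my-video.mp4' # placeholder Cloud Storage path
)
results.input_uri # => "/my-bucket/my-video.mp4"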
Instance Attribute Details
#error ⇒ Google::Apis::VideointelligenceV1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different
programming environments, including REST APIs and RPC APIs. It is used by
gRPC. Each Status message contains three pieces of data: error code, error
message, and error details. You can find out more about this error model and
how to work with it in the API Design Guide.
Corresponds to the JSON property error

# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1198

def error
  @error
end
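A minimal sketch of checking for a per-video failure, assuming GoogleRpcStatus exposes #code and #message as in the google.rpc.Status schema:

# Per-video failures surface through #error rather than an exception.
if results.error
  warn "Annotation failed (code #{results.error.code}): #{results.error.message}"
else
  # safe to read the annotation attributes below
end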
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only).
If no explicit content has been detected in a frame, no annotations are
present for that frame.
Corresponds to the JSON property explicitAnnotation
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1205

def explicit_annotation
  @explicit_annotation
end
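A sketch of reading the flagged frames, assuming GoogleCloudVideointelligenceV1ExplicitContentAnnotation exposes #frames, each with #time_offset and #pornography_likelihood as in the v1 ExplicitContentFrame schema:

# Print each frame that carries an explicit-content likelihood.
results.explicit_annotation&.frames&.each do |frame|
  puts "#{frame.time_offset}: #{frame.pornography_likelihood}"
end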
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
Label annotations on frame level.
There is exactly one element for each unique label.
Corresponds to the JSON property frameLabelAnnotations
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1211

def frame_label_annotations
  @frame_label_annotations
end
#input_uri ⇒ String
Video file location in
Cloud Storage.
Corresponds to the JSON property inputUri
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1217

def input_uri
  @input_uri
end
#logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LogoRecognitionAnnotation>
Annotations for list of logos detected, tracked and recognized in video.
Corresponds to the JSON property logoRecognitionAnnotations
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1222

def logo_recognition_annotations
  @logo_recognition_annotations
end
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1ObjectTrackingAnnotation>
Annotations for list of objects detected and tracked in video.
Corresponds to the JSON property objectAnnotations
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1227

def object_annotations
  @object_annotations
end
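A sketch of summarizing the tracked objects, assuming GoogleCloudVideointelligenceV1ObjectTrackingAnnotation exposes #entity (with #description), #confidence and #frames as in the v1 schema:

# One line per tracked object: label, confidence, and number of tracked frames.
(results.object_annotations || []).each do |object|
  puts "#{object.entity&.description}: confidence #{object.confidence}, #{(object.frames || []).length} frames"
end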
#segment ⇒ Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment
Video segment.
Corresponds to the JSON property segment
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1232

def segment
  @segment
end
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
Topical label annotations on video level or user-specified segment level.
There is exactly one element for each unique label.
Corresponds to the JSON property segmentLabelAnnotations
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1238

def segment_label_annotations
  @segment_label_annotations
end
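A sketch of listing the video-level labels, assuming GoogleCloudVideointelligenceV1LabelAnnotation exposes #entity (with #description) and #segments (each with #confidence) as in the v1 LabelAnnotation schema; the same pattern applies to the frame-level and shot-level label attributes:

# Print each video-level label with its per-segment confidence.
(results.segment_label_annotations || []).each do |label|
  label.segments&.each do |label_segment|
    puts "#{label.entity.description}: #{label_segment.confidence}"
  end
end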
#segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
Presence label annotations on video level or user-specified segment level.
There is exactly one element for each unique label. Compared to the existing
topical segment_label_annotations, this field presents more fine-grained,
segment-level labels detected in video content and is made available only
when the client sets LabelDetectionConfig.model to "builtin/latest" in the
request.
Corresponds to the JSON property segmentPresenceLabelAnnotations

# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1248

def segment_presence_label_annotations
  @segment_presence_label_annotations
end
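Because presence labels are only returned when the request opts in, here is a sketch of the request-side configuration; the companion request classes (GoogleCloudVideointelligenceV1AnnotateVideoRequest, ...VideoContext, ...LabelDetectionConfig) are assumed to follow the v1 schema, so check the generated classes in your gem version for the exact names:

# Request LABEL_DETECTION with the "builtin/latest" model so presence labels are populated.
request = Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1AnnotateVideoRequest.new(
  input_uri: 'gs://my-bucket/my-video.mp4', # placeholder input
  features: ['LABEL_DETECTION'],
  video_context: Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoContext.new(
    label_detection_config: Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelDetectionConfig.new(
      model: 'builtin/latest'
    )
  )
)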
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoSegment>
Shot annotations. Each shot is represented as a video segment.
Corresponds to the JSON property shotAnnotations
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1253

def shot_annotations
  @shot_annotations
end
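A sketch of printing the detected shot boundaries, assuming GoogleCloudVideointelligenceV1VideoSegment exposes #start_time_offset and #end_time_offset (duration strings such as "1.5s" in the JSON representation):

# One line per detected shot.
(results.shot_annotations || []).each_with_index do |shot, i|
  puts "Shot #{i}: #{shot.start_time_offset} -> #{shot.end_time_offset}"
end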
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
Topical label annotations on shot level.
There is exactly one element for each unique label.
Corresponds to the JSON property shotLabelAnnotations
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1259

def shot_label_annotations
  @shot_label_annotations
end
#shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1LabelAnnotation>
Presence label annotations on shot level. There is exactly one element for
each unique label. Compared to the existing topical shot_label_annotations,
this field presents more fine-grained, shot-level labels detected in video
content and is made available only when the client sets
LabelDetectionConfig.model to "builtin/latest" in the request.
Corresponds to the JSON property shotPresenceLabelAnnotations

# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1268

def shot_presence_label_annotations
  @shot_presence_label_annotations
end
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1SpeechTranscription>
Speech transcription.
Corresponds to the JSON property speechTranscriptions
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1273

def speech_transcriptions
  @speech_transcriptions
end
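A sketch of printing the best transcription alternative for each utterance, assuming GoogleCloudVideointelligenceV1SpeechTranscription exposes #alternatives, each carrying #transcript and #confidence as in the v1 SpeechRecognitionAlternative schema:

# Print the highest-ranked alternative per transcription.
(results.speech_transcriptions || []).each do |transcription|
  best = transcription.alternatives&.first
  puts "#{best.transcript} (confidence #{best.confidence})" if best
end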
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1TextAnnotation>
OCR text detection and tracking.
Annotations for list of detected text snippets. Each will have list of
frame information associated with it.
Corresponds to the JSON property textAnnotations
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1280

def text_annotations
  @text_annotations
end
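A sketch of reading the OCR results, assuming GoogleCloudVideointelligenceV1TextAnnotation exposes #text and #segments, each segment wrapping a VideoSegment as in the v1 schema:

# Print each detected text snippet and the segments where it appears.
(results.text_annotations || []).each do |text_annotation|
  puts text_annotation.text
  text_annotation.segments&.each do |text_segment|
    puts "  #{text_segment.segment.start_time_offset} -> #{text_segment.segment.end_time_offset}"
  end
end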
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'generated/google/apis/videointelligence_v1/classes.rb', line 1287

def update!(**args)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @logo_recognition_annotations = args[:logo_recognition_annotations] if args.key?(:logo_recognition_annotations)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment = args[:segment] if args.key?(:segment)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @segment_presence_label_annotations = args[:segment_presence_label_annotations] if args.key?(:segment_presence_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @shot_presence_label_annotations = args[:shot_presence_label_annotations] if args.key?(:shot_presence_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
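For illustration, #update! merges new attribute values into an existing instance; only keys present in args are assigned, everything else is left unchanged (the input_uri value is a placeholder):

results = Google::Apis::VideointelligenceV1::GoogleCloudVideointelligenceV1VideoAnnotationResults.new
results.update!(input_uri: '/my-bucket/my-video.mp4', shot_annotations: [])
results.input_uri # => "/my-bucket/my-video.mp4"
results.error     # => nil (not passed, so unchanged)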