Class: Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
- Inherits: Object
  - Object
  - Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
- Includes:
  - Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - generated/google/apis/videointelligence_v1p3beta1/classes.rb
  - generated/google/apis/videointelligence_v1p3beta1/representations.rb
Overview
Annotation results for a single video.
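As a quick orientation, the following is a minimal, self-contained sketch (a stand-in class, not the real gem object) of how a results object of this shape is populated and read: each attribute is a plain accessor filled from keyword arguments, mirroring the generated initialize/update! pattern documented below. The URI and label values are hypothetical.

```ruby
# Stand-in mimicking the generated VideoAnnotationResults pattern:
# attributes are plain accessors assigned from keyword arguments.
class SketchVideoAnnotationResults
  attr_accessor :input_uri, :segment_label_annotations

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @input_uri = args[:input_uri] if args.key?(:input_uri)
    @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  end
end

# Hypothetical values; real entries are LabelAnnotation objects, not strings.
results = SketchVideoAnnotationResults.new(
  input_uri: 'gs://my-bucket/video.mp4',
  segment_label_annotations: %w[cat animal]
)
puts results.input_uri
puts results.segment_label_annotations.length
```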
Instance Attribute Summary collapse
-
#celebrity_recognition_annotations ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1CelebrityRecognitionAnnotation
Celebrity recognition annotation per video.
-
#error ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs.
-
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only).
-
#face_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1FaceDetectionAnnotation>
Face detection annotations.
-
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Label annotations on frame level.
-
#input_uri ⇒ String
Video file location in Google Cloud Storage.
-
#logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LogoRecognitionAnnotation>
Annotations for the list of logos detected, tracked, and recognized in the video.
-
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>
Annotations for the list of objects detected and tracked in the video.
-
#person_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1PersonDetectionAnnotation>
Person detection annotations.
-
#segment ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment
Video segment.
-
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Topical label annotations on video level or user-specified segment level.
-
#segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Presence label annotations on video level or user-specified segment level.
-
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>
Shot annotations.
-
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Topical label annotations on shot level.
-
#shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Presence label annotations on shot level.
-
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>
Speech transcription.
-
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>
OCR text detection and tracking.
Instance Method Summary collapse
-
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
constructor
A new instance of GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults.
-
#update!(**args) ⇒ Object
Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults
Returns a new instance of GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults.
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4824

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#celebrity_recognition_annotations ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1CelebrityRecognitionAnnotation
Celebrity recognition annotation per video.
Corresponds to the JSON property celebrityRecognitionAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4720

def celebrity_recognition_annotations
  @celebrity_recognition_annotations
end
#error ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Corresponds to the JSON property error
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4730

def error
  @error
end
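The three pieces of data in a Status message can be illustrated with a small stand-in (a plain Struct, not the gem's GoogleRpcStatus class; the code, message, and details values below are hypothetical):

```ruby
# Stand-in for the Status error model: code, message, and details.
Status = Struct.new(:code, :message, :details)

# Hypothetical failed annotation; code 3 is gRPC INVALID_ARGUMENT.
err = Status.new(3, 'Invalid video URI', [{ 'reason' => 'URI_MALFORMED' }])

# Code 0 (OK) means no error; anything else carries a message and details.
warn "annotation failed (#{err.code}): #{err.message}" unless err.code == 0
```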
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only).
If no explicit content has been detected in a frame, no annotations are
present for that frame.
Corresponds to the JSON property explicitAnnotation
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4737

def explicit_annotation
  @explicit_annotation
end
#face_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1FaceDetectionAnnotation>
Face detection annotations.
Corresponds to the JSON property faceDetectionAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4742

def face_detection_annotations
  @face_detection_annotations
end
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Label annotations on frame level.
There is exactly one element for each unique label.
Corresponds to the JSON property frameLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4748

def frame_label_annotations
  @frame_label_annotations
end
#input_uri ⇒ String
Video file location in Google Cloud Storage.
Corresponds to the JSON property inputUri
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4754

def input_uri
  @input_uri
end
#logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LogoRecognitionAnnotation>
Annotations for the list of logos detected, tracked, and recognized in the video.
Corresponds to the JSON property logoRecognitionAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4759

def logo_recognition_annotations
  @logo_recognition_annotations
end
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>
Annotations for the list of objects detected and tracked in the video.
Corresponds to the JSON property objectAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4764

def object_annotations
  @object_annotations
end
#person_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1PersonDetectionAnnotation>
Person detection annotations.
Corresponds to the JSON property personDetectionAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4769

def person_detection_annotations
  @person_detection_annotations
end
#segment ⇒ Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment
Video segment.
Corresponds to the JSON property segment
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4774

def segment
  @segment
end
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Topical label annotations on video level or user-specified segment level.
There is exactly one element for each unique label.
Corresponds to the JSON property segmentLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4780

def segment_label_annotations
  @segment_label_annotations
end
#segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical segment_label_annotations, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
Corresponds to the JSON property segmentPresenceLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4790

def segment_presence_label_annotations
  @segment_presence_label_annotations
end
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>
Shot annotations. Each shot is represented as a video segment.
Corresponds to the JSON property shotAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4795

def shot_annotations
  @shot_annotations
end
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Topical label annotations on shot level.
There is exactly one element for each unique label.
Corresponds to the JSON property shotLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4801

def shot_label_annotations
  @shot_label_annotations
end
#shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>
Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical shot_label_annotations, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
Corresponds to the JSON property shotPresenceLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4810

def shot_presence_label_annotations
  @shot_presence_label_annotations
end
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>
Speech transcription.
Corresponds to the JSON property speechTranscriptions
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4815

def speech_transcriptions
  @speech_transcriptions
end
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p3beta1::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>
OCR text detection and tracking.
Annotations for the list of detected text snippets. Each snippet has a list of associated frame information.
Corresponds to the JSON property textAnnotations
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4822

def text_annotations
  @text_annotations
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'generated/google/apis/videointelligence_v1p3beta1/classes.rb', line 4829

def update!(**args)
  @celebrity_recognition_annotations = args[:celebrity_recognition_annotations] if args.key?(:celebrity_recognition_annotations)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @face_detection_annotations = args[:face_detection_annotations] if args.key?(:face_detection_annotations)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @logo_recognition_annotations = args[:logo_recognition_annotations] if args.key?(:logo_recognition_annotations)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @person_detection_annotations = args[:person_detection_annotations] if args.key?(:person_detection_annotations)
  @segment = args[:segment] if args.key?(:segment)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @segment_presence_label_annotations = args[:segment_presence_label_annotations] if args.key?(:segment_presence_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @shot_presence_label_annotations = args[:shot_presence_label_annotations] if args.key?(:shot_presence_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
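Note that update! assigns only the keys present in args; attributes not mentioned are left untouched, so repeated calls merge rather than reset. A minimal stand-in sketch of that partial-update behavior (the class and values are hypothetical, not the gem's):

```ruby
# Stand-in demonstrating the partial-update semantics of update!:
# only keys present in args are assigned; other attributes are preserved.
class PartialResult
  attr_accessor :input_uri, :segment

  def update!(**args)
    @input_uri = args[:input_uri] if args.key?(:input_uri)
    @segment = args[:segment] if args.key?(:segment)
  end
end

r = PartialResult.new
r.update!(input_uri: 'gs://bucket/a.mp4', segment: 'seg-1')
r.update!(segment: 'seg-2')  # input_uri is preserved, segment is replaced
puts r.input_uri
puts r.segment
```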