Class: Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
- Inherits: Object
  - Object
  - Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
- Includes:
  - Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - generated/google/apis/videointelligence_v1p1beta1/classes.rb
  - generated/google/apis/videointelligence_v1p1beta1/representations.rb
Overview
Annotation results for a single video.
Instance Attribute Summary collapse
-
#error ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs.
-
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only).
-
#face_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1FaceAnnotation>
Deprecated.
-
#face_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1FaceDetectionAnnotation>
Face detection annotations.
-
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Label annotations on frame level.
-
#input_uri ⇒ String
Video file location in Cloud Storage.
-
#logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LogoRecognitionAnnotation>
Annotations for the logos detected, tracked, and recognized in the video.
-
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>
Annotations for the objects detected and tracked in the video.
-
#person_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1PersonDetectionAnnotation>
Person detection annotations.
-
#segment ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment
Video segment.
-
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on video level or user-specified segment level.
-
#segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on video level or user-specified segment level.
-
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>
Shot annotations.
-
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on shot level.
-
#shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on shot level.
-
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>
Speech transcription.
-
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>
OCR text detection and tracking.
Instance Method Summary collapse
-
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
constructor
A new instance of GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults
Returns a new instance of GoogleCloudVideointelligenceV1p1beta1VideoAnnotationResults.
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3749

def initialize(**args)
  update!(**args)
end
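The generated constructor simply forwards its keyword arguments to update!, which copies only the keys that are actually present. A minimal, self-contained sketch of that pattern (the two attributes below stand in for the many real fields, and the real class additionally mixes in Core::Hashable):

```ruby
# Simplified sketch of the generated initialize/update! pattern.
class VideoAnnotationResultsSketch
  attr_accessor :input_uri, :shot_annotations

  def initialize(**args)
    update!(**args)
  end

  # Only keys present in args overwrite existing values, so a
  # partial update! leaves the other attributes untouched.
  def update!(**args)
    @input_uri = args[:input_uri] if args.key?(:input_uri)
    @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  end
end
```

Because assignment is guarded by args.key?, calling update! with a subset of fields never resets the fields you omit.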
Instance Attribute Details
#error ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Corresponds to the JSON property error
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3652

def error
  @error
end
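The error model above can be sketched with a plain Ruby struct; SketchStatus is an illustrative stand-in for GoogleRpcStatus, not the real class. A non-nil error with a non-zero code indicates the annotation request failed:

```ruby
# Illustrative stand-in for the GoogleRpcStatus error model
# described above: an error code, an error message, and details.
SketchStatus = Struct.new(:code, :message, :details)

# Treat a nil error, or gRPC code 0 (OK), as success.
def annotation_failed?(error)
  !error.nil? && error.code != 0
end
```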
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only). If no
explicit content has been detected in a frame, no annotations are present for
that frame.
Corresponds to the JSON property explicitAnnotation
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3659

def explicit_annotation
  @explicit_annotation
end
#face_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1FaceAnnotation>
Deprecated. Please use face_detection_annotations instead.
Corresponds to the JSON property faceAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3664

def face_annotations
  @face_annotations
end
#face_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1FaceDetectionAnnotation>
Face detection annotations.
Corresponds to the JSON property faceDetectionAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3669

def face_detection_annotations
  @face_detection_annotations
end
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Label annotations on frame level. There is exactly one element for each unique
label.
Corresponds to the JSON property frameLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3675

def frame_label_annotations
  @frame_label_annotations
end
#input_uri ⇒ String
Video file location in Cloud Storage.
Corresponds to the JSON property inputUri
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3680

def input_uri
  @input_uri
end
#logo_recognition_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LogoRecognitionAnnotation>
Annotations for the logos detected, tracked, and recognized in the video.
Corresponds to the JSON property logoRecognitionAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3685

def logo_recognition_annotations
  @logo_recognition_annotations
end
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1ObjectTrackingAnnotation>
Annotations for the objects detected and tracked in the video.
Corresponds to the JSON property objectAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3690

def object_annotations
  @object_annotations
end
#person_detection_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1PersonDetectionAnnotation>
Person detection annotations.
Corresponds to the JSON property personDetectionAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3695

def person_detection_annotations
  @person_detection_annotations
end
#segment ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment
Video segment.
Corresponds to the JSON property segment
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3700

def segment
  @segment
end
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on video level or user-specified segment level.
There is exactly one element for each unique label.
Corresponds to the JSON property segmentLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3706

def segment_label_annotations
  @segment_label_annotations
end
#segment_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical segment_label_annotations, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
Corresponds to the JSON property segmentPresenceLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3716

def segment_presence_label_annotations
  @segment_presence_label_annotations
end
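Presence labels are only returned when LabelDetectionConfig.model is set to "builtin/latest". A hedged sketch of a request body that would enable this (the field names follow the API's JSON request shape for video annotation; verify them against the annotate-request reference before relying on them):

```ruby
# Hypothetical annotate-request sketch: presence label annotations
# are only populated when the label detection model is set to
# "builtin/latest". The bucket path is an example, not a real URI.
request_body = {
  input_uri: 'gs://my-bucket/my-video.mp4',
  features: ['LABEL_DETECTION'],
  video_context: {
    label_detection_config: {
      model: 'builtin/latest'
    }
  }
}
```

Without that model setting, segment_presence_label_annotations and shot_presence_label_annotations stay empty even when topical labels are returned.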
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1VideoSegment>
Shot annotations. Each shot is represented as a video segment.
Corresponds to the JSON property shotAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3721

def shot_annotations
  @shot_annotations
end
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Topical label annotations on shot level. There is exactly one element for each
unique label.
Corresponds to the JSON property shotLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3727

def shot_label_annotations
  @shot_label_annotations
end
#shot_presence_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1LabelAnnotation>
Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical shot_label_annotations, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets LabelDetectionConfig.model to "builtin/latest" in the request.
Corresponds to the JSON property shotPresenceLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3736

def shot_presence_label_annotations
  @shot_presence_label_annotations
end
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1SpeechTranscription>
Speech transcription.
Corresponds to the JSON property speechTranscriptions
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3741

def speech_transcriptions
  @speech_transcriptions
end
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p1beta1TextAnnotation>
OCR text detection and tracking. Annotations for the detected text snippets. Each snippet has a list of frame information associated with it.
Corresponds to the JSON property textAnnotations
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3747

def text_annotations
  @text_annotations
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 3754

def update!(**args)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @face_annotations = args[:face_annotations] if args.key?(:face_annotations)
  @face_detection_annotations = args[:face_detection_annotations] if args.key?(:face_detection_annotations)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @logo_recognition_annotations = args[:logo_recognition_annotations] if args.key?(:logo_recognition_annotations)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @person_detection_annotations = args[:person_detection_annotations] if args.key?(:person_detection_annotations)
  @segment = args[:segment] if args.key?(:segment)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @segment_presence_label_annotations = args[:segment_presence_label_annotations] if args.key?(:segment_presence_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @shot_presence_label_annotations = args[:shot_presence_label_annotations] if args.key?(:shot_presence_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end