Class: Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
- Inherits: Object
  - Object
  - Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
- Includes:
- Core::Hashable, Core::JsonObjectSupport
- Defined in:
- generated/google/apis/videointelligence_v1p2beta1/classes.rb,
generated/google/apis/videointelligence_v1p2beta1/representations.rb
Overview
Annotation results for a single video.
Instance Attribute Summary collapse
-
#error ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs.
-
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only).
-
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>
Label annotations on frame level.
-
#input_uri ⇒ String
Video file location in Google Cloud Storage.
-
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>
Annotations for the list of objects detected and tracked in the video.
-
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>
Label annotations on video level or user-specified segment level.
-
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>
Shot annotations.
-
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>
Label annotations on shot level.
-
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>
Speech transcription.
-
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>
OCR text detection and tracking.
Instance Method Summary collapse
-
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
constructor
A new instance of GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults.
-
#update!(**args) ⇒ Object
Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
Returns a new instance of GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults.

# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2546

def initialize(**args)
  update!(**args)
end
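The constructor simply forwards its keyword arguments to update!, which assigns only the keys that were actually passed. A minimal standalone sketch of that pattern (the hypothetical AnnotationResultsSketch class below stands in for the generated class, which additionally mixes in Core::Hashable and Core::JsonObjectSupport):

```ruby
# Minimal sketch of the generated-class constructor pattern:
# initialize forwards **args to update!, which guards every
# assignment with args.key? so omitted keys are never touched.
class AnnotationResultsSketch
  attr_accessor :input_uri, :shot_annotations

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @input_uri = args[:input_uri] if args.key?(:input_uri)
    @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  end
end

results = AnnotationResultsSketch.new(input_uri: 'gs://my-bucket/video.mp4')
results.input_uri        # => "gs://my-bucket/video.mp4"
results.shot_annotations # => nil (key never passed, so never assigned)
```

Because of the args.key? guard, constructing with a subset of attributes leaves the rest untouched rather than overwriting them with nil.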
Instance Attribute Details
#error ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleRpcStatus
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. The error model is designed to be:
- Simple to use and understand for most users
- Flexible enough to meet unexpected needs

Overview

The Status message contains three pieces of data: error code, error message, and error details. The error code should be an enum value of google.rpc.Code, but it may accept additional error codes if needed. The error message should be a developer-facing English message that helps developers understand and resolve the error. If a localized user-facing error message is needed, put the localized message in the error details or localize it in the client. The optional error details may contain arbitrary information about the error. There is a predefined set of error detail types in the package google.rpc that can be used for common error conditions.

Language mapping

The Status message is the logical representation of the error model, but it is not necessarily the actual wire format. When the Status message is exposed in different client libraries and different wire protocols, it can be mapped differently. For example, it will likely be mapped to some exceptions in Java, but more likely mapped to some error codes in C.

Other uses

The error model and the Status message can be used in a variety of environments, either with or without APIs, to provide a consistent developer experience across different environments. Example uses of this error model include:
- Partial errors. If a service needs to return partial errors to the client, it may embed the Status in the normal response to indicate the partial errors.
- Workflow errors. A typical workflow has multiple steps. Each step may have a Status message for error reporting.
- Batch operations. If a client uses batch request and batch response, the Status message should be used directly inside batch response, one for each error sub-response.
- Asynchronous operations. If an API call embeds asynchronous operation results in its response, the status of those operations should be represented directly using the Status message.
- Logging. If some API errors are stored in logs, the message Status could be used directly after any stripping needed for security/privacy reasons.

Corresponds to the JSON property error

# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2491

def error
  @error
end
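In practice, per-video results carry either annotations or an error, so callers typically check error before reading the other attributes. A hedged sketch of that check, using plain hashes as stand-ins for the generated result and GoogleRpcStatus classes (a Status carries code, message, and details; a code of 0 means OK):

```ruby
# Sketch: inspect a per-video annotation result for an error before
# trusting its annotations. Plain hashes stand in for the generated
# classes; field names follow this page (error, input_uri) and the
# google.rpc.Status shape (code, message).
def report(results)
  error = results[:error]
  if error && error[:code] != 0
    "annotation failed: #{error[:message]} (code #{error[:code]})"
  else
    "annotated #{results[:input_uri]}"
  end
end

report({ input_uri: 'gs://my-bucket/video.mp4' })
# => "annotated gs://my-bucket/video.mp4"
report({ error: { code: 3, message: 'Unsupported codec' } })
# => "annotation failed: Unsupported codec (code 3)"
```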
#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only).
If no explicit content has been detected in a frame, no annotations are
present for that frame.
Corresponds to the JSON property explicitAnnotation
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2498

def explicit_annotation
  @explicit_annotation
end
#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>
Label annotations on frame level.
There is exactly one element for each unique label.
Corresponds to the JSON property frameLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2504

def frame_label_annotations
  @frame_label_annotations
end
#input_uri ⇒ String
Video file location in
Google Cloud Storage.
Corresponds to the JSON property inputUri
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2510

def input_uri
  @input_uri
end
#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>
Annotations for the list of objects detected and tracked in the video.
Corresponds to the JSON property objectAnnotations
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2515

def object_annotations
  @object_annotations
end
#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>
Label annotations on video level or user-specified segment level.
There is exactly one element for each unique label.
Corresponds to the JSON property segmentLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2521

def segment_label_annotations
  @segment_label_annotations
end
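Since there is exactly one element per unique label, segment-level annotations are often summarized by label. A hedged sketch of such a walk, with plain hashes standing in for the generated LabelAnnotation/LabelSegment classes (the entity, segments, and confidence names are illustrative, modeled on the Video Intelligence label-annotation shape):

```ruby
# Sketch: summarize segment-level labels as "entity: best confidence".
# Hashes stand in for LabelAnnotation (entity + segments) and for each
# segment's confidence score.
def label_summary(segment_label_annotations)
  segment_label_annotations.map do |label|
    best = label[:segments].map { |s| s[:confidence] }.max
    "#{label[:entity]}: #{best}"
  end
end

labels = [
  { entity: 'dog',  segments: [{ confidence: 0.91 }, { confidence: 0.87 }] },
  { entity: 'park', segments: [{ confidence: 0.64 }] }
]
label_summary(labels) # => ["dog: 0.91", "park: 0.64"]
```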
#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>
Shot annotations. Each shot is represented as a video segment.
Corresponds to the JSON property shotAnnotations
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2526

def shot_annotations
  @shot_annotations
end
#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>
Label annotations on shot level.
There is exactly one element for each unique label.
Corresponds to the JSON property shotLabelAnnotations
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2532

def shot_label_annotations
  @shot_label_annotations
end
#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>
Speech transcription.
Corresponds to the JSON property speechTranscriptions
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2537

def speech_transcriptions
  @speech_transcriptions
end
#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p2beta1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>
OCR text detection and tracking.
Annotations for the list of detected text snippets. Each has a list of
frame information associated with it.
Corresponds to the JSON property textAnnotations
# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2544

def text_annotations
  @text_annotations
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.

# File 'generated/google/apis/videointelligence_v1p2beta1/classes.rb', line 2551

def update!(**args)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
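Note the args.key? guard on every assignment: a key omitted from **args leaves the attribute untouched, while an explicit nil clears it. A standalone sketch of that semantics (the hypothetical PartialUpdateSketch class mimics two of the attributes above):

```ruby
# Sketch of update!'s selective assignment: only keys present in
# **args are written, so partial updates never clobber other fields,
# but an explicit nil does clear a field.
class PartialUpdateSketch
  attr_accessor :input_uri, :speech_transcriptions

  def update!(**args)
    @input_uri = args[:input_uri] if args.key?(:input_uri)
    @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  end
end

obj = PartialUpdateSketch.new
obj.update!(input_uri: 'gs://bucket/a.mp4', speech_transcriptions: [])
obj.update!(speech_transcriptions: nil) # input_uri not mentioned: untouched
obj.input_uri             # => "gs://bucket/a.mp4"
obj.speech_transcriptions # => nil (explicitly cleared)
```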