Class: Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/videointelligence_v1p1beta1/classes.rb,
generated/google/apis/videointelligence_v1p1beta1/representations.rb

Overview

Annotation results for a single video.

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults

Returns a new instance of GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults.



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2555

def initialize(**args)
  update!(**args)
end
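
A minimal usage sketch (the gs:// path and the results variable name are illustrative, not part of the API):

require 'google/apis/videointelligence_v1p1beta1'

# Results are normally built by the client library when parsing an API
# response, but the constructor accepts the same keyword arguments as the
# attributes documented below.
klass = Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults
results = klass.new(input_uri: 'gs://my-bucket/my-video.mp4') # hypothetical Cloud Storage path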

Instance Attribute Details

#error ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleRpcStatus

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. The error model is designed to be:

  • Simple to use and understand for most users
  • Flexible enough to meet unexpected needs

Overview

The Status message contains three pieces of data: error code, error message, and error details. The error code should be an enum value of google.rpc.Code, but it may accept additional error codes if needed. The error message should be a developer-facing English message that helps developers understand and resolve the error. If a localized user-facing error message is needed, put the localized message in the error details or localize it in the client. The optional error details may contain arbitrary information about the error. There is a predefined set of error detail types in the package google.rpc that can be used for common error conditions.

Language mapping

The Status message is the logical representation of the error model, but it is not necessarily the actual wire format. When the Status message is exposed in different client libraries and different wire protocols, it can be mapped differently. For example, it will likely be mapped to some exceptions in Java, but more likely mapped to some error codes in C.

Other uses

The error model and the Status message can be used in a variety of environments, either with or without APIs, to provide a consistent developer experience across different environments. Example uses of this error model include:

  • Partial errors. If a service needs to return partial errors to the client, it may embed the Status in the normal response to indicate the partial errors.
  • Workflow errors. A typical workflow has multiple steps. Each step may have a Status message for error reporting.
  • Batch operations. If a client uses batch request and batch response, the Status message should be used directly inside batch response, one for each error sub-response.
  • Asynchronous operations. If an API call embeds asynchronous operation results in its response, the status of those operations should be represented directly using the Status message.
  • Logging. If some API errors are stored in logs, the message Status could be used directly after any stripping needed for security/privacy reasons.

Corresponds to the JSON property error


# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2500

def error
  @error
end
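
A minimal sketch of inspecting this field, assuming GoogleRpcStatus exposes code and message readers and that results is an instance of this class obtained from an annotation operation:

if results.error
  warn "Annotation failed with code #{results.error.code}: #{results.error.message}"
end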

#explicit_annotation ⇒ Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1ExplicitContentAnnotation

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame. Corresponds to the JSON property explicitAnnotation



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2507

def explicit_annotation
  @explicit_annotation
end
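
For illustration, assuming the ExplicitContentAnnotation exposes per-frame results via frames, each with time_offset and pornography_likelihood (field names based on other versions of this API):

annotation = results.explicit_annotation
annotation&.frames&.each do |frame|
  # Likelihood is an enum string such as "LIKELY"; time_offset is a duration.
  puts "#{frame.time_offset}: #{frame.pornography_likelihood}"
end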

#frame_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>

Label annotations on frame level. There is exactly one element for each unique label. Corresponds to the JSON property frameLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2513

def frame_label_annotations
  @frame_label_annotations
end
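
As an illustration, assuming each LabelAnnotation exposes an entity (with a description) and per-frame results via frames, each with time_offset and confidence (field names based on other versions of this API):

(results.frame_label_annotations || []).each do |label|
  puts label.entity&.description
  label.frames&.each do |frame|
    puts "  #{frame.time_offset} (confidence #{frame.confidence})"
  end
end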

#input_uri ⇒ String

Video file location in Google Cloud Storage. Corresponds to the JSON property inputUri

Returns:

  • (String)


# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2519

def input_uri
  @input_uri
end

#object_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1ObjectTrackingAnnotation>

Annotations for the list of objects detected and tracked in the video. Corresponds to the JSON property objectAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2524

def object_annotations
  @object_annotations
end
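
A sketch of summarizing tracked objects, assuming each ObjectTrackingAnnotation exposes an entity, a confidence score, and tracked frames (field names based on other versions of this API):

(results.object_annotations || []).each do |obj|
  puts "#{obj.entity&.description} (confidence #{obj.confidence}), #{obj.frames&.size} frames"
end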

#segment_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>

Label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Corresponds to the JSON property segmentLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2530

def segment_label_annotations
  @segment_label_annotations
end
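
A sketch of reading these labels, assuming each annotation carries an entity and a list of segments, each with a confidence score:

(results.segment_label_annotations || []).each do |label|
  label.segments&.each do |segment|
    puts "#{label.entity&.description}: confidence #{segment.confidence}"
  end
end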

#shot_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoSegment>

Shot annotations. Each shot is represented as a video segment. Corresponds to the JSON property shotAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2535

def shot_annotations
  @shot_annotations
end
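
Assuming each VideoSegment exposes start_time_offset and end_time_offset, the detected shot boundaries could be listed like this:

(results.shot_annotations || []).each_with_index do |shot, i|
  puts "Shot #{i}: #{shot.start_time_offset} - #{shot.end_time_offset}"
end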

#shot_label_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1LabelAnnotation>

Label annotations on shot level. There is exactly one element for each unique label. Corresponds to the JSON property shotLabelAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2541

def shot_label_annotations
  @shot_label_annotations
end

#speech_transcriptions ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1SpeechTranscription>

Speech transcription. Corresponds to the JSON property speechTranscriptions



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2546

def speech_transcriptions
  @speech_transcriptions
end
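
A sketch of printing the best transcript for each result, assuming each SpeechTranscription exposes alternatives with transcript and confidence readers (as in other versions of this API):

(results.speech_transcriptions || []).each do |transcription|
  best = transcription.alternatives&.first
  puts "#{best.transcript} (confidence #{best.confidence})" if best
end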

#text_annotations ⇒ Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1TextAnnotation>

OCR text detection and tracking. Annotations for the list of detected text snippets. Each text snippet has a list of frame information associated with it. Corresponds to the JSON property textAnnotations



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2553

def text_annotations
  @text_annotations
end
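
For example, assuming each TextAnnotation exposes the detected text and its segments:

(results.text_annotations || []).each do |text_annotation|
  puts "#{text_annotation.text} (#{text_annotation.segments&.size} segments)"
end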

Instance Method Details

#update!(**args) ⇒ Object

Updates the properties of this object.



# File 'generated/google/apis/videointelligence_v1p1beta1/classes.rb', line 2560

def update!(**args)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
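
A brief usage sketch; only keys present in args are assigned, so other attributes keep their current values (the path below is hypothetical):

results = Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p2beta1VideoAnnotationResults.new
results.update!(input_uri: 'gs://my-bucket/my-video.mp4')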