Properties

constant static

Feature (number)

Video annotation feature.

Properties

FEATURE_UNSPECIFIED: Unspecified.

LABEL_DETECTION: Label detection. Detect objects, such as dog or flower.

SHOT_CHANGE_DETECTION: Shot change detection.

EXPLICIT_CONTENT_DETECTION: Explicit content detection.

SPEECH_TRANSCRIPTION: Speech transcription.

TEXT_DETECTION: OCR text detection and tracking.

OBJECT_TRACKING: Object detection and tracking.

LOGO_RECOGNITION: Logo detection, tracking, and recognition.

constant static

LabelDetectionMode (number)

Label detection mode.

Properties

LABEL_DETECTION_MODE_UNSPECIFIED: Unspecified.

SHOT_MODE: Detect shot-level labels.

FRAME_MODE: Detect frame-level labels.

SHOT_AND_FRAME_MODE: Detect both shot-level and frame-level labels.

constant static

Likelihood (number)

Bucketized representation of likelihood.

Properties

LIKELIHOOD_UNSPECIFIED: Unspecified likelihood.

VERY_UNLIKELY: Very unlikely.

UNLIKELY: Unlikely.

POSSIBLE: Possible.

LIKELY: Likely.

VERY_LIKELY: Very likely.

constant static

StreamingFeature (number)

Streaming video annotation feature.

Properties

STREAMING_FEATURE_UNSPECIFIED: Unspecified.

STREAMING_LABEL_DETECTION: Label detection. Detect objects, such as dog or flower.

STREAMING_SHOT_CHANGE_DETECTION: Shot change detection.

STREAMING_EXPLICIT_CONTENT_DETECTION: Explicit content detection.

STREAMING_OBJECT_TRACKING: Object detection and tracking.

Abstract types

static

AnnotateVideoProgress

Video annotation progress. Included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Property

annotationProgress (Array of Object): Progress metadata for all videos specified in AnnotateVideoRequest. This object should have the same structure as VideoAnnotationProgress.

See also

google.cloud.videointelligence.v1p3beta1.AnnotateVideoProgress definition in proto format

static

AnnotateVideoRequest

Video annotation request.

Properties

inputUri (string): Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as input_content. If set, input_content should be unset.

inputContent (string): The video data bytes. If unset, the input video(s) should be specified via input_uri. If set, input_uri should be unset.

features (Array of number): Requested video annotation features. The number should be among the values of Feature.

videoContext (Object): Additional video context and/or feature-specific parameters. This object should have the same structure as VideoContext.

outputUri (string): Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.

locationId (string): Optional cloud region where annotation should take place. Supported cloud regions: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region will be determined based on video file location.

See also

google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest definition in proto format
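
Example

A minimal sketch of building and sending this request with the Node.js client. The bucket and file names are placeholders, and the entry point is assumed to be the @google-cloud/video-intelligence package.

const videoIntelligence = require('@google-cloud/video-intelligence');

async function annotate() {
  const client = new videoIntelligence.v1p3beta1.VideoIntelligenceServiceClient();

  const request = {
    // Hypothetical input location; any readable gs:// URI works here.
    inputUri: 'gs://my-bucket/my-video.mp4',
    features: ['LABEL_DETECTION', 'SHOT_CHANGE_DETECTION'],
  };

  // annotateVideo starts a long-running operation; await its completion.
  const [operation] = await client.annotateVideo(request);
  const [response] = await operation.promise();
  console.log(response.annotationResults[0]);
}

annotate().catch(console.error);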

static

AnnotateVideoResponse

Video annotation response. Included in the response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Property

annotationResults (Array of Object): Annotation results for all videos specified in AnnotateVideoRequest. This object should have the same structure as VideoAnnotationResults.

See also

google.cloud.videointelligence.v1p3beta1.AnnotateVideoResponse definition in proto format

static

DetectedAttribute

A generic detected attribute represented by name in string format.

Properties

name (string): The name of the attribute, e.g. glasses, dark_glasses, mouth_open, etc. A full list of supported type names will be provided in the document.

confidence (number): Detected attribute confidence. Range: [0, 1].

value (string): Text value of the detection result. For example, the value for "HairColor" can be "black", "blonde", etc.

See also

google.cloud.videointelligence.v1p3beta1.DetectedAttribute definition in proto format

static

Entity

Detected entity from video analysis.

Properties

entityId (string): Opaque entity ID. Some IDs may be available in the Google Knowledge Graph Search API.

description (string): Textual description, e.g. Fixed-gear bicycle.

languageCode (string): Language code for description in BCP-47 format.

See also

google.cloud.videointelligence.v1p3beta1.Entity definition in proto format

static

ExplicitContentAnnotation

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.

Property

frames (Array of Object): All video frames where explicit content was detected. This object should have the same structure as ExplicitContentFrame.

See also

google.cloud.videointelligence.v1p3beta1.ExplicitContentAnnotation definition in proto format

static

ExplicitContentDetectionConfig

Config for EXPLICIT_CONTENT_DETECTION.

Property

model (string): Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

See also

google.cloud.videointelligence.v1p3beta1.ExplicitContentDetectionConfig definition in proto format

static

ExplicitContentFrame

Video frame level annotation results for explicit content.

Properties

timeOffset (Object): Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration.

pornographyLikelihood (number): Likelihood of pornographic content. The number should be among the values of Likelihood.

See also

google.cloud.videointelligence.v1p3beta1.ExplicitContentFrame definition in proto format
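
Example

A sketch of scanning these frames for likely explicit content. It assumes explicitAnnotation is an ExplicitContentAnnotation from a completed response, and that pornographyLikelihood arrives as a numeric Likelihood value as documented above.

const LIKELY = 4; // index of LIKELY among the Likelihood values

for (const frame of explicitAnnotation.frames) {
  const seconds = Number(frame.timeOffset.seconds || 0) +
      (frame.timeOffset.nanos || 0) / 1e9;
  if (frame.pornographyLikelihood >= LIKELY) {
    console.log(`Possible explicit content at ${seconds}s`);
  }
}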

static

LabelAnnotation

Label annotation.

Properties

entity (Object): Detected entity. This object should have the same structure as Entity.

categoryEntities (Array of Object): Common categories for the detected entity. For example, when the label is Terrier, the category is likely dog. In some cases there might be more than one category; for example, Terrier could also be a pet. This object should have the same structure as Entity.

segments (Array of Object): All video segments where a label was detected. This object should have the same structure as LabelSegment.

frames (Array of Object): All video frames where a label was detected. This object should have the same structure as LabelFrame.

See also

google.cloud.videointelligence.v1p3beta1.LabelAnnotation definition in proto format

static

LabelDetectionConfig

Config for LABEL_DETECTION.

Properties

labelDetectionMode (number): What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE. The number should be among the values of LabelDetectionMode.

stationaryCamera (boolean): Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with SHOT_AND_FRAME_MODE enabled.

model (string): Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

frameConfidenceThreshold (number): The confidence threshold used to filter labels from frame-level detection. If not set, defaults to 0.4. The valid range for this threshold is [0.1, 0.9]; any value outside this range will be clipped. Note: for best results, keep the default threshold. The default threshold will be updated every time a new model is released.

videoConfidenceThreshold (number): The confidence threshold used to filter labels from video-level and shot-level detections. If not set, defaults to 0.3. The valid range for this threshold is [0.1, 0.9]; any value outside this range will be clipped. Note: for best results, keep the default threshold. The default threshold will be updated every time a new model is released.

See also

google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig definition in proto format
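
Example

A sketch of a request whose videoContext carries this config, using the field names documented above; the input URI is a placeholder.

const request = {
  inputUri: 'gs://my-bucket/my-video.mp4', // hypothetical
  features: ['LABEL_DETECTION'],
  videoContext: {
    labelDetectionConfig: {
      labelDetectionMode: 'SHOT_AND_FRAME_MODE',
      stationaryCamera: false,
      frameConfidenceThreshold: 0.5, // values outside [0.1, 0.9] are clipped
      videoConfidenceThreshold: 0.4,
    },
  },
};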

static

LabelFrame

Video frame level annotation results for label detection.

Properties

timeOffset (Object): Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration.

confidence (number): Confidence that the label is accurate. Range: [0, 1].

See also

google.cloud.videointelligence.v1p3beta1.LabelFrame definition in proto format

static

LabelSegment

Video segment level annotation results for label detection.

Properties

segment (Object): Video segment where a label was detected. This object should have the same structure as VideoSegment.

confidence (number): Confidence that the label is accurate. Range: [0, 1].

See also

google.cloud.videointelligence.v1p3beta1.LabelSegment definition in proto format

static

LogoRecognitionAnnotation

Annotation corresponding to one detected, tracked and recognized logo class.

Properties

entity (Object): Entity category information to specify the logo class that all the logo tracks within this LogoRecognitionAnnotation are recognized as. This object should have the same structure as Entity.

tracks (Array of Object): All logo tracks where the recognized logo appears. Each track corresponds to one logo instance appearing in consecutive frames. This object should have the same structure as Track.

segments (Array of Object): All video segments where the recognized logo appears. There might be multiple instances of the same logo class appearing in one VideoSegment. This object should have the same structure as VideoSegment.

See also

google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation definition in proto format

static

NormalizedBoundingBox

Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].

Properties

left (number): Left X coordinate.

top (number): Top Y coordinate.

right (number): Right X coordinate.

bottom (number): Bottom Y coordinate.

See also

google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox definition in proto format
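
Example

Since the coordinates are normalized to [0, 1], drawing a box on a frame requires scaling by the frame's pixel dimensions. A small sketch; the frame size arguments are assumptions.

function toPixels(box, frameWidth, frameHeight) {
  return {
    left: Math.round(box.left * frameWidth),
    top: Math.round(box.top * frameHeight),
    right: Math.round(box.right * frameWidth),
    bottom: Math.round(box.bottom * frameHeight),
  };
}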

static

NormalizedBoundingPoly

Normalized bounding polygon for text (that might not be aligned with the axes). Contains the list of corner points in clockwise order starting from the top-left corner. For example, for a rectangular bounding box, when the text is horizontal it might look like:

0----1
|    |
3----2

When it is rotated 180 degrees clockwise around the top-left corner it becomes:

2----3
|    |
1----0

and the vertex order will still be (0, 1, 2, 3). Note that values can be less than 0 or greater than 1 due to trigonometric calculations for the location of the box.

Property

vertices (Array of Object): Normalized vertices of the bounding polygon. This object should have the same structure as NormalizedVertex.

See also

google.cloud.videointelligence.v1p3beta1.NormalizedBoundingPoly definition in proto format

static

NormalizedVertex

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

Properties

x (number): X coordinate.

y (number): Y coordinate.

See also

google.cloud.videointelligence.v1p3beta1.NormalizedVertex definition in proto format

static

ObjectTrackingAnnotation

Annotations corresponding to one tracked object.

Properties

entity (Object): Entity to specify the object category that this track is labeled as. This object should have the same structure as Entity.

confidence (number): Object category's labeling confidence of this track.

frames (Array of Object): Information corresponding to all frames where this object track appears. Non-streaming batch mode: there may be one or multiple ObjectTrackingFrame messages in frames. Streaming mode: there can only be one ObjectTrackingFrame message in frames. This object should have the same structure as ObjectTrackingFrame.

segment (Object): Non-streaming batch mode ONLY. Each object track corresponds to one video segment where it appears. This object should have the same structure as VideoSegment.

trackId (number): Streaming mode ONLY. In streaming mode, we do not know the end time of a tracked object before it is completed. Hence, there is no VideoSegment info returned. Instead, we provide a uniquely identifiable integer track_id so that customers can correlate the results of the ongoing ObjectTrackingAnnotation of the same track_id over time.

See also

google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation definition in proto format
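
Example

A sketch of walking batch-mode object annotations; results is assumed to be a VideoAnnotationResults object from a completed operation.

for (const annotation of results.objectAnnotations || []) {
  console.log(`${annotation.entity.description} (confidence ${annotation.confidence})`);
  for (const frame of annotation.frames) {
    const t = Number(frame.timeOffset.seconds || 0) +
        (frame.timeOffset.nanos || 0) / 1e9;
    const box = frame.normalizedBoundingBox;
    console.log(`  ${t}s: left=${box.left}, top=${box.top}, ` +
        `right=${box.right}, bottom=${box.bottom}`);
  }
}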

static

ObjectTrackingConfig

Config for OBJECT_TRACKING.

Property

model (string): Model to use for object tracking. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

See also

google.cloud.videointelligence.v1p3beta1.ObjectTrackingConfig definition in proto format

static

ObjectTrackingFrame

Video frame level annotations for object detection and tracking. This field stores per-frame location, time offset, and confidence.

Properties

normalizedBoundingBox (Object): The normalized bounding box location of this object track for the frame. This object should have the same structure as NormalizedBoundingBox.

timeOffset (Object): The timestamp of the frame in microseconds. This object should have the same structure as Duration.

See also

google.cloud.videointelligence.v1p3beta1.ObjectTrackingFrame definition in proto format

static

ShotChangeDetectionConfig

Config for SHOT_CHANGE_DETECTION.

Property

model (string): Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

See also

google.cloud.videointelligence.v1p3beta1.ShotChangeDetectionConfig definition in proto format

static

SpeechContext

Provides "hints" to the speech recognizer to favor specific words and phrases in the results.

Property

phrases (Array of string): Optional. A list of strings containing word and phrase "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits.

See also

google.cloud.videointelligence.v1p3beta1.SpeechContext definition in proto format

static

SpeechRecognitionAlternative

Alternative hypotheses (a.k.a. n-best list).

Properties

transcript (string): Transcript text representing the words that the user spoke.

confidence (number): The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is typically provided only for the top hypothesis, and only for is_final=true results. Clients should not rely on the confidence field as it is not guaranteed to be accurate or consistent. The default of 0.0 is a sentinel value indicating confidence was not set.

words (Array of Object): A list of word-specific information for each recognized word. This object should have the same structure as WordInfo.

See also

google.cloud.videointelligence.v1p3beta1.SpeechRecognitionAlternative definition in proto format

static

SpeechTranscription

A speech recognition result corresponding to a portion of the audio.

Properties

alternatives (Array of Object): May contain one or more recognition hypotheses (up to the maximum specified in max_alternatives). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer. This object should have the same structure as SpeechRecognitionAlternative.

languageCode (string): Output only. The BCP-47 language tag of the language in this result. This language code was detected as the most likely language spoken in the audio.

See also

google.cloud.videointelligence.v1p3beta1.SpeechTranscription definition in proto format

static

SpeechTranscriptionConfig

Config for SPEECH_TRANSCRIPTION.

Properties

languageCode (string): Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.

maxAlternatives (number): Optional. Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechTranscription. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.

filterProfanity (boolean): Optional. If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.

speechContexts (Array of Object): Optional. A means to provide context to assist the speech recognition. This object should have the same structure as SpeechContext.

enableAutomaticPunctuation (boolean): Optional. If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages; setting this for requests in other languages has no effect. The default 'false' value does not add punctuation to result hypotheses. NOTE: "This is currently offered as an experimental service, complimentary to all users. In the future this may be exclusively available as a premium feature."

audioTracks (Array of number): Optional. For file formats, such as MXF or MKV, supporting multiple audio tracks, specify up to two tracks. Default: track 0.

enableSpeakerDiarization (boolean): Optional. If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_tag provided in the WordInfo. Note: when this is true, we send all the words from the beginning of the audio for the top alternative in every consecutive response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time.

diarizationSpeakerCount (number): Optional. If set, specifies the estimated number of speakers in the conversation. If not set, defaults to '2'. Ignored unless enable_speaker_diarization is set to true.

enableWordConfidence (boolean): Optional. If true, the top result includes a list of words and the confidence for those words. If false, no word-level confidence information is returned. The default is false.

See also

google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig definition in proto format
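
Example

A sketch of a SPEECH_TRANSCRIPTION request exercising the fields above; the URI and phrase hints are placeholders.

const request = {
  inputUri: 'gs://my-bucket/my-video.mp4', // hypothetical
  features: ['SPEECH_TRANSCRIPTION'],
  videoContext: {
    speechTranscriptionConfig: {
      languageCode: 'en-US', // required
      maxAlternatives: 2,
      filterProfanity: true,
      enableAutomaticPunctuation: true,
      speechContexts: [{phrases: ['fixed-gear bicycle']}],
      enableSpeakerDiarization: true,
      diarizationSpeakerCount: 2,
      enableWordConfidence: true,
    },
  },
};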

static

StreamingAnnotateVideoRequest

The top-level message sent by the client for the StreamingAnnotateVideo method. Multiple StreamingAnnotateVideoRequest messages are sent. The first message must only contain a StreamingVideoConfig message. All subsequent messages must only contain input_content data.

Properties

videoConfig (Object): Provides information to the annotator, specifying how to process the request. The first StreamingAnnotateVideoRequest message must only contain a video_config message. This object should have the same structure as StreamingVideoConfig.

inputContent (string): The video data to be annotated. Chunks of video data are sequentially sent in StreamingAnnotateVideoRequest messages. Except for the initial StreamingAnnotateVideoRequest message, which contains only video_config, all subsequent StreamingAnnotateVideoRequest messages must only contain the input_content field. Note: as with all bytes fields, protocol buffers use a pure binary representation (not base64).

See also

google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoRequest definition in proto format
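
Example

A sketch of the message sequence described above: one videoConfig-only write, then input_content chunks. The file path and chunk size are assumptions; the streaming client class follows the Node.js library's naming convention.

const fs = require('fs');
const videoIntelligence = require('@google-cloud/video-intelligence');

const client =
    new videoIntelligence.v1p3beta1.StreamingVideoIntelligenceServiceClient();

const stream = client.streamingAnnotateVideo()
    .on('data', response => console.log(response.annotationResults))
    .on('error', console.error);

// First message: configuration only.
stream.write({
  videoConfig: {
    feature: 'STREAMING_LABEL_DETECTION',
    labelDetectionConfig: {stationaryCamera: false},
  },
});

// All subsequent messages: raw video bytes only.
fs.createReadStream('./my-video.mp4', {highWaterMark: 1024 * 1024})
    .on('data', chunk => stream.write({inputContent: chunk}))
    .on('end', () => stream.end());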

static

StreamingAnnotateVideoResponse

StreamingAnnotateVideoResponse is the only message returned to the client by StreamingAnnotateVideo. A series of zero or more StreamingAnnotateVideoResponse messages are streamed back to the client.

Properties

error (Object): If set, returns a google.rpc.Status message that specifies the error for the operation. This object should have the same structure as Status.

annotationResults (Object): Streaming annotation results. This object should have the same structure as StreamingVideoAnnotationResults.

annotationResultsUri (string): GCS URI that stores annotation results of one streaming session. It is a directory that can hold multiple files in JSON format. Example URI format: gs://bucket_id/object_id/cloud_project_name-session_id

See also

google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoResponse definition in proto format

static

StreamingExplicitContentDetectionConfig

Config for EXPLICIT_CONTENT_DETECTION in streaming mode. No customized config support.

See also

google.cloud.videointelligence.v1p3beta1.StreamingExplicitContentDetectionConfig definition in proto format

static

StreamingLabelDetectionConfig

Config for LABEL_DETECTION in streaming mode.

Property

stationaryCamera (boolean): Whether the video has been captured from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. Default: false.

See also

google.cloud.videointelligence.v1p3beta1.StreamingLabelDetectionConfig definition in proto format

static

StreamingObjectTrackingConfig

Config for STREAMING_OBJECT_TRACKING. No customized config support.

See also

google.cloud.videointelligence.v1p3beta1.StreamingObjectTrackingConfig definition in proto format

static

StreamingShotChangeDetectionConfig

Config for SHOT_CHANGE_DETECTION in streaming mode. No customized config support.

See also

google.cloud.videointelligence.v1p3beta1.StreamingShotChangeDetectionConfig definition in proto format

static

StreamingStorageConfig

Config for streaming storage option.

Properties

enableStorageAnnotationResult (boolean): Enable streaming storage. Default: false.

annotationResultStorageDirectory (string): GCS URI to store all annotation results for one client. The client should specify this field as the top-level storage directory. Annotation results of different sessions will be put into different sub-directories denoted by project_name and session_id. All sub-directories will be auto-generated by the program and made accessible to the client in the response proto. URIs must be specified in the following format: gs://bucket-id/object-id, where bucket-id should be a valid GCS bucket created by the client with bucket permissions configured properly, and object-id can be an arbitrary string that makes sense to the client. Other URI formats will return an error and cause GCS write failure.

See also

google.cloud.videointelligence.v1p3beta1.StreamingStorageConfig definition in proto format

static

StreamingVideoAnnotationResults

Streaming annotation results corresponding to a portion of the video that is currently being processed.

Properties

shotAnnotations (Array of Object): Shot annotation results. Each shot is represented as a video segment. This object should have the same structure as VideoSegment.

labelAnnotations (Array of Object): Label annotation results. This object should have the same structure as LabelAnnotation.

explicitAnnotation (Object): Explicit content annotation results. This object should have the same structure as ExplicitContentAnnotation.

objectAnnotations (Array of Object): Object tracking results. This object should have the same structure as ObjectTrackingAnnotation.

See also

google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults definition in proto format

static

StreamingVideoConfig

Provides information to the annotator that specifies how to process the request.

Properties

feature (number): Requested annotation feature. The number should be among the values of StreamingFeature.

shotChangeDetectionConfig (Object): Config for STREAMING_SHOT_CHANGE_DETECTION. This object should have the same structure as StreamingShotChangeDetectionConfig.

labelDetectionConfig (Object): Config for STREAMING_LABEL_DETECTION. This object should have the same structure as StreamingLabelDetectionConfig.

explicitContentDetectionConfig (Object): Config for STREAMING_EXPLICIT_CONTENT_DETECTION. This object should have the same structure as StreamingExplicitContentDetectionConfig.

objectTrackingConfig (Object): Config for STREAMING_OBJECT_TRACKING. This object should have the same structure as StreamingObjectTrackingConfig.

storageConfig (Object): Streaming storage option. By default, storage is disabled. This object should have the same structure as StreamingStorageConfig.

See also

google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig definition in proto format

static

TextAnnotation

Annotations related to one detected OCR text snippet. This will contain the corresponding text, confidence value, and frame-level information for each detection.

Properties

text (string): The detected text.

segments (Array of Object): All video segments where OCR detected text appears. This object should have the same structure as TextSegment.

See also

google.cloud.videointelligence.v1p3beta1.TextAnnotation definition in proto format

static

TextDetectionConfig

Config for TEXT_DETECTION.

Properties

languageHints (Array of string): Language hints can be specified if the language to be detected is known a priori; they can increase the accuracy of the detection. Language hints must be language codes in BCP-47 format. Automatic language detection is performed if no hint is provided.

model (string): Model to use for text detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".

See also

google.cloud.videointelligence.v1p3beta1.TextDetectionConfig definition in proto format

static

TextFrame

Video frame level annotation results for text annotation (OCR). Contains information regarding timestamp and bounding box locations for the frames containing detected OCR text snippets.

Properties

rotatedBoundingBox (Object): Bounding polygon of the detected text for this frame. This object should have the same structure as NormalizedBoundingPoly.

timeOffset (Object): Timestamp of this frame. This object should have the same structure as Duration.

See also

google.cloud.videointelligence.v1p3beta1.TextFrame definition in proto format

static

TextSegment

Video segment level annotation results for text detection.

Properties

segment (Object): Video segment where a text snippet was detected. This object should have the same structure as VideoSegment.

confidence (number): Confidence for the track of detected text. It is calculated as the highest confidence over all frames where OCR detected text appears.

frames (Array of Object): Information related to the frames where OCR detected text appears. This object should have the same structure as TextFrame.

See also

google.cloud.videointelligence.v1p3beta1.TextSegment definition in proto format

static

TimestampedObject

For tracking-related features, such as LOGO_RECOGNITION, FACE_DETECTION, CELEBRITY_RECOGNITION, and PERSON_DETECTION: an object at time_offset, with attributes, located by normalized_bounding_box.

Properties

normalizedBoundingBox (Object): Normalized bounding box in a frame, where the object is located. This object should have the same structure as NormalizedBoundingBox.

timeOffset (Object): Time-offset, relative to the beginning of the video, corresponding to the video frame for this object. This object should have the same structure as Duration.

attributes (Array of Object): Optional. The attributes of the object in the bounding box. This object should have the same structure as DetectedAttribute.

See also

google.cloud.videointelligence.v1p3beta1.TimestampedObject definition in proto format

static

Track

A track of an object instance.

Properties

segment (Object): Video segment of a track. This object should have the same structure as VideoSegment.

timestampedObjects (Array of Object): The object with timestamp and attributes per frame in the track. This object should have the same structure as TimestampedObject.

attributes (Array of Object): Optional. Attributes at the track level. This object should have the same structure as DetectedAttribute.

confidence (number): Optional. The confidence score of the tracked object.

See also

google.cloud.videointelligence.v1p3beta1.Track definition in proto format

static

VideoAnnotationProgress

Annotation progress for a single video.

Properties

inputUri (string): Video file location in Google Cloud Storage.

progressPercent (number): Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.

startTime (Object): Time when the request was received. This object should have the same structure as Timestamp.

updateTime (Object): Time of the most recent update. This object should have the same structure as Timestamp.

See also

google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress definition in proto format

static

VideoAnnotationResults

Annotation results for a single video.

Properties

inputUri (string): Video file location in Google Cloud Storage.

segmentLabelAnnotations (Array of Object): Label annotations on video level or user-specified segment level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation.

shotLabelAnnotations (Array of Object): Label annotations on shot level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation.

frameLabelAnnotations (Array of Object): Label annotations on frame level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation.

shotAnnotations (Array of Object): Shot annotations. Each shot is represented as a video segment. This object should have the same structure as VideoSegment.

explicitAnnotation (Object): Explicit content annotation. This object should have the same structure as ExplicitContentAnnotation.

speechTranscriptions (Array of Object): Speech transcription. This object should have the same structure as SpeechTranscription.

textAnnotations (Array of Object): OCR text detection and tracking. Annotations for the list of detected text snippets. Each will have a list of frame information associated with it. This object should have the same structure as TextAnnotation.

objectAnnotations (Array of Object): Annotations for the list of objects detected and tracked in the video. This object should have the same structure as ObjectTrackingAnnotation.

logoRecognitionAnnotations (Array of Object): Annotations for the list of logos detected, tracked and recognized in the video. This object should have the same structure as LogoRecognitionAnnotation.

error (Object): If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail. This object should have the same structure as Status.

See also

google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults definition in proto format
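
Example

A sketch of reading per-video results out of an AnnotateVideoResponse; response is assumed to be the resolved value of operation.promise().

for (const results of response.annotationResults) {
  if (results.error) {
    console.error(`${results.inputUri} failed:`, results.error.message);
    continue;
  }
  for (const label of results.segmentLabelAnnotations || []) {
    console.log(`Label: ${label.entity.description}`);
    for (const segment of label.segments) {
      console.log(`  confidence ${segment.confidence}`);
    }
  }
}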

static

VideoContext

Video context and/or feature-specific parameters.

Properties

segments (Array of Object): Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment. This object should have the same structure as VideoSegment.

labelDetectionConfig (Object): Config for LABEL_DETECTION. This object should have the same structure as LabelDetectionConfig.

shotChangeDetectionConfig (Object): Config for SHOT_CHANGE_DETECTION. This object should have the same structure as ShotChangeDetectionConfig.

explicitContentDetectionConfig (Object): Config for EXPLICIT_CONTENT_DETECTION. This object should have the same structure as ExplicitContentDetectionConfig.

speechTranscriptionConfig (Object): Config for SPEECH_TRANSCRIPTION. This object should have the same structure as SpeechTranscriptionConfig.

textDetectionConfig (Object): Config for TEXT_DETECTION. This object should have the same structure as TextDetectionConfig.

objectTrackingConfig (Object): Config for OBJECT_TRACKING. This object should have the same structure as ObjectTrackingConfig.

See also

google.cloud.videointelligence.v1p3beta1.VideoContext definition in proto format

static

VideoSegment

Video segment.

Properties

startTimeOffset (Object): Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive). This object should have the same structure as Duration.

endTimeOffset (Object): Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive). This object should have the same structure as Duration.

See also

google.cloud.videointelligence.v1p3beta1.VideoSegment definition in proto format
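
Example

The start and end offsets are Duration objects carrying seconds and nanos fields; a small helper sketch to turn a segment into plain numbers of seconds.

function segmentToSeconds(segment) {
  const toSec = d => Number(d.seconds || 0) + (d.nanos || 0) / 1e9;
  return {
    start: toSec(segment.startTimeOffset),
    end: toSec(segment.endTimeOffset),
  };
}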

static

WordInfo

Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as enable_word_time_offsets.

Properties

startTime (Object): Time offset relative to the beginning of the audio, corresponding to the start of the spoken word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary. This object should have the same structure as Duration.

endTime (Object): Time offset relative to the beginning of the audio, corresponding to the end of the spoken word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary. This object should have the same structure as Duration.

word (string): The word corresponding to this set of information.

confidence (number): Output only. The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative. This field is not guaranteed to be accurate, and users should not rely on it to be always provided. The default of 0.0 is a sentinel value indicating confidence was not set.

speakerTag (number): Output only. A distinct integer value is assigned for every speaker within the audio. This field specifies which one of those speakers was detected to have spoken this word. Values range from 1 up to diarization_speaker_count, and are only set if speaker diarization is enabled.

See also

google.cloud.videointelligence.v1p3beta1.WordInfo definition in proto format
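
Example

A sketch of printing word timings and speaker tags from the top alternative of a SpeechTranscription; it assumes the request enabled word time offsets and speaker diarization.

const alternative = transcription.alternatives[0];
for (const info of alternative.words || []) {
  const start = Number(info.startTime.seconds || 0) +
      (info.startTime.nanos || 0) / 1e9;
  const end = Number(info.endTime.seconds || 0) +
      (info.endTime.nanos || 0) / 1e9;
  console.log(`${start}s-${end}s speaker ${info.speakerTag}: ${info.word}`);
}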