Types for Google Cloud Videointelligence v1beta2 API

class google.cloud.videointelligence_v1beta2.types.AnnotateVideoProgress(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video annotation progress. Included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

annotation_progress

Progress metadata for all videos specified in AnnotateVideoRequest.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoAnnotationProgress]

class google.cloud.videointelligence_v1beta2.types.AnnotateVideoRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video annotation request.

input_uri

Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: ‘*’ to match 0 or more characters; ‘?’ to match 1 character. If unset, the input video should be embedded in the request as input_content. If set, input_content should be unset.

Type

str

input_content

The video data bytes. If unset, the input video(s) should be specified via input_uri. If set, input_uri should be unset.

Type

bytes

features

Required. Requested video annotation features.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.Feature]

video_context

Additional video context and/or feature-specific parameters.

Type

google.cloud.videointelligence_v1beta2.types.VideoContext

output_uri

Optional. Location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.

Type

str

location_id

Optional. Cloud region where annotation should take place. Supported cloud regions: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region will be determined based on video file location.

Type

str
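
For illustration, a minimal sketch of building such a request with the v1beta2 types; the bucket URI, feature choice, and region below are placeholders, not values taken from this reference:

    from google.cloud import videointelligence_v1beta2

    # Placeholder URI, feature, and region, for illustration only.
    request = videointelligence_v1beta2.types.AnnotateVideoRequest(
        input_uri="gs://bucket-id/object-id",
        features=[videointelligence_v1beta2.types.Feature.LABEL_DETECTION],
        location_id="us-east1",
    )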

class google.cloud.videointelligence_v1beta2.types.AnnotateVideoResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video annotation response. Included in the response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

annotation_results

Annotation results for all videos specified in AnnotateVideoRequest.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoAnnotationResults]
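
A sketch of how this response is typically obtained, assuming the standard long-running-operation flow of the v1beta2 client; the timeout value is illustrative and the request is the one sketched above:

    from google.cloud import videointelligence_v1beta2

    client = videointelligence_v1beta2.VideoIntelligenceServiceClient()
    operation = client.annotate_video(request=request)  # request built as sketched earlier
    response = operation.result(timeout=600)            # blocks until the operation finishes

    for result in response.annotation_results:          # one entry per input video
        print(result.input_uri)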

class google.cloud.videointelligence_v1beta2.types.Entity(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Detected entity from video analysis.

entity_id

Opaque entity ID. Some IDs may be available in the Google Knowledge Graph Search API.

Type

str

description

Textual description, e.g. Fixed-gear bicycle.

Type

str

language_code

Language code for description in BCP-47 format.

Type

str

class google.cloud.videointelligence_v1beta2.types.ExplicitContentAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.

frames

All video frames where explicit content was detected.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.ExplicitContentFrame]

class google.cloud.videointelligence_v1beta2.types.ExplicitContentDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Config for EXPLICIT_CONTENT_DETECTION.

model

Model to use for explicit content detection. Supported values: “builtin/stable” (the default if unset) and “builtin/latest”.

Type

str

class google.cloud.videointelligence_v1beta2.types.ExplicitContentFrame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video frame level annotation results for explicit content.

time_offset

Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.

Type

google.protobuf.duration_pb2.Duration

pornography_likelihood

Likelihood of the pornography content.

Type

google.cloud.videointelligence_v1beta2.types.Likelihood
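
As a sketch, frames from an ExplicitContentAnnotation can be filtered on this field; the LIKELY threshold and the name 'annotation' are assumptions for illustration:

    from google.cloud import videointelligence_v1beta2

    Likelihood = videointelligence_v1beta2.types.Likelihood

    # 'annotation' is assumed to be an ExplicitContentAnnotation from a result.
    flagged = [
        frame
        for frame in annotation.frames
        if frame.pornography_likelihood >= Likelihood.LIKELY
    ]
    for frame in flagged:
        print(frame.time_offset, frame.pornography_likelihood)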

class google.cloud.videointelligence_v1beta2.types.FaceAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Face annotation.

thumbnail

Thumbnail of a representative face view (in JPEG format).

Type

bytes

segments

All video segments where a face was detected.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.FaceSegment]

frames

All video frames where a face was detected.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.FaceFrame]
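
A brief sketch of consuming a FaceAnnotation taken from a result; the name 'face' and the output filename are placeholders:

    # 'face' is assumed to be one element of
    # VideoAnnotationResults.face_annotations.
    with open("face_thumbnail.jpg", "wb") as fh:  # placeholder output path
        fh.write(face.thumbnail)                  # JPEG bytes

    for face_segment in face.segments:
        segment = face_segment.segment
        print(segment.start_time_offset, segment.end_time_offset)
    for frame in face.frames:
        print(frame.time_offset, len(frame.normalized_bounding_boxes))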

class google.cloud.videointelligence_v1beta2.types.FaceDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Config for FACE_DETECTION.

model

Model to use for face detection. Supported values: “builtin/stable” (the default if unset) and “builtin/latest”.

Type

str

include_bounding_boxes

Whether bounding boxes should be included in the face annotation output.

Type

bool

class google.cloud.videointelligence_v1beta2.types.FaceFrame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video frame level annotation results for face detection.

normalized_bounding_boxes

Normalized bounding boxes in a frame. There can be more than one box if the same face is detected in multiple locations within the current frame.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.NormalizedBoundingBox]

time_offset

Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.

Type

google.protobuf.duration_pb2.Duration

class google.cloud.videointelligence_v1beta2.types.FaceSegment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video segment level annotation results for face detection.

segment

Video segment where a face was detected.

Type

google.cloud.videointelligence_v1beta2.types.VideoSegment

class google.cloud.videointelligence_v1beta2.types.Feature(value)[source]

Bases: proto.enums.Enum

Video annotation feature.

Values:
FEATURE_UNSPECIFIED (0):

Unspecified.

LABEL_DETECTION (1):

Label detection. Detect objects, such as dog or flower.

SHOT_CHANGE_DETECTION (2):

Shot change detection.

EXPLICIT_CONTENT_DETECTION (3):

Explicit content detection.

FACE_DETECTION (4):

Human face detection and tracking.
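
For illustration, several features can be requested at once, assuming the request type shown earlier; the URI is a placeholder:

    from google.cloud import videointelligence_v1beta2

    Feature = videointelligence_v1beta2.types.Feature

    request = videointelligence_v1beta2.types.AnnotateVideoRequest(
        input_uri="gs://bucket-id/object-id",  # placeholder URI
        features=[Feature.LABEL_DETECTION, Feature.SHOT_CHANGE_DETECTION],
    )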

class google.cloud.videointelligence_v1beta2.types.LabelAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Label annotation.

entity

Detected entity.

Type

google.cloud.videointelligence_v1beta2.types.Entity

category_entities

Common categories for the detected entity. For example, when the label is Terrier, the category is likely dog. In some cases there may be more than one category; e.g., a Terrier could also be a pet.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.Entity]

segments

All video segments where a label was detected.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelSegment]

frames

All video frames where a label was detected.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelFrame]
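
A sketch of reading these fields, assuming 'results' is a VideoAnnotationResults message taken from a response:

    # 'results' is assumed to be a VideoAnnotationResults message.
    for label in results.segment_label_annotations:
        print("label:", label.entity.description)
        for category in label.category_entities:
            print("  category:", category.description)
        for segment in label.segments:
            print("  segment confidence:", segment.confidence)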

class google.cloud.videointelligence_v1beta2.types.LabelDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Config for LABEL_DETECTION.

label_detection_mode

What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE.

Type

google.cloud.videointelligence_v1beta2.types.LabelDetectionMode

stationary_camera

Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, this might improve detection accuracy for moving objects. Should be used with SHOT_AND_FRAME_MODE enabled.

Type

bool

model

Model to use for label detection. Supported values: “builtin/stable” (the default if unset) and “builtin/latest”.

Type

str
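
A minimal sketch of such a config; the field values below are illustrative choices, not defaults from this reference:

    from google.cloud import videointelligence_v1beta2

    types = videointelligence_v1beta2.types

    config = types.LabelDetectionConfig(
        label_detection_mode=types.LabelDetectionMode.SHOT_AND_FRAME_MODE,
        stationary_camera=True,   # assumes footage from a fixed camera
        model="builtin/stable",
    )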

class google.cloud.videointelligence_v1beta2.types.LabelDetectionMode(value)[source]

Bases: proto.enums.Enum

Label detection mode.

Values:
LABEL_DETECTION_MODE_UNSPECIFIED (0):

Unspecified.

SHOT_MODE (1):

Detect shot-level labels.

FRAME_MODE (2):

Detect frame-level labels.

SHOT_AND_FRAME_MODE (3):

Detect both shot-level and frame-level labels.

class google.cloud.videointelligence_v1beta2.types.LabelFrame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video frame level annotation results for label detection.

time_offset

Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.

Type

google.protobuf.duration_pb2.Duration

confidence

Confidence that the label is accurate. Range: [0, 1].

Type

float

class google.cloud.videointelligence_v1beta2.types.LabelSegment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video segment level annotation results for label detection.

segment

Video segment where a label was detected.

Type

google.cloud.videointelligence_v1beta2.types.VideoSegment

confidence

Confidence that the label is accurate. Range: [0, 1].

Type

float

class google.cloud.videointelligence_v1beta2.types.Likelihood(value)[source]

Bases: proto.enums.Enum

Bucketized representation of likelihood.

Values:
LIKELIHOOD_UNSPECIFIED (0):

Unspecified likelihood.

VERY_UNLIKELY (1):

Very unlikely.

UNLIKELY (2):

Unlikely.

POSSIBLE (3):

Possible.

LIKELY (4):

Likely.

VERY_LIKELY (5):

Very likely.

class google.cloud.videointelligence_v1beta2.types.NormalizedBoundingBox(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].

left

Left X coordinate.

Type

float

top

Top Y coordinate.

Type

float

right

Right X coordinate.

Type

float

bottom

Bottom Y coordinate.

Type

float
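
Since the coordinates are normalized to [0, 1], converting a box to pixel coordinates only needs the frame size; a sketch in which the 1280x720 resolution and the name 'box' are assumptions:

    # Assumed frame size, for illustration only.
    width, height = 1280, 720

    # 'box' is assumed to be a NormalizedBoundingBox.
    left_px = int(box.left * width)
    top_px = int(box.top * height)
    right_px = int(box.right * width)
    bottom_px = int(box.bottom * height)
    print((left_px, top_px), (right_px, bottom_px))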

class google.cloud.videointelligence_v1beta2.types.ShotChangeDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Config for SHOT_CHANGE_DETECTION.

model

Model to use for shot change detection. Supported values: “builtin/stable” (the default if unset) and “builtin/latest”.

Type

str

class google.cloud.videointelligence_v1beta2.types.VideoAnnotationProgress(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Annotation progress for a single video.

input_uri

Video file location in Google Cloud Storage.

Type

str

progress_percent

Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.

Type

int

start_time

Time when the request was received.

Type

google.protobuf.timestamp_pb2.Timestamp

update_time

Time of the most recent update.

Type

google.protobuf.timestamp_pb2.Timestamp
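
A sketch of reading this progress from the long-running operation's metadata; polling details are simplified and 'operation' is assumed to be the object returned by annotate_video():

    # 'operation' is assumed to be the object returned by annotate_video().
    metadata = operation.metadata  # an AnnotateVideoProgress message
    for progress in metadata.annotation_progress:
        print(progress.input_uri, f"{progress.progress_percent}% done")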

class google.cloud.videointelligence_v1beta2.types.VideoAnnotationResults(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Annotation results for a single video.

input_uri

Video file location in Google Cloud Storage.

Type

str

segment_label_annotations

Label annotations at the video level or user-specified segment level. There is exactly one element for each unique label.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelAnnotation]

shot_label_annotations

Label annotations at the shot level. There is exactly one element for each unique label.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelAnnotation]

frame_label_annotations

Label annotations at the frame level. There is exactly one element for each unique label.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelAnnotation]

face_annotations

Face annotations. There is exactly one element for each unique face.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.FaceAnnotation]

shot_annotations

Shot annotations. Each shot is represented as a video segment.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoSegment]

explicit_annotation

Explicit content annotation.

Type

google.cloud.videointelligence_v1beta2.types.ExplicitContentAnnotation

error

If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail.

Type

google.rpc.status_pb2.Status
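
A sketch of checking per-video errors, assuming 'response' is an AnnotateVideoResponse obtained as shown earlier:

    # 'response' is assumed to be an AnnotateVideoResponse.
    for result in response.annotation_results:
        if result.error.code != 0:  # a non-zero google.rpc code marks a failed video
            print(result.input_uri, "failed:", result.error.message)
            continue
        print(result.input_uri, "shots detected:", len(result.shot_annotations))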

class google.cloud.videointelligence_v1beta2.types.VideoContext(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video context and/or feature-specific parameters.

segments

Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.

Type

MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoSegment]

label_detection_config

Config for LABEL_DETECTION.

Type

google.cloud.videointelligence_v1beta2.types.LabelDetectionConfig

shot_change_detection_config

Config for SHOT_CHANGE_DETECTION.

Type

google.cloud.videointelligence_v1beta2.types.ShotChangeDetectionConfig

explicit_content_detection_config

Config for EXPLICIT_CONTENT_DETECTION.

Type

google.cloud.videointelligence_v1beta2.types.ExplicitContentDetectionConfig

face_detection_config

Config for FACE_DETECTION.

Type

google.cloud.videointelligence_v1beta2.types.FaceDetectionConfig
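
For illustration, a sketch of assembling a VideoContext that combines one segment with feature-specific configs; the segment boundaries and model value are placeholders:

    from google.cloud import videointelligence_v1beta2
    from google.protobuf import duration_pb2

    types = videointelligence_v1beta2.types

    context = types.VideoContext(
        segments=[
            types.VideoSegment(
                start_time_offset=duration_pb2.Duration(seconds=0),
                end_time_offset=duration_pb2.Duration(seconds=30),
            )
        ],
        label_detection_config=types.LabelDetectionConfig(
            label_detection_mode=types.LabelDetectionMode.SHOT_AND_FRAME_MODE,
        ),
        shot_change_detection_config=types.ShotChangeDetectionConfig(
            model="builtin/stable",
        ),
    )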

class google.cloud.videointelligence_v1beta2.types.VideoSegment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Video segment.

start_time_offset

Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).

Type

google.protobuf.duration_pb2.Duration

end_time_offset

Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).

Type

google.protobuf.duration_pb2.Duration