Types for Google Cloud Videointelligence v1beta2 API¶
- class google.cloud.videointelligence_v1beta2.types.AnnotateVideoProgress(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video annotation progress. Included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
- annotation_progress¶
Progress metadata for all videos specified in AnnotateVideoRequest.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoAnnotationProgress]
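A minimal sketch of reading this progress message from a running operation, assuming default application credentials; the bucket and object names are placeholders:

```python
from google.cloud import videointelligence_v1beta2 as vi

client = vi.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "input_uri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
        "features": [vi.types.Feature.LABEL_DETECTION],
    }
)

# operation.metadata is an AnnotateVideoProgress; annotation_progress holds
# one entry per input video (it may be empty until the server reports back).
if operation.metadata is not None:
    for progress in operation.metadata.annotation_progress:
        print(progress.input_uri, progress.progress_percent)
```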
- class google.cloud.videointelligence_v1beta2.types.AnnotateVideoRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video annotation request.
- input_uri¶
Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: ‘*’ to match 0 or more characters; ‘?’ to match 1 character. If unset, the input video should be embedded in the request as input_content. If set, input_content should be unset.
- Type
str
- input_content¶
The video data bytes. If unset, the input video(s) should be specified via input_uri. If set, input_uri should be unset.
- Type
bytes
- features¶
Required. Requested video annotation features.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.Feature]
- video_context¶
Additional video context and/or feature-specific parameters.
- output_uri¶
Optional. Location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.
- Type
str
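A minimal sketch of constructing this request message directly; the URIs are placeholders, and keyword construction follows standard proto-plus behavior:

```python
from google.cloud import videointelligence_v1beta2 as vi

request = vi.types.AnnotateVideoRequest(
    input_uri="gs://YOUR_BUCKET/YOUR_VIDEO.mp4",     # placeholder
    features=[
        vi.types.Feature.LABEL_DETECTION,
        vi.types.Feature.SHOT_CHANGE_DETECTION,
    ],
    output_uri="gs://YOUR_BUCKET/annotations.json",  # optional; placeholder
)

client = vi.VideoIntelligenceServiceClient()
operation = client.annotate_video(request=request)
```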
- class google.cloud.videointelligence_v1beta2.types.AnnotateVideoResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video annotation response. Included in the response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
- annotation_results¶
Annotation results for all videos specified in AnnotateVideoRequest.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoAnnotationResults]
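A minimal sketch of obtaining this response, assuming `operation` is the long-running operation returned by annotate_video:

```python
# result() blocks until the operation completes and returns the
# AnnotateVideoResponse; the timeout (in seconds) is an illustrative choice.
response = operation.result(timeout=600)

for results in response.annotation_results:  # one entry per input video
    print(results.input_uri)
```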
- class google.cloud.videointelligence_v1beta2.types.Entity(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Detected entity from video analysis.
- entity_id¶
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
- Type
str
- class google.cloud.videointelligence_v1beta2.types.ExplicitContentAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
- frames¶
All video frames where explicit content was detected.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.ExplicitContentFrame]
- class google.cloud.videointelligence_v1beta2.types.ExplicitContentDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Config for EXPLICIT_CONTENT_DETECTION.
- class google.cloud.videointelligence_v1beta2.types.ExplicitContentFrame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video frame level annotation results for explicit content.
- time_offset¶
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
- pornography_likelihood¶
Likelihood of the pornography content.
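A sketch of walking these frame-level results, assuming `results` is a VideoAnnotationResults from a request that included EXPLICIT_CONTENT_DETECTION; proto-plus surfaces Duration fields as datetime.timedelta:

```python
# `results` is assumed to be a VideoAnnotationResults message
# (see AnnotateVideoResponse above).
for frame in results.explicit_annotation.frames:
    seconds = frame.time_offset.total_seconds()  # time_offset reads as timedelta
    print(f"{seconds:.1f}s -> {frame.pornography_likelihood.name}")
```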
- class google.cloud.videointelligence_v1beta2.types.FaceAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Face annotation.
- segments¶
All video segments where a face was detected.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.FaceSegment]
- frames¶
All video frames where a face was detected.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.FaceFrame]
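A sketch of reading face results, assuming `results` came from a request that included FACE_DETECTION:

```python
for face in results.face_annotations:
    # Segment-level: spans of the video where this face appears.
    for face_segment in face.segments:
        seg = face_segment.segment
        print("face visible:", seg.start_time_offset, "-", seg.end_time_offset)
    # Frame-level: bounding boxes at individual timestamps.
    for frame in face.frames:
        print(frame.time_offset, len(frame.normalized_bounding_boxes), "box(es)")
```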
- class google.cloud.videointelligence_v1beta2.types.FaceDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Config for FACE_DETECTION.
- model¶
Model to use for face detection. Supported values: “builtin/stable” (the default if unset) and “builtin/latest”.
- Type
str
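A minimal sketch of selecting the newer model; "builtin/latest" is one of the two documented values:

```python
from google.cloud import videointelligence_v1beta2 as vi

face_config = vi.types.FaceDetectionConfig(model="builtin/latest")
```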
- class google.cloud.videointelligence_v1beta2.types.FaceFrame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video frame level annotation results for face detection.
- normalized_bounding_boxes¶
Normalized bounding boxes in a frame. There can be more than one box if the same face is detected in multiple locations within the current frame.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.NormalizedBoundingBox]
- time_offset¶
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
- class google.cloud.videointelligence_v1beta2.types.FaceSegment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video segment level annotation results for face detection.
- segment¶
Video segment where a face was detected.
- class google.cloud.videointelligence_v1beta2.types.Feature(value)[source]¶
Bases:
proto.enums.Enum
Video annotation feature.
- Values:
- FEATURE_UNSPECIFIED (0):
Unspecified.
- LABEL_DETECTION (1):
Label detection. Detect objects, such as dog or flower.
- SHOT_CHANGE_DETECTION (2):
Shot change detection.
- EXPLICIT_CONTENT_DETECTION (3):
Explicit content detection.
- FACE_DETECTION (4):
Human face detection and tracking.
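Features are requested as a list, so a single call can combine several; a brief sketch:

```python
from google.cloud import videointelligence_v1beta2 as vi

features = [
    vi.types.Feature.LABEL_DETECTION,
    vi.types.Feature.EXPLICIT_CONTENT_DETECTION,
    vi.types.Feature.FACE_DETECTION,
]
```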
- class google.cloud.videointelligence_v1beta2.types.LabelAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Label annotation.
- entity¶
Detected entity.
- category_entities¶
Common categories for the detected entity. For example, when the label is Terrier, the category is likely dog. In some cases there may be more than one category; e.g., Terrier could also be a pet.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.Entity]
- segments¶
All video segments where a label was detected.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelSegment]
- frames¶
All video frames where a label was detected.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelFrame]
- class google.cloud.videointelligence_v1beta2.types.LabelDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Config for LABEL_DETECTION.
- label_detection_mode¶
What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE.
- stationary_camera¶
Whether the video has been shot from a stationary (i.e., non-moving) camera. When set to true, this might improve detection accuracy for moving objects. Should be used with SHOT_AND_FRAME_MODE enabled.
- Type
bool
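A minimal sketch of this config; the field names follow the attributes above:

```python
from google.cloud import videointelligence_v1beta2 as vi

label_config = vi.types.LabelDetectionConfig(
    label_detection_mode=vi.types.LabelDetectionMode.SHOT_AND_FRAME_MODE,
    stationary_camera=True,  # hint that the camera does not move
)
```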
- class google.cloud.videointelligence_v1beta2.types.LabelDetectionMode(value)[source]¶
Bases:
proto.enums.Enum
Label detection mode.
- Values:
- LABEL_DETECTION_MODE_UNSPECIFIED (0):
Unspecified.
- SHOT_MODE (1):
Detect shot-level labels.
- FRAME_MODE (2):
Detect frame-level labels.
- SHOT_AND_FRAME_MODE (3):
Detect both shot-level and frame-level labels.
- class google.cloud.videointelligence_v1beta2.types.LabelFrame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video frame level annotation results for label detection.
- time_offset¶
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
- class google.cloud.videointelligence_v1beta2.types.LabelSegment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video segment level annotation results for label detection.
- segment¶
Video segment where a label was detected.
- class google.cloud.videointelligence_v1beta2.types.Likelihood(value)[source]¶
Bases:
proto.enums.Enum
Bucketized representation of likelihood.
- Values:
- LIKELIHOOD_UNSPECIFIED (0):
Unspecified likelihood.
- VERY_UNLIKELY (1):
Very unlikely.
- UNLIKELY (2):
Unlikely.
- POSSIBLE (3):
Possible.
- LIKELY (4):
Likely.
- VERY_LIKELY (5):
Very likely.
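Because the values are ordered integers, proto-plus enum members compare numerically, which allows simple thresholding; a sketch assuming `results` as above:

```python
from google.cloud import videointelligence_v1beta2 as vi

# Keep frames rated POSSIBLE or higher.
flagged = [
    frame
    for frame in results.explicit_annotation.frames
    if frame.pornography_likelihood >= vi.types.Likelihood.POSSIBLE
]
```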
- class google.cloud.videointelligence_v1beta2.types.NormalizedBoundingBox(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
- class google.cloud.videointelligence_v1beta2.types.ShotChangeDetectionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Config for SHOT_CHANGE_DETECTION.
- class google.cloud.videointelligence_v1beta2.types.VideoAnnotationProgress(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Annotation progress for a single video.
- input_uri¶
Video file location in Google Cloud Storage.
- Type
str
- progress_percent¶
Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
- Type
int
- start_time¶
Time when the request was received.
- update_time¶
Time of the most recent update.
- class google.cloud.videointelligence_v1beta2.types.VideoAnnotationResults(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Annotation results for a single video.
- input_uri¶
Video file location in Google Cloud Storage.
- Type
str
- segment_label_annotations¶
Label annotations on video level or user-specified segment level. There is exactly one element for each unique label.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelAnnotation]
- shot_label_annotations¶
Label annotations on shot level. There is exactly one element for each unique label.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelAnnotation]
- frame_label_annotations¶
Label annotations on frame level. There is exactly one element for each unique label.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.LabelAnnotation]
- face_annotations¶
Face annotations. There is exactly one element for each unique face.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.FaceAnnotation]
- shot_annotations¶
Shot annotations. Each shot is represented as a video segment.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoSegment]
- explicit_annotation¶
Explicit content annotation.
- error¶
If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail.
- Type
google.rpc.status_pb2.Status
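A sketch of consuming these results, checking the per-video error before reading annotations; `operation` is assumed as above:

```python
response = operation.result(timeout=600)

for results in response.annotation_results:
    if results.error.code:  # non-zero status code: this video failed
        print("failed:", results.input_uri, results.error.message)
        continue
    for label in results.segment_label_annotations:
        print(label.entity.entity_id, len(label.segments), "segment(s)")
```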
- class google.cloud.videointelligence_v1beta2.types.VideoContext(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video context and/or feature-specific parameters.
- segments¶
Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.
- Type
MutableSequence[google.cloud.videointelligence_v1beta2.types.VideoSegment]
- label_detection_config¶
Config for LABEL_DETECTION.
- shot_change_detection_config¶
Config for SHOT_CHANGE_DETECTION.
- explicit_content_detection_config¶
Config for EXPLICIT_CONTENT_DETECTION.
- face_detection_config¶
Config for FACE_DETECTION.
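A minimal sketch of assembling a context, where `segment` is a VideoSegment like the one sketched under VideoSegment below and the nested config reuses the types above:

```python
from google.cloud import videointelligence_v1beta2 as vi

context = vi.types.VideoContext(
    segments=[segment],  # see the VideoSegment sketch below
    label_detection_config=vi.types.LabelDetectionConfig(
        label_detection_mode=vi.types.LabelDetectionMode.SHOT_MODE,
    ),
)

operation = client.annotate_video(
    request={
        "input_uri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
        "features": [vi.types.Feature.LABEL_DETECTION],
        "video_context": context,
    }
)
```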
- class google.cloud.videointelligence_v1beta2.types.VideoSegment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Video segment.
- start_time_offset¶
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
- end_time_offset¶
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
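A minimal sketch of constructing a segment; proto-plus accepts datetime.timedelta for Duration fields such as these offsets:

```python
import datetime

from google.cloud import videointelligence_v1beta2 as vi

segment = vi.types.VideoSegment(
    start_time_offset=datetime.timedelta(seconds=15),
    end_time_offset=datetime.timedelta(seconds=45),  # inclusive end
)
```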