Types for Cloud Video Intelligence API Client
class google.cloud.videointelligence_v1p2beta1.types.AnnotateVideoProgress

    Video annotation progress. Included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

    annotation_progress
        Progress metadata for all videos specified in AnnotateVideoRequest.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoProgress.annotation_progress
class google.cloud.videointelligence_v1p2beta1.types.AnnotateVideoRequest

    Video annotation request.

    input_uri
        Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in object-id, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as input_content. If set, input_content should be unset.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.input_uri

    input_content
        The video data bytes. If unset, the input video(s) should be specified via input_uri. If set, input_uri should be unset.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.input_content

    features
        Requested video annotation features.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.features

    video_context
        Additional video context and/or feature-specific parameters.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.video_context

    output_uri
        Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket-id/object-id (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.output_uri

    location_id
        Optional cloud region where annotation should take place. Supported cloud regions: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region will be determined based on video file location.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.location_id
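    A rough sketch of how these request fields are typically supplied, assuming the GAPIC surface that accompanies these types (VideoIntelligenceServiceClient and the enums module); the bucket and object names are placeholders:

        from google.cloud import videointelligence_v1p2beta1 as videointelligence

        client = videointelligence.VideoIntelligenceServiceClient()

        # input_uri and input_content are mutually exclusive; a Cloud Storage
        # URI is used here, so input_content is left unset.
        operation = client.annotate_video(
            input_uri="gs://bucket-id/object-id",          # placeholder URI
            features=[videointelligence.enums.Feature.LABEL_DETECTION],
            output_uri="gs://bucket-id/annotations.json",  # optional JSON output location
            location_id="us-east1",                        # optional processing region
        )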
class google.cloud.videointelligence_v1p2beta1.types.AnnotateVideoResponse

    Video annotation response. Included in the response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

    annotation_results
        Annotation results for all videos specified in AnnotateVideoRequest.
        Field: google.cloud.videointelligence.v1p2beta1.AnnotateVideoResponse.annotation_results
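    Continuing the sketch above, the AnnotateVideoResponse is obtained by blocking on the returned long-running operation (the timeout value is arbitrary):

        # Blocks until annotation finishes; returns an AnnotateVideoResponse.
        response = operation.result(timeout=600)

        # One VideoAnnotationResults entry per input video.
        for annotation_result in response.annotation_results:
            print(annotation_result.input_uri)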
class google.cloud.videointelligence_v1p2beta1.types.Any

    type_url
        Field: google.protobuf.Any.type_url

    value
        Field: google.protobuf.Any.value

class google.cloud.videointelligence_v1p2beta1.types.CancelOperationRequest

    name
        Field: google.longrunning.CancelOperationRequest.name

class google.cloud.videointelligence_v1p2beta1.types.DeleteOperationRequest

    name
        Field: google.longrunning.DeleteOperationRequest.name

class google.cloud.videointelligence_v1p2beta1.types.Duration

    nanos
        Field: google.protobuf.Duration.nanos

    seconds
        Field: google.protobuf.Duration.seconds
class google.cloud.videointelligence_v1p2beta1.types.Entity

    Detected entity from video analysis.

    entity_id
        Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
        Field: google.cloud.videointelligence.v1p2beta1.Entity.entity_id

    description
        Textual description, e.g. Fixed-gear bicycle.
        Field: google.cloud.videointelligence.v1p2beta1.Entity.description

    language_code
        Language code for description in BCP-47 format.
        Field: google.cloud.videointelligence.v1p2beta1.Entity.language_code
class google.cloud.videointelligence_v1p2beta1.types.ExplicitContentAnnotation

    Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.

    frames
        All video frames where explicit content was detected.
        Field: google.cloud.videointelligence.v1p2beta1.ExplicitContentAnnotation.frames
class google.cloud.videointelligence_v1p2beta1.types.ExplicitContentDetectionConfig

    Config for EXPLICIT_CONTENT_DETECTION.

    model
        Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
        Field: google.cloud.videointelligence.v1p2beta1.ExplicitContentDetectionConfig.model
class google.cloud.videointelligence_v1p2beta1.types.ExplicitContentFrame

    Video frame level annotation results for explicit content.

    time_offset
        Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
        Field: google.cloud.videointelligence.v1p2beta1.ExplicitContentFrame.time_offset

    pornography_likelihood
        Likelihood of the pornography content.
        Field: google.cloud.videointelligence.v1p2beta1.ExplicitContentFrame.pornography_likelihood
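    For example, a short hypothetical loop over the explicit-content frames of the first video, assuming response is an AnnotateVideoResponse obtained as sketched earlier:

        frames = response.annotation_results[0].explicit_annotation.frames
        for frame in frames:
            # time_offset is a Duration; combine seconds and nanos.
            offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9
            print(f"{offset:.2f}s  likelihood={frame.pornography_likelihood}")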
class google.cloud.videointelligence_v1p2beta1.types.GetOperationRequest

    name
        Field: google.longrunning.GetOperationRequest.name
class google.cloud.videointelligence_v1p2beta1.types.LabelAnnotation

    Label annotation.

    entity
        Detected entity.
        Field: google.cloud.videointelligence.v1p2beta1.LabelAnnotation.entity

    category_entities
        Common categories for the detected entity. E.g. when the label is Terrier, the category is likely dog. In some cases there might be more than one category, e.g. Terrier could also be a pet.
        Field: google.cloud.videointelligence.v1p2beta1.LabelAnnotation.category_entities

    segments
        All video segments where a label was detected.
        Field: google.cloud.videointelligence.v1p2beta1.LabelAnnotation.segments

    frames
        All video frames where a label was detected.
        Field: google.cloud.videointelligence.v1p2beta1.LabelAnnotation.frames
class google.cloud.videointelligence_v1p2beta1.types.LabelDetectionConfig

    Config for LABEL_DETECTION.

    label_detection_mode
        What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE.
        Field: google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig.label_detection_mode

    stationary_camera
        Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with SHOT_AND_FRAME_MODE enabled.
        Field: google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig.stationary_camera

    model
        Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
        Field: google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig.model
class google.cloud.videointelligence_v1p2beta1.types.LabelFrame

    Video frame level annotation results for label detection.

    time_offset
        Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
        Field: google.cloud.videointelligence.v1p2beta1.LabelFrame.time_offset

    confidence
        Confidence that the label is accurate. Range: [0, 1].
        Field: google.cloud.videointelligence.v1p2beta1.LabelFrame.confidence
class google.cloud.videointelligence_v1p2beta1.types.LabelSegment

    Video segment level annotation results for label detection.

    segment
        Video segment where a label was detected.
        Field: google.cloud.videointelligence.v1p2beta1.LabelSegment.segment

    confidence
        Confidence that the label is accurate. Range: [0, 1].
        Field: google.cloud.videointelligence.v1p2beta1.LabelSegment.confidence
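    Putting LabelAnnotation, LabelSegment, and LabelFrame together, a sketch of reading segment-level labels (again assuming the response object from the earlier request example):

        for label in response.annotation_results[0].segment_label_annotations:
            print(f"label: {label.entity.description}")
            for category in label.category_entities:
                print(f"  category: {category.description}")
            for segment in label.segments:
                start = segment.segment.start_time_offset
                end = segment.segment.end_time_offset
                print(
                    f"  {start.seconds + start.nanos / 1e9:.1f}s - "
                    f"{end.seconds + end.nanos / 1e9:.1f}s "
                    f"(confidence {segment.confidence:.2f})"
                )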
class google.cloud.videointelligence_v1p2beta1.types.ListOperationsRequest

    filter
        Field: google.longrunning.ListOperationsRequest.filter

    name
        Field: google.longrunning.ListOperationsRequest.name

    page_size
        Field: google.longrunning.ListOperationsRequest.page_size

    page_token
        Field: google.longrunning.ListOperationsRequest.page_token

class google.cloud.videointelligence_v1p2beta1.types.ListOperationsResponse

    next_page_token
        Field: google.longrunning.ListOperationsResponse.next_page_token

    operations
        Field: google.longrunning.ListOperationsResponse.operations
class google.cloud.videointelligence_v1p2beta1.types.NormalizedBoundingBox

    Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].

    left
        Left X coordinate.
        Field: google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.left

    top
        Top Y coordinate.
        Field: google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.top

    right
        Right X coordinate.
        Field: google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.right

    bottom
        Bottom Y coordinate.
        Field: google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.bottom
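    Because the coordinates are normalized, converting a box to pixels requires the frame dimensions, which are not part of the API response; a minimal helper sketch (frame_width and frame_height are supplied by the caller):

        def to_pixel_box(box, frame_width, frame_height):
            """Convert a NormalizedBoundingBox to (left, top, right, bottom) pixels."""
            return (
                int(box.left * frame_width),
                int(box.top * frame_height),
                int(box.right * frame_width),
                int(box.bottom * frame_height),
            )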
class google.cloud.videointelligence_v1p2beta1.types.NormalizedBoundingPoly

    Normalized bounding polygon for text (that might not be aligned with axis). Contains the list of corner points in clockwise order starting from the top-left corner. For example, for a rectangular bounding box, when the text is horizontal it might look like:

        0----1
        |    |
        3----2

    When it is rotated 180 degrees clockwise around the top-left corner it becomes:

        2----3
        |    |
        1----0

    and the vertex order will still be (0, 1, 2, 3). Note that values can be less than 0, or greater than 1, due to trigonometric calculations for the location of the box.

    vertices
        Normalized vertices of the bounding polygon.
        Field: google.cloud.videointelligence.v1p2beta1.NormalizedBoundingPoly.vertices
class google.cloud.videointelligence_v1p2beta1.types.NormalizedVertex

    A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

    x
        X coordinate.
        Field: google.cloud.videointelligence.v1p2beta1.NormalizedVertex.x

    y
        Y coordinate.
        Field: google.cloud.videointelligence.v1p2beta1.NormalizedVertex.y
class google.cloud.videointelligence_v1p2beta1.types.ObjectTrackingAnnotation

    Annotations corresponding to one tracked object.

    entity
        Entity to specify the object category that this track is labeled as.
        Field: google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.entity

    confidence
        Object category's labeling confidence of this track.
        Field: google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.confidence

    frames
        Information corresponding to all frames where this object track appears.
        Field: google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.frames

    segment
        Each object track corresponds to one video segment where it appears.
        Field: google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.segment
class google.cloud.videointelligence_v1p2beta1.types.ObjectTrackingFrame

    Video frame level annotations for object detection and tracking. This field stores per frame location, time offset, and confidence.

    normalized_bounding_box
        The normalized bounding box location of this object track for the frame.
        Field: google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame.normalized_bounding_box

    time_offset
        The timestamp of the frame in microseconds.
        Field: google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame.time_offset
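    A sketch of walking the object-tracking results (assuming the response object from the earlier request example, with OBJECT_TRACKING requested):

        for obj in response.annotation_results[0].object_annotations:
            print(f"{obj.entity.description} (confidence {obj.confidence:.2f})")
            for frame in obj.frames:
                box = frame.normalized_bounding_box
                offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9
                print(
                    f"  {offset:.2f}s  box=({box.left:.2f}, {box.top:.2f}, "
                    f"{box.right:.2f}, {box.bottom:.2f})"
                )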
class google.cloud.videointelligence_v1p2beta1.types.Operation

    deserialize()
        Creates a new message instance from the given serialized data.

    done
        Field: google.longrunning.Operation.done

    error
        Field: google.longrunning.Operation.error

    metadata
        Field: google.longrunning.Operation.metadata

    name
        Field: google.longrunning.Operation.name

    response
        Field: google.longrunning.Operation.response
class google.cloud.videointelligence_v1p2beta1.types.OperationInfo

    metadata_type
        Field: google.longrunning.OperationInfo.metadata_type

    response_type
        Field: google.longrunning.OperationInfo.response_type
class google.cloud.videointelligence_v1p2beta1.types.ShotChangeDetectionConfig

    Config for SHOT_CHANGE_DETECTION.

    model
        Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
        Field: google.cloud.videointelligence.v1p2beta1.ShotChangeDetectionConfig.model
class google.cloud.videointelligence_v1p2beta1.types.Status

    code
        Field: google.rpc.Status.code

    details
        Field: google.rpc.Status.details

    message
        Field: google.rpc.Status.message
class google.cloud.videointelligence_v1p2beta1.types.TextAnnotation

    Annotations related to one detected OCR text snippet. This will contain the corresponding text, confidence value, and frame level information for each detection.

    text
        The detected text.
        Field: google.cloud.videointelligence.v1p2beta1.TextAnnotation.text

    segments
        All video segments where OCR detected text appears.
        Field: google.cloud.videointelligence.v1p2beta1.TextAnnotation.segments
class google.cloud.videointelligence_v1p2beta1.types.TextDetectionConfig

    Config for TEXT_DETECTION.

    language_hints
        A language hint can be specified if the language to be detected is known a priori; it can increase the accuracy of detection. The language hint must be a language code in BCP-47 format. Automatic language detection is performed if no hint is provided.
        Field: google.cloud.videointelligence.v1p2beta1.TextDetectionConfig.language_hints
class google.cloud.videointelligence_v1p2beta1.types.TextFrame

    Video frame level annotation results for text annotation (OCR). Contains information regarding timestamp and bounding box locations for the frames containing detected OCR text snippets.

    rotated_bounding_box
        Bounding polygon of the detected text for this frame.
        Field: google.cloud.videointelligence.v1p2beta1.TextFrame.rotated_bounding_box

    time_offset
        Timestamp of this frame.
        Field: google.cloud.videointelligence.v1p2beta1.TextFrame.time_offset
class google.cloud.videointelligence_v1p2beta1.types.TextSegment

    Video segment level annotation results for text detection.

    segment
        Video segment where a text snippet was detected.
        Field: google.cloud.videointelligence.v1p2beta1.TextSegment.segment

    confidence
        Confidence for the track of detected text. It is calculated as the highest over all frames where OCR detected text appears.
        Field: google.cloud.videointelligence.v1p2beta1.TextSegment.confidence

    frames
        Information related to the frames where OCR detected text appears.
        Field: google.cloud.videointelligence.v1p2beta1.TextSegment.frames
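    A sketch of reading OCR results via TextAnnotation, TextSegment, and TextFrame (assuming TEXT_DETECTION was requested and response is the object from the earlier request example):

        for text_annotation in response.annotation_results[0].text_annotations:
            print(f"text: {text_annotation.text!r}")
            for segment in text_annotation.segments:
                print(f"  confidence: {segment.confidence:.2f}")
                for frame in segment.frames:
                    vertices = frame.rotated_bounding_box.vertices
                    corners = ", ".join(f"({v.x:.2f}, {v.y:.2f})" for v in vertices)
                    print(f"    corners: {corners}")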
class google.cloud.videointelligence_v1p2beta1.types.Timestamp

    nanos
        Field: google.protobuf.Timestamp.nanos

    seconds
        Field: google.protobuf.Timestamp.seconds
class google.cloud.videointelligence_v1p2beta1.types.VideoAnnotationProgress

    Annotation progress for a single video.

    input_uri
        Video file location in Google Cloud Storage.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.input_uri

    progress_percent
        Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.progress_percent

    start_time
        Time when the request was received.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.start_time

    update_time
        Time of the most recent update.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.update_time
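    Progress can be inspected before the operation completes; the sketch below assumes the operation future returned by annotate_video exposes the AnnotateVideoProgress message via its metadata attribute:

        progress = operation.metadata  # AnnotateVideoProgress
        for video_progress in progress.annotation_progress:
            print(f"{video_progress.input_uri}: {video_progress.progress_percent}%")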
class google.cloud.videointelligence_v1p2beta1.types.VideoAnnotationResults

    Annotation results for a single video.

    input_uri
        Video file location in Google Cloud Storage.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.input_uri

    segment_label_annotations
        Label annotations on video level or user specified segment level. There is exactly one element for each unique label.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.segment_label_annotations

    shot_label_annotations
        Label annotations on shot level. There is exactly one element for each unique label.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.shot_label_annotations

    frame_label_annotations
        Label annotations on frame level. There is exactly one element for each unique label.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.frame_label_annotations

    shot_annotations
        Shot annotations. Each shot is represented as a video segment.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.shot_annotations

    explicit_annotation
        Explicit content annotation.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.explicit_annotation

    text_annotations
        OCR text detection and tracking. Annotations for the list of detected text snippets. Each will have a list of frame information associated with it.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.text_annotations

    object_annotations
        Annotations for the list of objects detected and tracked in the video.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.object_annotations

    error
        If set, indicates an error. Note that for a single AnnotateVideoRequest some videos may succeed and some may fail.
        Field: google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.error
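    Because individual videos in a single request can fail independently, a defensive sketch checks the error field of each entry (a google.rpc.Status code of 0 means OK):

        for annotation_result in response.annotation_results:
            if annotation_result.error.code != 0:
                print(
                    f"{annotation_result.input_uri} failed: "
                    f"{annotation_result.error.message}"
                )
            else:
                print(f"{annotation_result.input_uri} succeeded")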
class google.cloud.videointelligence_v1p2beta1.types.VideoContext

    Video context and/or feature-specific parameters.

    segments
        Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.
        Field: google.cloud.videointelligence.v1p2beta1.VideoContext.segments

    label_detection_config
        Config for LABEL_DETECTION.
        Field: google.cloud.videointelligence.v1p2beta1.VideoContext.label_detection_config

    shot_change_detection_config
        Config for SHOT_CHANGE_DETECTION.
        Field: google.cloud.videointelligence.v1p2beta1.VideoContext.shot_change_detection_config

    explicit_content_detection_config
        Config for EXPLICIT_CONTENT_DETECTION.
        Field: google.cloud.videointelligence.v1p2beta1.VideoContext.explicit_content_detection_config

    text_detection_config
        Config for TEXT_DETECTION.
        Field: google.cloud.videointelligence.v1p2beta1.VideoContext.text_detection_config
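    A sketch of assembling a VideoContext and passing it through the video_context request field; the specific segment times and language hint are illustrative assumptions:

        from google.cloud import videointelligence_v1p2beta1 as videointelligence

        types = videointelligence.types
        enums = videointelligence.enums

        context = types.VideoContext(
            # Only annotate the first 30 seconds of the video.
            segments=[
                types.VideoSegment(
                    start_time_offset=types.Duration(seconds=0),
                    end_time_offset=types.Duration(seconds=30),
                )
            ],
            label_detection_config=types.LabelDetectionConfig(
                label_detection_mode=enums.LabelDetectionMode.SHOT_AND_FRAME_MODE,
                stationary_camera=True,
                model="builtin/stable",
            ),
            text_detection_config=types.TextDetectionConfig(
                language_hints=["en-US"],
            ),
        )

        # client as created in the earlier request sketch.
        operation = client.annotate_video(
            input_uri="gs://bucket-id/object-id",
            features=[enums.Feature.LABEL_DETECTION, enums.Feature.TEXT_DETECTION],
            video_context=context,
        )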
class google.cloud.videointelligence_v1p2beta1.types.VideoSegment

    Video segment.

    start_time_offset
        Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
        Field: google.cloud.videointelligence.v1p2beta1.VideoSegment.start_time_offset

    end_time_offset
        Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
        Field: google.cloud.videointelligence.v1p2beta1.VideoSegment.end_time_offset