google.cloud.videointelligence.v1beta2
Properties
Feature number
Video annotation feature.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| FEATURE_UNSPECIFIED | | | Unspecified. |
| LABEL_DETECTION | | | Label detection. Detect objects, such as dog or flower. |
| SHOT_CHANGE_DETECTION | | | Shot change detection. |
| EXPLICIT_CONTENT_DETECTION | | | Explicit content detection. |
| FACE_DETECTION | | | Human face detection and tracking. |
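Because request fields such as `features` take these enum values as plain numbers, a small lookup table is convenient. The numeric values below are an assumption based on the usual 0-based proto declaration order; in real code, read them from the client library's generated types rather than hard-coding them:

```javascript
// Sketch of the Feature enum as plain numbers (assumed proto ordering).
const Feature = Object.freeze({
  FEATURE_UNSPECIFIED: 0,
  LABEL_DETECTION: 1,
  SHOT_CHANGE_DETECTION: 2,
  EXPLICIT_CONTENT_DETECTION: 3,
  FACE_DETECTION: 4,
});

// Turn a list of feature names into the numeric array that a request's
// `features` field expects; reject unknown names early.
function featureNumbers(names) {
  return names.map((name) => {
    if (!(name in Feature)) throw new Error(`Unknown feature: ${name}`);
    return Feature[name];
  });
}

console.log(featureNumbers(['LABEL_DETECTION', 'SHOT_CHANGE_DETECTION'])); // [ 1, 2 ]
```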
LabelDetectionMode number
Label detection mode.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| LABEL_DETECTION_MODE_UNSPECIFIED | | | Unspecified. |
| SHOT_MODE | | | Detect shot-level labels. |
| FRAME_MODE | | | Detect frame-level labels. |
| SHOT_AND_FRAME_MODE | | | Detect both shot-level and frame-level labels. |
Likelihood number
Bucketized representation of likelihood.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| LIKELIHOOD_UNSPECIFIED | | | Unspecified likelihood. |
| VERY_UNLIKELY | | | Very unlikely. |
| UNLIKELY | | | Unlikely. |
| POSSIBLE | | | Possible. |
| LIKELY | | | Likely. |
| VERY_LIKELY | | | Very likely. |
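Fields such as `pornographyLikelihood` carry these buckets as numbers, so a reverse lookup to the enum name is often useful when logging results. The numeric values below assume the usual 0-based proto declaration order; prefer the client library's generated types in real code:

```javascript
// Bucketized likelihood as numbers (assumed proto ordering).
const Likelihood = Object.freeze({
  LIKELIHOOD_UNSPECIFIED: 0,
  VERY_UNLIKELY: 1,
  UNLIKELY: 2,
  POSSIBLE: 3,
  LIKELY: 4,
  VERY_LIKELY: 5,
});

// Reverse lookup: numeric bucket -> enum name (undefined if out of range).
function likelihoodName(value) {
  const entry = Object.entries(Likelihood).find(([, v]) => v === value);
  return entry ? entry[0] : undefined;
}

console.log(likelihoodName(4)); // LIKELY
```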
Abstract types
AnnotateVideoProgress
Video annotation progress. Included in the metadata
field of the Operation returned by the GetOperation
call of the google::longrunning::Operations service.
Property
| Name | Type | Optional | Description |
|---|---|---|---|
| annotationProgress | Array of Object | | Progress metadata for all videos specified in `AnnotateVideoRequest`. This object should have the same structure as VideoAnnotationProgress. |
AnnotateVideoRequest
Video annotation request.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| inputUri | string | | Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. |
| inputContent | string | | The video data bytes. If unset, the input video(s) should be specified via `inputUri`. |
| features | Array of number | | Requested video annotation features. The number should be among the values of Feature. |
| videoContext | Object | | Additional video context and/or feature-specific parameters. This object should have the same structure as VideoContext. |
| outputUri | string | | Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. |
| locationId | string | | Optional cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region is specified, a region will be determined based on video file location. |
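Put together, a request is a plain object using the field names above. This is a sketch only: the bucket and object names are placeholders, and the numeric feature and mode values assume the usual 0-based proto ordering rather than coming from the generated client types:

```javascript
// Minimal AnnotateVideoRequest literal. URIs are hypothetical placeholders;
// 1 and 2 are the assumed numeric values of LABEL_DETECTION and
// SHOT_CHANGE_DETECTION, and mode 1 is the assumed SHOT_MODE.
const request = {
  inputUri: 'gs://my-bucket/my-video.mp4',
  features: [1, 2],
  locationId: 'us-east1',
  videoContext: {
    labelDetectionConfig: {
      labelDetectionMode: 1,
      model: 'builtin/stable',
    },
  },
};

console.log(JSON.stringify(request.features)); // [1,2]
```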
AnnotateVideoResponse
Video annotation response. Included in the response
field of the Operation returned by the GetOperation
call of the google::longrunning::Operations service.
Property
| Name | Type | Optional | Description |
|---|---|---|---|
| annotationResults | Array of Object | | Annotation results for all videos specified in `AnnotateVideoRequest`. This object should have the same structure as VideoAnnotationResults. |
Entity
Detected entity from video analysis.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| entityId | string | | Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API. |
| description | string | | Textual description, e.g. `Fixed-gear bicycle`. |
| languageCode | string | | Language code for `description` in BCP-47 format. |
ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
Property
| Name | Type | Optional | Description |
|---|---|---|---|
| frames | Array of Object | | All video frames where explicit content was detected. This object should have the same structure as ExplicitContentFrame. |
- See also: google.cloud.videointelligence.v1beta2.ExplicitContentAnnotation definition in proto format
ExplicitContentDetectionConfig
Config for EXPLICIT_CONTENT_DETECTION.
Property
| Name | Type | Optional | Description |
|---|---|---|---|
| model | string | | Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
- See also: google.cloud.videointelligence.v1beta2.ExplicitContentDetectionConfig definition in proto format
ExplicitContentFrame
Video frame level annotation results for explicit content.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| timeOffset | Object | | Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration. |
| pornographyLikelihood | number | | Likelihood of the pornography content. The number should be among the values of Likelihood. |
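A typical use of these frames is to collect the time offsets where the bucketized likelihood crosses a threshold. The sketch below assumes `POSSIBLE` has the numeric value 3 (0-based proto ordering) and that `Duration` objects carry `seconds` and `nanos` fields; the sample annotation is made up:

```javascript
// Assumed numeric value of Likelihood.POSSIBLE.
const POSSIBLE = 3;

// Return offsets (in seconds) of frames at or above the given bucket.
function flaggedOffsets(annotation) {
  return (annotation.frames || [])
    .filter((f) => f.pornographyLikelihood >= POSSIBLE)
    .map((f) => (f.timeOffset.seconds || 0) + (f.timeOffset.nanos || 0) / 1e9);
}

// Made-up ExplicitContentAnnotation-shaped sample.
const annotation = {
  frames: [
    { timeOffset: { seconds: 1, nanos: 0 }, pornographyLikelihood: 1 },
    { timeOffset: { seconds: 2, nanos: 500000000 }, pornographyLikelihood: 4 },
  ],
};

console.log(flaggedOffsets(annotation)); // [ 2.5 ]
```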
FaceAnnotation
Face annotation.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| thumbnail | string | | Thumbnail of a representative face view (in JPEG format). |
| segments | Array of Object | | All video segments where a face was detected. This object should have the same structure as FaceSegment. |
| frames | Array of Object | | All video frames where a face was detected. This object should have the same structure as FaceFrame. |
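A quick way to report on a face annotation is to reduce it to counts and a thumbnail flag. The shapes follow the table above; the sample object (including the truncated thumbnail string) is made up for illustration:

```javascript
// Summarize a FaceAnnotation-shaped object for reporting.
function summarizeFace(face) {
  return {
    hasThumbnail: Boolean(face.thumbnail),
    segmentCount: (face.segments || []).length,
    frameCount: (face.frames || []).length,
  };
}

// Made-up sample; the thumbnail is a placeholder, not real JPEG data.
const sample = {
  thumbnail: '/9j/placeholder',
  segments: [{ segment: {} }],
  frames: [{ timeOffset: { seconds: 0 } }, { timeOffset: { seconds: 1 } }],
};

console.log(summarizeFace(sample)); // { hasThumbnail: true, segmentCount: 1, frameCount: 2 }
```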
FaceDetectionConfig
Config for FACE_DETECTION.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| model | string | | Model to use for face detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
| includeBoundingBoxes | boolean | | Whether bounding boxes should be included in the face annotation output. |
FaceFrame
Video frame level annotation results for face detection.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| normalizedBoundingBoxes | Array of Object | | Normalized bounding boxes in a frame. There can be more than one box if the same face is detected in multiple locations within the current frame. This object should have the same structure as NormalizedBoundingBox. |
| timeOffset | Object | | Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration. |
FaceSegment
Video segment level annotation results for face detection.
Property
| Name | Type | Optional | Description |
|---|---|---|---|
| segment | Object | | Video segment where a face was detected. This object should have the same structure as VideoSegment. |
LabelAnnotation
Label annotation.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| entity | Object | | Detected entity. This object should have the same structure as Entity. |
| categoryEntities | Array of Object | | Common categories for the detected entity. E.g. when the label is `Terrier`, the category is likely `dog`; in some cases there might be more than one category, e.g. `Terrier` could also be a `pet`. This object should have the same structure as Entity. |
| segments | Array of Object | | All video segments where a label was detected. This object should have the same structure as LabelSegment. |
| frames | Array of Object | | All video frames where a label was detected. This object should have the same structure as LabelFrame. |
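For display or export, a label annotation is often flattened into one row per segment, pairing the entity description with the segment confidence. The shapes follow the tables above; the sample annotation is made up:

```javascript
// Flatten a LabelAnnotation-shaped object into rows of
// { description, categories, confidence }.
function labelRows(label) {
  const categories = (label.categoryEntities || []).map((e) => e.description);
  return (label.segments || []).map((seg) => ({
    description: label.entity.description,
    categories,
    confidence: seg.confidence,
  }));
}

// Made-up sample; the entityId is a placeholder.
const sample = {
  entity: { entityId: '/m/placeholder', description: 'dog', languageCode: 'en-US' },
  categoryEntities: [{ description: 'animal' }],
  segments: [{ segment: {}, confidence: 0.97 }],
};

console.log(labelRows(sample));
```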
LabelDetectionConfig
Config for LABEL_DETECTION.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| labelDetectionMode | number | | What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`. The number should be among the values of LabelDetectionMode. |
| stationaryCamera | boolean | | Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with `SHOT_AND_FRAME_MODE` enabled. |
| model | string | | Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
LabelFrame
Video frame level annotation results for label detection.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| timeOffset | Object | | Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration. |
| confidence | number | | Confidence that the label is accurate. Range: [0, 1]. |
LabelSegment
Video segment level annotation results for label detection.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| segment | Object | | Video segment where a label was detected. This object should have the same structure as VideoSegment. |
| confidence | number | | Confidence that the label is accurate. Range: [0, 1]. |
NormalizedBoundingBox
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| left | number | | Left X coordinate. |
| top | number | | Top Y coordinate. |
| right | number | | Right X coordinate. |
| bottom | number | | Bottom Y coordinate. |
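Since the coordinates are normalized to [0, 1] relative to the original image, drawing a box on a specific frame means scaling by that frame's pixel dimensions:

```javascript
// Convert a NormalizedBoundingBox ([0, 1] coordinates) to pixel
// coordinates for a frame of the given width and height.
function toPixels(box, width, height) {
  return {
    left: Math.round(box.left * width),
    top: Math.round(box.top * height),
    right: Math.round(box.right * width),
    bottom: Math.round(box.bottom * height),
  };
}

console.log(toPixels({ left: 0.1, top: 0.2, right: 0.5, bottom: 0.8 }, 1920, 1080));
// { left: 192, top: 216, right: 960, bottom: 864 }
```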
ShotChangeDetectionConfig
Config for SHOT_CHANGE_DETECTION.
Property
| Name | Type | Optional | Description |
|---|---|---|---|
| model | string | | Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
- See also: google.cloud.videointelligence.v1beta2.ShotChangeDetectionConfig definition in proto format
VideoAnnotationProgress
Annotation progress for a single video.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| inputUri | string | | Video file location in Google Cloud Storage. |
| progressPercent | number | | Approximate percentage processed thus far. Guaranteed to be 100 when fully processed. |
| startTime | Object | | Time when the request was received. This object should have the same structure as Timestamp. |
| updateTime | Object | | Time of the most recent update. This object should have the same structure as Timestamp. |
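When a request covers several videos, the per-video `progressPercent` values can be averaged into a single figure for a progress bar. A minimal sketch over objects shaped like the table above:

```javascript
// Average progressPercent across per-video progress entries, as found
// in AnnotateVideoProgress.annotationProgress. Empty input yields 0.
function overallProgress(entries) {
  if (!entries || entries.length === 0) return 0;
  const total = entries.reduce((sum, e) => sum + (e.progressPercent || 0), 0);
  return total / entries.length;
}

console.log(overallProgress([{ progressPercent: 100 }, { progressPercent: 50 }])); // 75
```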
VideoAnnotationResults
Annotation results for a single video.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| inputUri | string | | Video file location in Google Cloud Storage. |
| segmentLabelAnnotations | Array of Object | | Label annotations on video level or user specified segment level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation. |
| shotLabelAnnotations | Array of Object | | Label annotations on shot level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation. |
| frameLabelAnnotations | Array of Object | | Label annotations on frame level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation. |
| faceAnnotations | Array of Object | | Face annotations. There is exactly one element for each unique face. This object should have the same structure as FaceAnnotation. |
| shotAnnotations | Array of Object | | Shot annotations. Each shot is represented as a video segment. This object should have the same structure as VideoSegment. |
| explicitAnnotation | Object | | Explicit content annotation. This object should have the same structure as ExplicitContentAnnotation. |
| error | Object | | If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail. This object should have the same structure as Status. |
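Because a single request can mix successes and failures, callers should check each result's `error` field rather than assuming the whole batch succeeded. A sketch over objects shaped like the table above; the sample entries are made up:

```javascript
// Split VideoAnnotationResults entries into successes and per-video failures.
function partitionResults(annotationResults) {
  const ok = [];
  const failed = [];
  for (const result of annotationResults) {
    (result.error ? failed : ok).push(result);
  }
  return { ok, failed };
}

// Made-up sample: one success, one failure with a Status-shaped error.
const results = [
  { inputUri: 'gs://my-bucket/a.mp4', shotAnnotations: [] },
  { inputUri: 'gs://my-bucket/b.mp4', error: { code: 3, message: 'invalid input' } },
];

console.log(partitionResults(results).failed.length); // 1
```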
VideoContext
Video context and/or feature-specific parameters.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| segments | Array of Object | | Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment. This object should have the same structure as VideoSegment. |
| labelDetectionConfig | Object | | Config for LABEL_DETECTION. This object should have the same structure as LabelDetectionConfig. |
| shotChangeDetectionConfig | Object | | Config for SHOT_CHANGE_DETECTION. This object should have the same structure as ShotChangeDetectionConfig. |
| explicitContentDetectionConfig | Object | | Config for EXPLICIT_CONTENT_DETECTION. This object should have the same structure as ExplicitContentDetectionConfig. |
| faceDetectionConfig | Object | | Config for FACE_DETECTION. This object should have the same structure as FaceDetectionConfig. |
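A `VideoContext` combines optional segments with per-feature configs, all as a plain object. This is a sketch: the segment bounds are arbitrary, and mode 3 is the assumed numeric value of `SHOT_AND_FRAME_MODE` (0-based proto ordering), not a value read from the generated types:

```javascript
// Sketch of a VideoContext restricting analysis to the first 30 seconds
// and configuring two features. Duration objects use { seconds, nanos }.
const videoContext = {
  segments: [
    { startTimeOffset: { seconds: 0 }, endTimeOffset: { seconds: 30 } },
  ],
  labelDetectionConfig: {
    labelDetectionMode: 3,      // assumed SHOT_AND_FRAME_MODE
    stationaryCamera: true,
    model: 'builtin/stable',
  },
  explicitContentDetectionConfig: { model: 'builtin/latest' },
};

console.log(videoContext.segments.length); // 1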
VideoSegment
Video segment.
Properties
| Name | Type | Optional | Description |
|---|---|---|---|
| startTimeOffset | Object | | Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive). This object should have the same structure as Duration. |
| endTimeOffset | Object | | Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive). This object should have the same structure as Duration. |
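Since both offsets are `Duration` objects (`{ seconds, nanos }`), a segment's length in seconds is the difference of the two converted offsets:

```javascript
// Convert a Duration-shaped object to fractional seconds.
function durationSeconds(d) {
  return (d.seconds || 0) + (d.nanos || 0) / 1e9;
}

// Length of a VideoSegment in seconds (end minus start).
function segmentLength(segment) {
  return durationSeconds(segment.endTimeOffset) - durationSeconds(segment.startTimeOffset);
}

console.log(segmentLength({
  startTimeOffset: { seconds: 1, nanos: 500000000 },
  endTimeOffset: { seconds: 4, nanos: 0 },
})); // 2.5
```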