Members
(static, constant) Feature :number
Video annotation feature.
Properties:
Name | Type | Description |
---|---|---|
FEATURE_UNSPECIFIED | number | Unspecified. |
LABEL_DETECTION | number | Label detection. Detect objects, such as dog or flower. |
SHOT_CHANGE_DETECTION | number | Shot change detection. |
EXPLICIT_CONTENT_DETECTION | number | Explicit content detection. |
FACE_DETECTION | number | Human face detection and tracking. |
(static, constant) LabelDetectionMode :number
Label detection mode.
Properties:
Name | Type | Description |
---|---|---|
LABEL_DETECTION_MODE_UNSPECIFIED | number | Unspecified. |
SHOT_MODE | number | Detect shot-level labels. |
FRAME_MODE | number | Detect frame-level labels. |
SHOT_AND_FRAME_MODE | number | Detect both shot-level and frame-level labels. |
(static, constant) Likelihood :number
Bucketized representation of likelihood.
Properties:
Name | Type | Description |
---|---|---|
LIKELIHOOD_UNSPECIFIED | number | Unspecified likelihood. |
VERY_UNLIKELY | number | Very unlikely. |
UNLIKELY | number | Unlikely. |
POSSIBLE | number | Possible. |
LIKELY | number | Likely. |
VERY_LIKELY | number | Very likely. |
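As a sketch of how these buckets are typically consumed, the snippet below filters explicit-content frames at or above a chosen likelihood. The numeric enum values (0 through 5, assumed to follow the order listed above) and the sample frames array are assumptions for illustration, not real API output.

```javascript
// Hypothetical numeric values for the Likelihood enum, assumed to follow
// the order listed above (0 = LIKELIHOOD_UNSPECIFIED ... 5 = VERY_LIKELY).
const Likelihood = {
  LIKELIHOOD_UNSPECIFIED: 0,
  VERY_UNLIKELY: 1,
  UNLIKELY: 2,
  POSSIBLE: 3,
  LIKELY: 4,
  VERY_LIKELY: 5,
};

// Keep only frames whose likelihood meets or exceeds a threshold.
function framesAtLeast(frames, threshold) {
  return frames.filter(f => f.pornographyLikelihood >= threshold);
}

// Sample ExplicitContentFrame-shaped objects (illustrative data only).
const frames = [
  { timeOffset: { seconds: 1 }, pornographyLikelihood: Likelihood.VERY_UNLIKELY },
  { timeOffset: { seconds: 2 }, pornographyLikelihood: Likelihood.LIKELY },
];
console.log(framesAtLeast(frames, Likelihood.POSSIBLE).length); // 1
```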
Type Definitions
AnnotateVideoProgress
Video annotation progress. Included in the metadata
field of the Operation
returned by the GetOperation
call of the google::longrunning::Operations
service.
Properties:
Name | Type | Description |
---|---|---|
annotationProgress | Array.&lt;Object&gt; | Progress metadata for all videos specified in `AnnotateVideoRequest`. This object should have the same structure as VideoAnnotationProgress |
AnnotateVideoRequest
Video annotation request.
Properties:
Name | Type | Description |
---|---|---|
inputUri | string | Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. |
inputContent | Buffer | The video data bytes. If unset, the input video(s) should be specified via `inputUri`. |
features | Array.&lt;number&gt; | Requested video annotation features. The number should be among the values of Feature |
videoContext | Object | Additional video context and/or feature-specific parameters. This object should have the same structure as VideoContext |
outputUri | string | Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. |
locationId | string | Optional cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region is specified, a region will be determined based on video file location. |
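Taken together, a request assembled from these fields might look like the sketch below. The bucket path and region are placeholders, and the numeric feature and mode values are assumptions taken from the enum orderings listed above, not values confirmed by this page.

```javascript
// A minimal AnnotateVideoRequest-shaped object. Enum values are assumed to
// follow the order listed above (1 = LABEL_DETECTION, 2 = SHOT_CHANGE_DETECTION,
// 3 = SHOT_AND_FRAME_MODE for labelDetectionMode).
const request = {
  inputUri: 'gs://my-bucket/my-video.mp4', // placeholder bucket/object
  features: [1, 2],                        // LABEL_DETECTION, SHOT_CHANGE_DETECTION
  videoContext: {
    labelDetectionConfig: {
      labelDetectionMode: 3, // SHOT_AND_FRAME_MODE (assumed value)
      stationaryCamera: false,
    },
  },
  locationId: 'us-east1',
};

// Exactly one of inputUri / inputContent should be set on a request.
function hasExactlyOneInput(req) {
  return Boolean(req.inputUri) !== Boolean(req.inputContent);
}
console.log(hasExactlyOneInput(request)); // true
```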
AnnotateVideoResponse
Video annotation response. Included in the response
field of the Operation
returned by the GetOperation
call of the google::longrunning::Operations
service.
Properties:
Name | Type | Description |
---|---|---|
annotationResults | Array.&lt;Object&gt; | Annotation results for all videos specified in `AnnotateVideoRequest`. This object should have the same structure as VideoAnnotationResults |
Entity
Detected entity from video analysis.
Properties:
Name | Type | Description |
---|---|---|
entityId | string | Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API. |
description | string | Textual description, e.g. `Fixed Camera`. |
languageCode | string | Language code for `description` in BCP-47 format. |
ExplicitContentAnnotation
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
Properties:
Name | Type | Description |
---|---|---|
frames | Array.&lt;Object&gt; | All video frames where explicit content was detected. This object should have the same structure as ExplicitContentFrame |
ExplicitContentDetectionConfig
Config for EXPLICIT_CONTENT_DETECTION.
Properties:
Name | Type | Description |
---|---|---|
model | string | Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
ExplicitContentFrame
Video frame level annotation results for explicit content.
Properties:
Name | Type | Description |
---|---|---|
timeOffset | Object | Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration |
pornographyLikelihood | number | Likelihood of the pornography content. The number should be among the values of Likelihood |
FaceAnnotation
Face annotation.
Properties:
Name | Type | Description |
---|---|---|
thumbnail | Buffer | Thumbnail of a representative face view (in JPEG format). |
segments | Array.&lt;Object&gt; | All video segments where a face was detected. This object should have the same structure as FaceSegment |
frames | Array.&lt;Object&gt; | All video frames where a face was detected. This object should have the same structure as FaceFrame |
FaceDetectionConfig
Config for FACE_DETECTION.
Properties:
Name | Type | Description |
---|---|---|
model | string | Model to use for face detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
includeBoundingBoxes | boolean | Whether bounding boxes should be included in the face annotation output. |
FaceFrame
Video frame level annotation results for face detection.
Properties:
Name | Type | Description |
---|---|---|
normalizedBoundingBoxes | Array.&lt;Object&gt; | Normalized bounding boxes in a frame. There can be more than one box if the same face is detected in multiple locations within the current frame. This object should have the same structure as NormalizedBoundingBox |
timeOffset | Object | Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration |
FaceSegment
Video segment level annotation results for face detection.
Properties:
Name | Type | Description |
---|---|---|
segment | Object | Video segment where a face was detected. This object should have the same structure as VideoSegment |
LabelAnnotation
Label annotation.
Properties:
Name | Type | Description |
---|---|---|
entity | Object | Detected entity. This object should have the same structure as Entity |
categoryEntities | Array.&lt;Object&gt; | Common categories for the detected entity. E.g. when the label is `Terrier`, the category is likely `dog`; in some cases there may be more than one category, e.g. `Terrier` could also be a `pet`. This object should have the same structure as Entity |
segments | Array.&lt;Object&gt; | All video segments where a label was detected. This object should have the same structure as LabelSegment |
frames | Array.&lt;Object&gt; | All video frames where a label was detected. This object should have the same structure as LabelFrame |
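A common consumer-side task is picking, for each label, the segment where it was detected with the highest confidence. The helper and sample annotation below are illustrative, not part of the library:

```javascript
// Sketch: return the LabelSegment with the highest confidence inside a
// LabelAnnotation-shaped object. Helper name and data are illustrative.
function bestSegment(annotation) {
  return annotation.segments.reduce((best, s) =>
    s.confidence > best.confidence ? s : best);
}

const annotation = {
  entity: { description: 'dog' },
  segments: [
    { segment: { startTimeOffset: { seconds: 0 }, endTimeOffset: { seconds: 3 } }, confidence: 0.62 },
    { segment: { startTimeOffset: { seconds: 7 }, endTimeOffset: { seconds: 9 } }, confidence: 0.91 },
  ],
};
console.log(bestSegment(annotation).confidence); // 0.91
```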
LabelDetectionConfig
Config for LABEL_DETECTION.
Properties:
Name | Type | Description |
---|---|---|
labelDetectionMode | number | What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`. The number should be among the values of LabelDetectionMode |
stationaryCamera | boolean | Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with `SHOT_AND_FRAME_MODE` enabled. |
model | string | Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
LabelFrame
Video frame level annotation results for label detection.
Properties:
Name | Type | Description |
---|---|---|
timeOffset | Object | Time-offset, relative to the beginning of the video, corresponding to the video frame for this location. This object should have the same structure as Duration |
confidence | number | Confidence that the label is accurate. Range: [0, 1]. |
LabelSegment
Video segment level annotation results for label detection.
Properties:
Name | Type | Description |
---|---|---|
segment | Object | Video segment where a label was detected. This object should have the same structure as VideoSegment |
confidence | number | Confidence that the label is accurate. Range: [0, 1]. |
NormalizedBoundingBox
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
Properties:
Name | Type | Description |
---|---|---|
left | number | Left X coordinate. |
top | number | Top Y coordinate. |
right | number | Right X coordinate. |
bottom | number | Bottom Y coordinate. |
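Because the coordinates are normalized to [0, 1], drawing a box on an actual frame requires scaling by the frame's pixel dimensions. A minimal sketch (the helper name is illustrative, not part of the library):

```javascript
// Convert a NormalizedBoundingBox ([0, 1] coordinates) to pixel coordinates
// for a frame of the given width and height.
function toPixelBox(box, width, height) {
  return {
    left: Math.round(box.left * width),
    top: Math.round(box.top * height),
    right: Math.round(box.right * width),
    bottom: Math.round(box.bottom * height),
  };
}

const normalized = { left: 0.25, top: 0.1, right: 0.75, bottom: 0.9 };
console.log(toPixelBox(normalized, 1920, 1080));
// { left: 480, top: 108, right: 1440, bottom: 972 }
```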
ShotChangeDetectionConfig
Config for SHOT_CHANGE_DETECTION.
Properties:
Name | Type | Description |
---|---|---|
model | string | Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
VideoAnnotationProgress
Annotation progress for a single video.
Properties:
Name | Type | Description |
---|---|---|
inputUri | string | Video file location in Google Cloud Storage. |
progressPercent | number | Approximate percentage processed thus far. Guaranteed to be 100 when fully processed. |
startTime | Object | Time when the request was received. This object should have the same structure as Timestamp |
updateTime | Object | Time of the most recent update. This object should have the same structure as Timestamp |
VideoAnnotationResults
Annotation results for a single video.
Properties:
Name | Type | Description |
---|---|---|
inputUri | string | Video file location in Google Cloud Storage. |
segmentLabelAnnotations | Array.&lt;Object&gt; | Label annotations on video level or user specified segment level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation |
shotLabelAnnotations | Array.&lt;Object&gt; | Label annotations on shot level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation |
frameLabelAnnotations | Array.&lt;Object&gt; | Label annotations on frame level. There is exactly one element for each unique label. This object should have the same structure as LabelAnnotation |
faceAnnotations | Array.&lt;Object&gt; | Face annotations. There is exactly one element for each unique face. This object should have the same structure as FaceAnnotation |
shotAnnotations | Array.&lt;Object&gt; | Shot annotations. Each shot is represented as a video segment. This object should have the same structure as VideoSegment |
explicitAnnotation | Object | Explicit content annotation. This object should have the same structure as ExplicitContentAnnotation |
error | Object | If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail. This object should have the same structure as Status |
VideoContext
Video context and/or feature-specific parameters.
Properties:
Name | Type | Description |
---|---|---|
segments | Array.&lt;Object&gt; | Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment. This object should have the same structure as VideoSegment |
labelDetectionConfig | Object | Config for LABEL_DETECTION. This object should have the same structure as LabelDetectionConfig |
shotChangeDetectionConfig | Object | Config for SHOT_CHANGE_DETECTION. This object should have the same structure as ShotChangeDetectionConfig |
explicitContentDetectionConfig | Object | Config for EXPLICIT_CONTENT_DETECTION. This object should have the same structure as ExplicitContentDetectionConfig |
faceDetectionConfig | Object | Config for FACE_DETECTION. This object should have the same structure as FaceDetectionConfig |
VideoSegment
Video segment.
Properties:
Name | Type | Description |
---|---|---|
startTimeOffset | Object | Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive). This object should have the same structure as Duration |
endTimeOffset | Object | Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive). This object should have the same structure as Duration |
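Since both offsets are Duration objects, computing a segment's length means combining whole seconds with fractional nanos. The `{seconds, nanos}` shape assumed below is the usual protobuf Duration layout; the helper names are illustrative:

```javascript
// Convert a protobuf-style Duration ({seconds, nanos}, both optional)
// to milliseconds.
function durationToMillis(d) {
  return (d.seconds || 0) * 1000 + (d.nanos || 0) / 1e6;
}

// Length of a VideoSegment in milliseconds.
function segmentLengthMillis(segment) {
  return durationToMillis(segment.endTimeOffset) -
         durationToMillis(segment.startTimeOffset);
}

const segment = {
  startTimeOffset: { seconds: 1, nanos: 500000000 }, // 1.5 s
  endTimeOffset: { seconds: 4, nanos: 0 },           // 4.0 s
};
console.log(segmentLengthMillis(segment)); // 2500
```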