As of January 1, 2020 this library no longer supports Python 2 on the latest released version. Library versions released prior to that date will continue to be available. For more information please visit Python 2 support on Google Cloud.

Types for Google Cloud Aiplatform V1 Schema Predict Prediction v1 API

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.ClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Image and Text Classification.

ids

The resource IDs of the AnnotationSpecs that had been identified.

Type:

MutableSequence[int]

display_names

The display names of the AnnotationSpecs that had been identified, order matches the IDs.

Type:

MutableSequence[str]

confidences

The Model’s confidences in the correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs.

Type:

MutableSequence[float]
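For example, once a prediction has been converted to plain Python lists (the sample values below are illustrative, not real API output), the highest-confidence class can be selected by zipping the three parallel fields:

```python
# Sketch: pair up the parallel ids / display_names / confidences lists
# and select the prediction with the highest confidence. Field names
# follow ClassificationPredictionResult; values are hypothetical.
ids = [123, 456, 789]
display_names = ["cat", "dog", "bird"]
confidences = [0.12, 0.81, 0.07]

best_id, best_name, best_conf = max(
    zip(ids, display_names, confidences), key=lambda t: t[2]
)
print(best_name, best_conf)  # dog 0.81
```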

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.ImageObjectDetectionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Image Object Detection.

ids

The resource IDs of the AnnotationSpecs that had been identified, ordered by descending confidence score.

Type:

MutableSequence[int]

display_names

The display names of the AnnotationSpecs that had been identified, order matches the IDs.

Type:

MutableSequence[str]

confidences

The Model’s confidences in the correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs.

Type:

MutableSequence[float]

bboxes

Bounding boxes, i.e. the rectangles over the image, that pinpoint the found AnnotationSpecs. Given in order that matches the IDs. Each bounding box is an array of 4 numbers xMin, xMax, yMin, and yMax, which represent the extremal coordinates of the box. They are relative to the image size, and the point 0,0 is in the top left of the image.

Type:

MutableSequence[google.protobuf.struct_pb2.ListValue]
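Because the coordinates are relative to the image size, a box must be scaled by the actual image dimensions before drawing. A minimal sketch (the 640×480 size and the box values are hypothetical):

```python
# Sketch: convert one relative bounding box [xMin, xMax, yMin, yMax]
# (coordinates in [0, 1], origin at the top-left of the image) into
# pixel coordinates for a hypothetical 640x480 image.
bbox = [0.25, 0.75, 0.10, 0.50]  # xMin, xMax, yMin, yMax
width, height = 640, 480

x_min = bbox[0] * width   # 160.0
x_max = bbox[1] * width   # 480.0
y_min = bbox[2] * height  # 48.0
y_max = bbox[3] * height  # 240.0
```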

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.ImageSegmentationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Image Segmentation.

category_mask

A PNG image where each pixel in the mask represents the category to which the corresponding pixel in the original image was predicted to belong. The size of this image will be the same as the original image. The mapping between an AnnotationSpec and its color can be found in the model’s metadata. The model chooses the most likely category and, if none of the categories reaches the confidence threshold, marks the pixel as background.

Type:

str

confidence_mask

A one-channel image encoded as an 8-bit lossless PNG. The size of the image will be the same as the original image. For a specific pixel, a darker color means less confidence in the correctness of the category in the categoryMask for the corresponding pixel. Black means no confidence and white means complete confidence.

Type:

str
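Both masks are typed as str. In a JSON-serialized response they are typically base64-encoded PNG bytes; assuming that encoding, the raw PNG can be recovered as sketched below (the sample string just wraps the 8-byte PNG signature for illustration):

```python
import base64

# Hypothetical base64-encoded mask string; a real category_mask would
# encode a full PNG image, not just the signature.
png_signature = b"\x89PNG\r\n\x1a\n"
category_mask_b64 = base64.b64encode(png_signature).decode("ascii")

# Decode back to raw bytes before handing off to an image library.
mask_bytes = base64.b64decode(category_mask_b64)
assert mask_bytes[:8] == png_signature  # looks like a PNG
```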

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TabularClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Tabular Classification.

classes

The names of the classes being classified; contains all possible values of the target column.

Type:

MutableSequence[str]

scores

The model’s confidence in each class being correct, higher value means higher confidence. The N-th score corresponds to the N-th class in classes.

Type:

MutableSequence[float]
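For example, pairing classes with scores and taking the argmax yields the predicted class (the sample values are illustrative, not real API output):

```python
# Sketch: the N-th score corresponds to the N-th class, so a dict
# built from the two parallel lists gives a score lookup, and its
# argmax gives the predicted class. Values are hypothetical.
classes = ["approved", "denied", "review"]
scores = [0.72, 0.08, 0.20]

score_by_class = dict(zip(classes, scores))
predicted = max(score_by_class, key=score_by_class.get)
print(predicted)  # approved
```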

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TabularRegressionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Tabular Regression.

value

The regression value.

Type:

float

lower_bound

The lower bound of the prediction interval.

Type:

float

upper_bound

The upper bound of the prediction interval.

Type:

float
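For example, the three fields can be combined into an interval width and a sanity check that the point prediction falls inside the interval (sample values are illustrative):

```python
# Sketch: a regression prediction with its interval bounds.
# The numbers are hypothetical, not real API output.
value = 104.3
lower_bound = 98.0
upper_bound = 110.5

interval_width = upper_bound - lower_bound        # 12.5
in_interval = lower_bound <= value <= upper_bound  # True
```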

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TextExtractionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Text Extraction.

ids

The resource IDs of the AnnotationSpecs that had been identified, ordered by descending confidence score.

Type:

MutableSequence[int]

display_names

The display names of the AnnotationSpecs that had been identified, order matches the IDs.

Type:

MutableSequence[str]

text_segment_start_offsets

The start offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet.

Type:

MutableSequence[int]

text_segment_end_offsets

The end offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet.

Type:

MutableSequence[int]

confidences

The Model’s confidences in the correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs.

Type:

MutableSequence[float]
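Because both offsets are inclusive and zero-based, a segment is recovered from the original text snippet with an end + 1 slice. A sketch with hypothetical offsets:

```python
# Sketch: recover the extracted text segment from the snippet using
# one (start, end) offset pair. Both offsets are inclusive and
# zero-based, so the slice end is end + 1. Values are illustrative.
text = "Order shipped to Berlin on Friday."
start = 17  # inclusive
end = 22    # inclusive

segment = text[start : end + 1]
print(segment)  # Berlin
```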

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TextSentimentPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Text Sentiment.

sentiment

The integer sentiment label, between 0 (inclusive) and sentimentMax (inclusive), where 0 maps to the least positive sentiment and sentimentMax maps to the most positive one. The higher the score, the more positive the sentiment in the text snippet. Note: sentimentMax is an integer value between 1 (inclusive) and 10 (inclusive).

Type:

int
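Since sentimentMax varies per problem, scores are often normalized into [0, 1] before comparing across models. A sketch (the values are illustrative; sentiment_max would come from the model's metadata, not from this message):

```python
# Sketch: normalize a sentiment label by its configured maximum.
# sentiment_max is assumed to be known from the model's metadata.
sentiment = 3
sentiment_max = 4

normalized = sentiment / sentiment_max  # 0.75
```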

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoActionRecognitionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Video Action Recognition.

id

The resource ID of the AnnotationSpec that had been identified.

Type:

str

display_name

The display name of the AnnotationSpec that had been identified.

Type:

str

time_segment_start

The beginning, inclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.

Type:

google.protobuf.duration_pb2.Duration

time_segment_end

The end, exclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.

Type:

google.protobuf.duration_pb2.Duration

confidence

The Model’s confidence in the correctness of this prediction; a higher value means higher confidence.

Type:

google.protobuf.wrappers_pb2.FloatValue
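In the Python client the time segment fields are Duration messages; in the JSON serialization they appear as second counts with an “s” suffix, as described above. A minimal sketch for parsing that JSON form (the helper name is hypothetical):

```python
def parse_duration_seconds(value: str) -> float:
    """Parse an 's'-suffixed duration string (e.g. '12.345s') into seconds.

    Hypothetical helper for the JSON form of Duration fields; the
    Python client returns Duration messages instead of strings.
    """
    if not value.endswith("s"):
        raise ValueError(f"expected trailing 's': {value!r}")
    return float(value[:-1])

start = parse_duration_seconds("12.345s")  # 12.345
end = parse_duration_seconds("15s")        # 15.0
```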

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Video Classification.

id

The resource ID of the AnnotationSpec that had been identified.

Type:

str

display_name

The display name of the AnnotationSpec that had been identified.

Type:

str

type_

The type of the prediction. The requested types can be configured via parameters. This will be one of:

- segment-classification
- shot-classification
- one-sec-interval-classification

Type:

str

time_segment_start

The beginning, inclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end. Note that for ‘segment-classification’ prediction type, this equals the original ‘timeSegmentStart’ from the input instance, for other types it is the start of a shot or a 1 second interval respectively.

Type:

google.protobuf.duration_pb2.Duration

time_segment_end

The end, exclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end. Note that for ‘segment-classification’ prediction type, this equals the original ‘timeSegmentEnd’ from the input instance, for other types it is the end of a shot or a 1 second interval respectively.

Type:

google.protobuf.duration_pb2.Duration

confidence

The Model’s confidence in the correctness of this prediction; a higher value means higher confidence.

Type:

google.protobuf.wrappers_pb2.FloatValue
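As a sketch, results of mixed prediction types can be grouped by type_ once converted to plain dicts (field names follow the message above; the sample values are illustrative):

```python
# Sketch: group video classification results by prediction type.
# Each dict stands in for one VideoClassificationPredictionResult.
results = [
    {"type_": "segment-classification", "display_name": "cooking", "confidence": 0.9},
    {"type_": "shot-classification", "display_name": "chopping", "confidence": 0.7},
    {"type_": "shot-classification", "display_name": "frying", "confidence": 0.6},
]

by_type = {}
for r in results:
    by_type.setdefault(r["type_"], []).append(r)

print(len(by_type["shot-classification"]))  # 2
```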

class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoObjectTrackingPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

Prediction output format for Video Object Tracking.

id

The resource ID of the AnnotationSpec that had been identified.

Type:

str

display_name

The display name of the AnnotationSpec that had been identified.

Type:

str

time_segment_start

The beginning, inclusive, of the video’s time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.

Type:

google.protobuf.duration_pb2.Duration

time_segment_end

The end, inclusive, of the video’s time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.

Type:

google.protobuf.duration_pb2.Duration

confidence

The Model’s confidence in the correctness of this prediction; a higher value means higher confidence.

Type:

google.protobuf.wrappers_pb2.FloatValue

frames

All of the frames of the video in which a single object instance has been detected. The bounding boxes in the frames identify the same object.

Type:

MutableSequence[google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoObjectTrackingPredictionResult.Frame]

class Frame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: Message

The fields xMin, xMax, yMin, and yMax refer to a bounding box, i.e. the rectangle over the video frame pinpointing the found AnnotationSpec. The coordinates are relative to the frame size, and the point 0,0 is in the top left of the frame.

time_offset

A time (frame) of a video in which the object has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.

Type:

google.protobuf.duration_pb2.Duration

x_min

The leftmost coordinate of the bounding box.

Type:

google.protobuf.wrappers_pb2.FloatValue

x_max

The rightmost coordinate of the bounding box.

Type:

google.protobuf.wrappers_pb2.FloatValue

y_min

The topmost coordinate of the bounding box.

Type:

google.protobuf.wrappers_pb2.FloatValue

y_max

The bottommost coordinate of the bounding box.

Type:

google.protobuf.wrappers_pb2.FloatValue
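Since the coordinates are relative and share one convention across frames, boxes from different frames can be compared directly. A hedged sketch computing intersection-over-union, a common tracking metric that is not part of this API, for two such boxes:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, x_max, y_min, y_max).

    Coordinates are relative, in [0, 1], with the origin at the top
    left of the frame, matching the Frame message's convention.
    """
    ix = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[2], b[2]))  # overlap height
    inter = ix * iy
    area_a = (a[1] - a[0]) * (a[3] - a[2])
    area_b = (b[1] - b[0]) * (b[3] - b[2])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

overlap = iou((0.0, 0.5, 0.0, 0.5), (0.25, 0.75, 0.0, 0.5))  # ~0.333
```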