Types for Google Cloud Aiplatform V1beta1 Schema Predict Prediction v1beta1 API¶
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.ClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Image and Text Classification.
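These prediction schema types are proto-plus messages, so a raw prediction from a PredictResponse can be parsed into them for typed field access. A minimal sketch, assuming the classification result exposes ids, display names, and confidences as in the classification prediction schema (those fields are not listed on this page, so treat the keys below as assumptions); the sample values are made up:

```python
from google.protobuf import json_format

from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

# Stand-in for one entry of PredictResponse.predictions, already converted to a
# plain dict (e.g. dict(response.predictions[0])). Keys are assumed from the
# classification prediction schema; int64 values appear as strings in JSON.
raw_prediction = {
    "ids": ["123", "456"],
    "displayNames": ["cat", "dog"],
    "confidences": [0.92, 0.07],
}

result = types.ClassificationPredictionResult()
# ParseDict fills in the underlying protobuf message held in ._pb.
json_format.ParseDict(raw_prediction, result._pb)
print(result)
```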
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.ImageObjectDetectionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Image Object Detection.
- ids¶
The resource IDs of the AnnotationSpecs that have been identified, ordered by confidence score in descending order.
- Type:
MutableSequence[int]
- display_names¶
The display names of the AnnotationSpecs that have been identified; the order matches the IDs.
- Type:
MutableSequence[str]
- confidences¶
The Model’s confidences in the correctness of the predicted IDs; a higher value means higher confidence. The order matches the IDs.
- Type:
MutableSequence[float]
- bboxes¶
Bounding boxes, i.e. the rectangles over the image, that pinpoint the found AnnotationSpecs. Given in order that matches the IDs. Each bounding box is an array of 4 numbers xMin, xMax, yMin, and yMax, which represent the extremal coordinates of the box. They are relative to the image size, and the point 0,0 is in the top left of the image.
- Type:
MutableSequence[google.protobuf.struct_pb2.ListValue]
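As a rough sketch, an object detection result can also be built directly from keyword arguments, which is handy in tests. The values below are made up, and passing plain Python lists for the ListValue-typed bboxes field assumes proto-plus's native handling of the protobuf struct types:

```python
from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

# Illustrative values: two detected objects, each box given as
# [xMin, xMax, yMin, yMax] relative to the image size.
result = types.ImageObjectDetectionPredictionResult(
    ids=[123, 456],
    display_names=["cat", "dog"],
    confidences=[0.97, 0.55],
    bboxes=[[0.1, 0.4, 0.2, 0.5], [0.5, 0.9, 0.3, 0.8]],
)

for id_, name, conf, box in zip(
    result.ids, result.display_names, result.confidences, result.bboxes
):
    x_min, x_max, y_min, y_max = box
    print(f"{name} ({id_}): confidence={conf:.2f}, "
          f"box=({x_min}, {x_max}, {y_min}, {y_max})")
```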
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.ImageSegmentationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Image Segmentation.
- category_mask¶
A PNG image where each pixel in the mask represents the category to which the pixel in the original image was predicted to belong. The size of this image will be the same as the original image. The mapping between the AnnotationSpec and the color can be found in the model’s metadata. The model will choose the most likely category, and if none of the categories reach the confidence threshold, the pixel will be marked as background.
- Type:
str
- confidence_mask¶
A one-channel image which is encoded as an 8-bit lossless PNG. The size of the image will be the same as the original image. For a specific pixel, a darker color means less confidence in the correctness of the category in the categoryMask for the corresponding pixel. Black means no confidence and white means complete confidence.
- Type:
str
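A minimal sketch of consuming a segmentation result, assuming the category_mask and confidence_mask strings carry base64-encoded PNG bytes (adjust the decoding step if your responses deliver the masks differently):

```python
import base64

from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

# Normally parsed from a prediction response; left empty here so the sketch runs as-is.
result = types.ImageSegmentationPredictionResult()

# Assumption: the masks are base64-encoded PNG bytes. Decode and write them out
# so they can be inspected with any image viewer.
if result.category_mask:
    with open("category_mask.png", "wb") as f:
        f.write(base64.b64decode(result.category_mask))
if result.confidence_mask:
    with open("confidence_mask.png", "wb") as f:
        f.write(base64.b64decode(result.confidence_mask))
```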
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.TabularClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Tabular Classification.
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.TabularRegressionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Tabular Regression.
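This page does not list the tabular result fields; as a hedged sketch based on the tabular prediction schemas, a classification result carries parallel classes and scores lists and a regression result carries value with lower_bound and upper_bound. Treat these field names as assumptions and the values as illustrative:

```python
from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

# Field names are assumed from the tabular prediction schemas.
classification = types.TabularClassificationPredictionResult(
    classes=["approved", "rejected"],
    scores=[0.83, 0.17],
)
best_class, best_score = max(
    zip(classification.classes, classification.scores), key=lambda pair: pair[1]
)
print(f"predicted class: {best_class} (score={best_score:.2f})")

regression = types.TabularRegressionPredictionResult(
    value=42.5, lower_bound=40.1, upper_bound=44.9
)
print(f"prediction: {regression.value} "
      f"[{regression.lower_bound}, {regression.upper_bound}]")
```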
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.TextExtractionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Text Extraction.
- ids¶
The resource IDs of the AnnotationSpecs that have been identified, ordered by confidence score in descending order.
- Type:
MutableSequence[int]
- display_names¶
The display names of the AnnotationSpecs that have been identified; the order matches the IDs.
- Type:
MutableSequence[str]
- text_segment_start_offsets¶
The start offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet.
- Type:
MutableSequence[int]
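A short sketch that iterates over an extraction result using the fields listed above; the values are made up:

```python
from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

# Illustrative values: two extracted entities and the zero-based character
# offset at which each one starts in the text snippet.
result = types.TextExtractionPredictionResult(
    ids=[1, 2],
    display_names=["person", "location"],
    text_segment_start_offsets=[0, 27],
)

for id_, name, start in zip(
    result.ids, result.display_names, result.text_segment_start_offsets
):
    print(f"AnnotationSpec {id_} ({name}) starts at character {start}")
```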
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.TextSentimentPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Text Sentiment.
- sentiment¶
The integer sentiment label, between 0 (inclusive) and sentimentMax (inclusive), where 0 maps to the least positive sentiment and sentimentMax maps to the most positive one. The higher the score, the more positive the sentiment in the text snippet. Note: sentimentMax is an integer value between 1 (inclusive) and 10 (inclusive).
- Type:
int
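Because the sentiment is an integer on a 0..sentimentMax scale, a common post-processing step is to normalize it. The sentiment_max value below is an assumed example; it is configured for the Model and is not a field of this message:

```python
from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

result = types.TextSentimentPredictionResult(sentiment=4)

sentiment_max = 4  # assumed value; comes from the Model, not from this message
normalized = result.sentiment / sentiment_max  # 1.0 corresponds to the most positive sentiment
print(f"sentiment {result.sentiment}/{sentiment_max} -> {normalized:.2f}")
```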
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.TimeSeriesForecastingPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Time Series Forecasting.
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.VideoActionRecognitionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Video Action Recognition.
- time_segment_start¶
The beginning, inclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.
- time_segment_end¶
The end, exclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.
- confidence¶
The Model’s confidence in the correctness of this prediction; a higher value means higher confidence.
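The time segment fields carry a trailing "s" because they are protobuf Durations in the underlying schema, so a result parsed from a JSON-style dict can be read back as timedeltas. A rough sketch with made-up values, assuming proto-plus's usual Duration-to-timedelta marshaling:

```python
from google.protobuf import json_format

from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

result = types.VideoActionRecognitionPredictionResult()
# Illustrative values; in practice this dict is one entry of response.predictions.
json_format.ParseDict(
    {"timeSegmentStart": "1.500s", "timeSegmentEnd": "4.250s", "confidence": 0.8},
    result._pb,
)

# proto-plus exposes Duration fields as datetime.timedelta.
start = result.time_segment_start.total_seconds()
end = result.time_segment_end.total_seconds()
print(f"action between {start}s and {end}s (confidence={result.confidence})")
```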
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.VideoClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Video Classification.
- type_¶
The type of the prediction. The requested types can be configured via parameters. This will be one of: segment-classification, shot-classification, or one-sec-interval-classification.
- Type:
str
- time_segment_start¶
The beginning, inclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end. Note that for ‘segment-classification’ prediction type, this equals the original ‘timeSegmentStart’ from the input instance, for other types it is the start of a shot or a 1 second interval respectively.
- time_segment_end¶
The end, exclusive, of the video’s time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end. Note that for ‘segment-classification’ prediction type, this equals the original ‘timeSegmentEnd’ from the input instance, for other types it is the end of a shot or a 1 second interval respectively.
- confidence¶
The Model’s confidence in the correctness of this prediction; a higher value means higher confidence.
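The type field distinguishes the three prediction granularities, so a common step is to filter results by it. A minimal sketch with illustrative values (note the Python attribute is type_):

```python
from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

# Illustrative results; in practice these come from response.predictions.
segment_result = types.VideoClassificationPredictionResult()
segment_result.type_ = "segment-classification"

shot_result = types.VideoClassificationPredictionResult()
shot_result.type_ = "shot-classification"

results = [segment_result, shot_result]

# Keep only the predictions made over the full requested segment.
segment_level = [r for r in results if r.type_ == "segment-classification"]
print(f"{len(segment_level)} segment-level prediction(s)")
```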
- class google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types.VideoObjectTrackingPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
Prediction output format for Video Object Tracking.
- time_segment_start¶
The beginning, inclusive, of the video’s time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.
- time_segment_end¶
The end, inclusive, of the video’s time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.
- confidence¶
The Model’s confidence in the correctness of this prediction; a higher value means higher confidence.
- frames¶
All of the frames of the video in which a single object instance has been detected. The bounding boxes in the frames identify the same object.
- class Frame(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
Message
The fields xMin, xMax, yMin, and yMax refer to a bounding box, i.e. the rectangle over the video frame pinpointing the found AnnotationSpec. The coordinates are relative to the frame size, and the point 0,0 is in the top left of the frame.
- time_offset¶
A time (frame) of a video in which the object has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with “s” appended at the end.
- x_min¶
The leftmost coordinate of the bounding box.
- x_max¶
The rightmost coordinate of the bounding box.
- y_min¶
The topmost coordinate of the bounding box.
- y_max¶
The bottommost coordinate of the bounding box.
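A sketch that walks the per-frame bounding boxes of one tracked object and derives each box's relative width and height. Values are illustrative, and passing plain floats for the box coordinates assumes proto-plus's usual handling of protobuf wrapper types:

```python
from google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1 import types

# Illustrative values only: one tracked object observed in two frames.
result = types.VideoObjectTrackingPredictionResult(
    confidence=0.9,
    frames=[
        types.VideoObjectTrackingPredictionResult.Frame(
            x_min=0.10, x_max=0.40, y_min=0.20, y_max=0.55
        ),
        types.VideoObjectTrackingPredictionResult.Frame(
            x_min=0.12, x_max=0.43, y_min=0.21, y_max=0.57
        ),
    ],
)

for frame in result.frames:
    width = frame.x_max - frame.x_min
    height = frame.y_max - frame.y_min
    print(f"box size (relative to frame): {width:.2f} x {height:.2f}")
```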