Types for Google Cloud Automl v1 API¶
- class google.cloud.automl_v1.types.AnnotationPayload(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Contains annotation information that is relevant to AutoML.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- classification¶
Annotation details for content or image classification.
This field is a member of oneof `detail`.
- image_object_detection¶
Annotation details for image object detection.
This field is a member of oneof `detail`.
- annotation_spec_id¶
Output only. The resource ID of the annotation spec that this annotation pertains to. The annotation spec comes from either an ancestor dataset, or the dataset that was used to train the model in use.
- Type
str
- display_name¶
Output only. The value of [display_name][google.cloud.automl.v1.AnnotationSpec.display_name] when the model was trained. Because this field returns a value at model training time, the returned value could differ for different models trained on the same dataset, as the model owner could update the `display_name` between any two model trainings.
- Type
str
- class google.cloud.automl_v1.types.AnnotationSpec(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A definition of an annotation spec.
- name¶
Output only. Resource name of the annotation spec. Form: ‘projects/{project_id}/locations/{location_id}/datasets/{dataset_id}/annotationSpecs/{annotation_spec_id}’
- Type
str
- display_name¶
Required. The name of the annotation spec to show in the interface. The name can be up to 32 characters long and must match the regexp `[a-zA-Z0-9_]+`.
- Type
str
- class google.cloud.automl_v1.types.BatchPredictInputConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Input configuration for BatchPredict Action.
The format of input depends on the ML problem of the model used for prediction. As input source the [gcs_source][google.cloud.automl.v1.InputConfig.gcs_source] is expected, unless specified otherwise.
The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:
AutoML Vision
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH
The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output.
Sample rows:
gs://folder/image1.jpeg
gs://folder/image2.gif
gs://folder/image3.png
Object Detection
One or more CSV files where each line is a single column:
GCS_FILE_PATH
The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output.
Sample rows:
gs://folder/image1.jpeg
gs://folder/image2.gif
gs://folder/image3.png
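The one-column input format above can be produced with the standard library csv module; a minimal sketch (the bucket paths are hypothetical):

```python
import csv
import io

def build_vision_input_csv(gcs_paths):
    """Build a one-column batch-predict input CSV for AutoML Vision.

    Each row is a single GCS_FILE_PATH pointing at an image (.JPEG,
    .GIF or .PNG, up to 30MB); the path doubles as the row's ID in
    the batch predict output.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    for path in gcs_paths:
        writer.writerow([path])
    return buf.getvalue()

csv_text = build_vision_input_csv([
    "gs://folder/image1.jpeg",   # hypothetical paths
    "gs://folder/image2.gif",
    "gs://folder/image3.png",
])
```

The same shape serves both Classification and Object Detection, since both take a bare GCS_FILE_PATH per line.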
AutoML Video Intelligence
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END
`GCS_FILE_PATH` is the Google Cloud Storage location of a video up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
`TIME_SEGMENT_START` and `TIME_SEGMENT_END` must be within the length of the video, and the end time must be after the start time.
Sample rows:
gs://folder/video1.mp4,10,40
gs://folder/video1.mp4,20,60
gs://folder/vid2.mov,0,inf
Object Tracking
One or more CSV files where each line is a single column:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END
`GCS_FILE_PATH` is the Google Cloud Storage location of a video up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
`TIME_SEGMENT_START` and `TIME_SEGMENT_END` must be within the length of the video, and the end time must be after the start time.
Sample rows:
gs://folder/video1.mp4,10,40
gs://folder/video1.mp4,20,60
gs://folder/vid2.mov,0,inf
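The row layout above lends itself to a small formatting helper that also enforces the end-after-start rule; a sketch with hypothetical paths:

```python
def make_video_row(gcs_path, start, end):
    """Format one GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END row.

    TIME_SEGMENT_END must be after TIME_SEGMENT_START; the literal
    "inf" means the end of the video, as described above.
    """
    if end != "inf" and float(end) <= float(start):
        raise ValueError("end time must be after start time")
    return f"{gcs_path},{start},{end}"

rows = [
    make_video_row("gs://folder/video1.mp4", 10, 40),
    make_video_row("gs://folder/video1.mp4", 20, 60),
    make_video_row("gs://folder/vid2.mov", 0, "inf"),
]
```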
AutoML Natural Language
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH
`GCS_FILE_PATH` is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF, .TIF, .TIFF. Text files can be no larger than 10MB in size.
Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf
gs://folder/text3.tif
Sentiment Analysis
One or more CSV files where each line is a single column:
GCS_FILE_PATH
`GCS_FILE_PATH` is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF, .TIF, .TIFF. Text files can be no larger than 128kB in size.
Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf
gs://folder/text3.tif
Entity Extraction
One or more JSONL (JSON Lines) files that either provide inline text or documents. You can only use one format, either inline text or documents, for a single call to [AutoMl.BatchPredict].
Each JSONL file contains, per line, a proto that wraps a temporary user-assigned TextSnippet ID (string up to 2000 characters long) called "id", a TextSnippet proto (in JSON representation) and zero or more TextFeature protos. Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded (ASCII already is). The IDs provided should be unique.
Each document JSONL file contains, per line, a proto that wraps a Document proto with `input_config` set. Each document cannot exceed 2MB in size. Supported document extensions: .PDF, .TIF, .TIFF
Each JSONL file must not exceed 100MB in size, and no more than 20 JSONL files may be passed.
Sample inline JSONL file (shown with artificial line breaks; actual line breaks are denoted by "\n"):
{ "id": "my_first_id", "text_snippet": { "content": "dog car cat"}, "text_features": [ { "text_segment": {"start_offset": 4, "end_offset": 6}, "structural_type": PARAGRAPH, "bounding_poly": { "normalized_vertices": [ {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3}, {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1}, ] }, } ], }\n { "id": "2", "text_snippet": { "content": "Extended sample content", "mime_type": "text/plain" } }
Sample document JSONL file (shown with artificial line breaks; actual line breaks are denoted by "\n"):
{ "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.tif" ] } } } }
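Both JSONL shapes above can be assembled with the standard json module; a minimal sketch (the IDs, content, and URI are hypothetical, and only the fields shown in the samples are emitted):

```python
import json

MAX_SNIPPET_CHARS = 30_000  # per the content limit described above

def inline_jsonl_line(snippet_id, content):
    """One inline-text JSONL line: an "id" plus a TextSnippet proto in JSON."""
    if len(content) > MAX_SNIPPET_CHARS:
        raise ValueError("text snippet content exceeds 30,000 characters")
    return json.dumps({"id": snippet_id,
                       "text_snippet": {"content": content}})

def document_jsonl_line(gcs_uri):
    """One document JSONL line: a Document proto with input_config set."""
    return json.dumps({"document": {"input_config": {
        "gcs_source": {"input_uris": [gcs_uri]}}}})

line = inline_jsonl_line("my_first_id", "dog car cat")
doc = document_jsonl_line("gs://folder/document1.pdf")
```

Recall that a single BatchPredict call may use one format or the other, not both.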
AutoML Tables
See Preparing your training data for more information.
You can use either [gcs_source][google.cloud.automl.v1.BatchPredictInputConfig.gcs_source] or [bigquery_source][BatchPredictInputConfig.bigquery_source].
For gcs_source:
CSV file(s), each by itself 10GB or smaller and with a total size of 100GB or smaller, where the first file must have a header containing column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns.
The column names must contain the model’s [input_feature_column_specs’][google.cloud.automl.v1.TablesModelMetadata.input_feature_column_specs] [display_name-s][google.cloud.automl.v1.ColumnSpec.display_name] (order doesn’t matter). The columns corresponding to the model’s input feature column specs must contain values compatible with the column spec’s data types. Prediction on all the rows, i.e. the CSV lines, will be attempted.
Sample rows from a CSV file:
"First Name","Last Name","Dob","Addresses"
"John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]"
"Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]"
For bigquery_source:
The URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller.
The column names must contain the model’s [input_feature_column_specs’][google.cloud.automl.v1.TablesModelMetadata.input_feature_column_specs] [display_name-s][google.cloud.automl.v1.ColumnSpec.display_name] (order doesn’t matter). The columns corresponding to the model’s input feature column specs must contain values compatible with the column spec’s data types. Prediction on all the rows of the table will be attempted.
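The header requirement above (all of the model's input feature column display_names present, order irrelevant) can be checked before upload; a minimal sketch reusing the sample header:

```python
import csv
import io

def validate_tables_header(csv_text, input_feature_display_names):
    """Check that a Tables batch-predict CSV header covers the model's
    input feature column display_names (order doesn't matter)."""
    header = next(csv.reader(io.StringIO(csv_text)))
    missing = set(input_feature_display_names) - set(header)
    if missing:
        raise ValueError(f"header is missing feature columns: {sorted(missing)}")
    return header

header = validate_tables_header(
    '"First Name","Last Name","Dob","Addresses"\n',  # sample header from above
    ["First Name", "Last Name", "Dob", "Addresses"],
)
```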
Input field definitions:
GCS_FILE_PATH: The path to a file on Google Cloud Storage. For example, "gs://folder/video.avi".
TIME_SEGMENT_START: (TIME_OFFSET) Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_SEGMENT_END: (TIME_OFFSET) Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_OFFSET: A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to a microsecond precision. "inf" is allowed, and it means the end of the example.
Errors:
If any of the provided CSV files can't be parsed or if more than a certain percent of CSV rows cannot be processed then the operation fails and prediction does not happen. Regardless of overall success or failure the per-row failures, up to a certain count cap, will be listed in Operation.metadata.partial_failures.
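The TIME_OFFSET rules above can be captured in a small parser; a sketch (treating "inf" specially and rejecting negatives, per the definition):

```python
import math

def parse_time_offset(value):
    """Parse a TIME_OFFSET: seconds from the start of the example.

    Fractions are allowed up to microsecond precision; the literal
    "inf" means the end of the example.
    """
    if value == "inf":
        return math.inf
    seconds = float(value)
    if seconds < 0:
        raise ValueError("TIME_OFFSET cannot be negative")
    return seconds
```

For example, `parse_time_offset("10.5")` yields a plain float, while `parse_time_offset("inf")` maps onto `math.inf` so segment-end comparisons keep working.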
- class google.cloud.automl_v1.types.BatchPredictOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of BatchPredict operation.
- input_config¶
Output only. The input config that was given upon starting this batch predict operation.
- output_info¶
Output only. Information further describing this batch predict’s output.
- class BatchPredictOutputInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Further describes this batch predict’s output. Supplements [BatchPredictOutputConfig][google.cloud.automl.v1.BatchPredictOutputConfig].
- class google.cloud.automl_v1.types.BatchPredictOutputConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Output configuration for BatchPredict Action.
As destination the [gcs_destination][google.cloud.automl.v1.BatchPredictOutputConfig.gcs_destination] must be set unless specified otherwise for a domain. If gcs_destination is set then in the given directory a new directory is created. Its name will be "prediction-&lt;model-display-name&gt;-&lt;timestamp-of-prediction-call&gt;", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. The contents of it depend on the ML problem the predictions are made for.
For Image Classification: In the created directory files image_classification_1.jsonl, image_classification_2.jsonl, ..., image_classification_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. A single image will be listed only once with all its annotations, and its annotations will never be split across files. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "&lt;id_value&gt;" followed by a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "&lt;id_value&gt;" but here followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only `code` and `message` fields.
For Image Object Detection: In the created directory files image_object_detection_1.jsonl, image_object_detection_2.jsonl, ..., image_object_detection_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "&lt;id_value&gt;" followed by a list of zero or more AnnotationPayload protos (called annotations), which have image_object_detection detail populated. A single image will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "&lt;id_value&gt;" but here followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only `code` and `message` fields.
For Video Classification: In the created directory a video_classification.csv file, and a .JSON file per each video classification requested in the input (i.e. each line in given CSV(s)), will be created.
The format of video_classification.csv is:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
where:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_classification.csv has precisely the same number of lines as the prediction input had.)
JSON_FILE_NAME = Name of .JSON file in the output directory, which contains prediction responses for the video time segment.
STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty.
Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for the video time segment the file is assigned to in the video_classification.csv. All AnnotationPayload protos will have video_classification field set, and will be sorted by video_classification.type field (note that the returned types are governed by `classifaction_types` parameter in [PredictService.BatchPredictRequest.params][]).
For Video Object Tracking: In the created directory a video_object_tracking.csv file will be created, and multiple files video_object_tracking_1.json, video_object_tracking_2.json, ..., video_object_tracking_N.json, where N is the number of requests in the input (i.e. the number of lines in given CSV(s)).
The format of video_object_tracking.csv is:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
where:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_object_tracking.csv has precisely the same number of lines as the prediction input had.)
JSON_FILE_NAME = Name of .JSON file in the output directory, which contains prediction responses for the video time segment.
STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty.
Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for each frame of the video time segment the file is assigned to in video_object_tracking.csv. All AnnotationPayload protos will have video_object_tracking field set.
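A consumer of video_object_tracking.csv (or video_classification.csv, which shares the layout) might split OK rows from failed ones; a sketch assuming a simple single-field STATUS and hypothetical rows:

```python
import csv
import io

def split_video_results(csv_text):
    """Split video batch-predict result rows into OK and failed sets.

    Each row is GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,
    JSON_FILE_NAME,STATUS; the .JSON file only holds predictions
    when STATUS is "OK".
    """
    ok, failed = [], []
    for row in csv.reader(io.StringIO(csv_text)):
        path, start, end, json_file, status = row
        (ok if status == "OK" else failed).append(
            {"path": path, "segment": (start, end),
             "json_file": json_file, "status": status})
    return ok, failed

sample = (  # hypothetical output rows
    "gs://folder/video1.mp4,10,40,video1_1.json,OK\n"
    "gs://folder/vid2.mov,0,inf,vid2_1.json,INVALID_ARGUMENT\n"
)
ok, failed = split_video_results(sample)
```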
For Text Classification: In the created directory files text_classification_1.jsonl, text_classification_2.jsonl, ..., text_classification_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.
Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text file (or document) in the text snippet (or document) proto and a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. A single text file (or document) will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any input file (or document) failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the input file followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only `code` and `message`.
For Text Sentiment: In the created directory files text_sentiment_1.jsonl, text_sentiment_2.jsonl, ..., text_sentiment_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.
Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text file (or document) in the text snippet (or document) proto and a list of zero or more AnnotationPayload protos (called annotations), which have text_sentiment detail populated. A single text file (or document) will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any input file (or document) failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the input file followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only `code` and `message`.
For Text Extraction: In the created directory files text_extraction_1.jsonl, text_extraction_2.jsonl, ..., text_extraction_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found. The contents of these .JSONL file(s) depend on whether the input used inline text, or documents.
If input was inline, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the text snippet's "id" given in the request (if specified), followed by the input text snippet, and a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated. A single text snippet will be listed only once with all its annotations, and its annotations will never be split across files.
If input used documents, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the document proto given in the request, followed by its OCR-ed representation in the form of a text snippet, finally followed by a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated and refer, via their indices, to the OCR-ed text snippet. A single document (and its text snippet) will be listed only once with all its annotations, and its annotations will never be split across files.
If prediction for any text snippet failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps either the "id" : "&lt;id_value&gt;" (in case of inline) or the document proto (in case of document) but here followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only `code` and `message`.
For Tables: Output depends on whether [gcs_destination][google.cloud.automl.v1p1beta.BatchPredictOutputConfig.gcs_destination] or [bigquery_destination][google.cloud.automl.v1p1beta.BatchPredictOutputConfig.bigquery_destination] is set (either is allowed). Google Cloud Storage case: In the created directory files
tables_1.csv, tables_2.csv, ..., tables_N.csv will be created, where N may be 1, and depends on the total number of the successfully predicted rows.
For all CLASSIFICATION [prediction_type-s][google.cloud.automl.v1p1beta.TablesModelMetadata.prediction_type]: Each .csv file will contain a header, listing all columns' [display_name-s][google.cloud.automl.v1p1beta.ColumnSpec.display_name] given on input followed by M target column names in the format of "&lt;[target_column_specs][google.cloud.automl.v1p1beta.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1p1beta.ColumnSpec.display_name]&gt;_&lt;target value&gt;_score" where M is the number of distinct target values, i.e. the number of distinct values in the target column of the table used to train the model. Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, columns having the corresponding prediction [scores][google.cloud.automl.v1p1beta.TablesAnnotation.score].
For REGRESSION and FORECASTING [prediction_type-s][google.cloud.automl.v1p1beta.TablesModelMetadata.prediction_type]: Each .csv file will contain a header, listing all columns' [display_name-s][google.cloud.automl.v1p1beta.display_name] given on input followed by the predicted target column with name in the format of "predicted_&lt;[target_column_specs][google.cloud.automl.v1p1beta.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1p1beta.ColumnSpec.display_name]&gt;". Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, column having the predicted target value.
If prediction for any rows failed, then additional errors_1.csv, errors_2.csv, ..., errors_N.csv files will be created (N depends on total number of failed rows). These files will have an analogous format to tables_*.csv, but always with a single target column having `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ represented as a JSON string, and containing only `code` and `message`.
BigQuery case: [bigquery_destination][google.cloud.automl.v1p1beta.OutputConfig.bigquery_destination] pointing to a BigQuery project must be set. In the given project a new dataset will be created with name ``prediction_<model-display-name>_<timestamp-of-prediction-call>`` where &lt;model-display-name&gt; will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In the dataset two tables will be created, ``predictions``, and ``errors``. The ``predictions`` table's column names will be the input columns' [display_name-s][google.cloud.automl.v1p1beta.ColumnSpec.display_name] followed by the target column with name in the format of "predicted_&lt;[target_column_specs][google.cloud.automl.v1p1beta.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1p1beta.ColumnSpec.display_name]&gt;". The input feature columns will contain the respective values of successfully predicted rows, with the target column having an ARRAY of [AnnotationPayloads][google.cloud.automl.v1p1beta.AnnotationPayload], represented as STRUCT-s, containing [TablesAnnotation][google.cloud.automl.v1p1beta.TablesAnnotation]. The ``errors`` table contains rows for which the prediction has failed; it has analogous input columns while the target column name is in the format of "errors_&lt;[target_column_specs][google.cloud.automl.v1p1beta.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1p1beta.ColumnSpec.display_name]&gt;", and as a value has `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ represented as a STRUCT, and containing only `code` and `message`.
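Reading the per-image output JSONL described above might look like the following sketch; the "ID", "annotations", and "error" key names are illustrative assumptions about the serialized shape, not verified field names:

```python
import json

def read_prediction_lines(jsonl_text):
    """Separate per-image predictions from per-image errors in output JSONL.

    Prediction lines wrap an "ID" plus a list of annotations; error
    lines wrap the same "ID" plus a google.rpc.Status with only code
    and message (key names here are illustrative).
    """
    predictions, errors = {}, {}
    for line in jsonl_text.splitlines():
        record = json.loads(line)
        if "error" in record:
            errors[record["ID"]] = record["error"]
        else:
            predictions[record["ID"]] = record.get("annotations", [])
    return predictions, errors

sample = (  # hypothetical output lines
    '{"ID": "gs://folder/image1.jpeg", "annotations": [{"classification": {"score": 0.97}}]}\n'
    '{"ID": "gs://folder/image2.gif", "error": {"code": 3, "message": "invalid image"}}\n'
)
preds, errs = read_prediction_lines(sample)
```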
- class google.cloud.automl_v1.types.BatchPredictRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [PredictionService.BatchPredict][google.cloud.automl.v1.PredictionService.BatchPredict].
- input_config¶
Required. The input configuration for batch prediction.
- output_config¶
Required. The configuration specifying where output predictions should be written.
- params¶
Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.
AutoML Natural Language Classification
`score_threshold`: (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.
AutoML Vision Classification
`score_threshold`: (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.
AutoML Vision Object Detection
`score_threshold`: (float) When the model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.
`max_bounding_box_count`: (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.
AutoML Video Intelligence Classification
`score_threshold`: (float) A value from 0.0 to 1.0. When the model makes predictions for a video, it will only produce results that have at least this confidence score. The default is 0.5.
`segment_classification`: (boolean) Set to true to request segment-level classification. AutoML Video Intelligence returns labels and their confidence scores for the entire segment of the video that the user specified in the request configuration. The default is true.
`shot_classification`: (boolean) Set to true to request shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.
WARNING: Model evaluation is not done for this classification type; its quality depends on training data, but there are no metrics provided to describe that quality.
`1s_interval_classification`: (boolean) Set to true to request classification for a video at one-second intervals. AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. The default is false.
WARNING: Model evaluation is not done for this classification type; its quality depends on training data, but there are no metrics provided to describe that quality.
AutoML Video Intelligence Object Tracking
`score_threshold`: (float) When the model detects objects on video frames, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.
`max_bounding_box_count`: (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.
`min_bounding_box_size`: (float) Only bounding boxes with a shortest edge at least that long, as a relative value of video frame size, are returned. Value in 0 to 1 range. Default is 0.
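Since `params` is a string-to-string map, numeric parameters are passed as strings. A small helper for the Vision Object Detection parameters above (the validation ranges follow the descriptions; this is a sketch, not part of the library):

```python
def vision_detection_params(score_threshold=0.5, max_bounding_box_count=100):
    """Build the string-valued params map for Vision Object Detection.

    BatchPredictRequest.params maps string keys to string values, so
    numeric parameters are serialized as strings.
    """
    if not 0.0 <= score_threshold <= 1.0:
        raise ValueError("score_threshold must be in [0.0, 1.0]")
    if max_bounding_box_count < 1:
        raise ValueError("max_bounding_box_count must be positive")
    return {
        "score_threshold": str(score_threshold),
        "max_bounding_box_count": str(max_bounding_box_count),
    }

params = vision_detection_params(score_threshold=0.8)
```

The resulting dict can be passed as the `params` argument of a batch predict request.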
- class ParamsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.automl_v1.types.BatchPredictResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Result of the Batch Predict. This message is returned in [response][google.longrunning.Operation.response] of the operation returned by the [PredictionService.BatchPredict][google.cloud.automl.v1.PredictionService.BatchPredict].
- metadata¶
Additional domain-specific prediction response metadata.
AutoML Vision Object Detection
`max_bounding_box_count`: (int64) The maximum number of bounding boxes returned per image.
AutoML Video Intelligence Object Tracking
`max_bounding_box_count`: (int64) The maximum number of bounding boxes returned per frame.
- class MetadataEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.automl_v1.types.BoundingBoxMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.
- iou_threshold¶
Output only. The intersection-over-union threshold value used to compute this metrics entry.
- Type
float
- mean_average_precision¶
Output only. The mean average precision, most often close to au_prc.
- Type
float
- confidence_metrics_entries¶
Output only. Metrics for each label-match confidence_threshold from 0.05, 0.10, ..., 0.95, 0.96, 0.97, 0.98, 0.99. The precision-recall curve is derived from them.
- Type
MutableSequence[google.cloud.automl_v1.types.BoundingBoxMetricsEntry.ConfidenceMetricsEntry]
- class ConfidenceMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metrics for a single confidence threshold.
- confidence_threshold¶
Output only. The confidence threshold value used to compute the metrics.
- Type
float
- class google.cloud.automl_v1.types.BoundingPoly(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A bounding polygon of a detected object on a plane. On output both vertices and normalized_vertices are provided. The polygon is formed by connecting vertices in the order they are listed.
- normalized_vertices¶
Output only. The bounding polygon normalized vertices.
- Type
MutableSequence[google.cloud.automl_v1.types.NormalizedVertex]
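Since normalized vertices hold x and y in the 0-1 range relative to the original image, mapping them back to pixels only needs the image size; a sketch with a hypothetical polygon:

```python
def to_pixel_coords(normalized_vertices, width, height):
    """Convert a BoundingPoly's normalized vertices to pixel coordinates.

    Vertices stay in listed order, so the polygon edges formed by
    connecting consecutive vertices are preserved.
    """
    return [(round(v["x"] * width), round(v["y"] * height))
            for v in normalized_vertices]

# Hypothetical detection box on a 640x480 image.
poly = [{"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3},
        {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1}]
pixels = to_pixel_coords(poly, 640, 480)
```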
- class google.cloud.automl_v1.types.ClassificationAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Contains annotation details specific to classification.
- score¶
Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence that the annotation is positive. If a user approves an annotation as negative or positive, the score value remains unchanged. If a user creates an annotation, the score is 0 for negative or 1 for positive.
- Type
float
- class google.cloud.automl_v1.types.ClassificationEvaluationMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model evaluation metrics for classification problems. Note: For Video Classification these metrics only describe the quality of the Video Classification predictions of "segment_classification" type.
- au_prc¶
Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.
- Type
float
- au_roc¶
Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.
- Type
float
- confidence_metrics_entry¶
Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,…,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of position_threshold, but from these no aggregated metrics are computed.
- Type
MutableSequence[google.cloud.automl_v1.types.ClassificationEvaluationMetrics.ConfidenceMetricsEntry]
- confusion_matrix¶
Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for evaluation per label.
- annotation_spec_id¶
Output only. The annotation spec ids used for this evaluation.
- Type
MutableSequence[str]
- class ConfidenceMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metrics for a single confidence threshold.
- confidence_threshold¶
Output only. Metrics are computed with an assumption that the model never returns predictions with score lower than this value.
- Type
float
- position_threshold¶
Output only. Metrics are computed with an assumption that the model always returns at most this many predictions (ordered by their score, descendingly), but they all still need to meet the confidence_threshold.
- Type
- false_positive_rate¶
Output only. False Positive Rate for the given confidence threshold.
- Type
- recall_at1¶
Output only. The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
- Type
- precision_at1¶
Output only. The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
- Type
- false_positive_rate_at1¶
Output only. The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
- Type
- f1_score_at1¶
Output only. The harmonic mean of [recall_at1][google.cloud.automl.v1.ClassificationEvaluationMetrics.ConfidenceMetricsEntry.recall_at1] and [precision_at1][google.cloud.automl.v1.ClassificationEvaluationMetrics.ConfidenceMetricsEntry.precision_at1].
- Type
- true_positive_count¶
Output only. The number of model-created labels that match a ground truth label.
- Type
- false_positive_count¶
Output only. The number of model-created labels that do not match a ground truth label.
- Type
- false_negative_count¶
Output only. The number of ground truth labels that are not matched by a model-created label.
- Type
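The count fields above fully determine the derived precision, recall, and F1 metrics; as a quick sketch (hypothetical helper name, not part of this library):

```python
def derived_metrics(true_positives: int, false_positives: int, false_negatives: int):
    """Compute precision, recall, and F1 (their harmonic mean) from raw counts."""
    predicted = true_positives + false_positives
    actual = true_positives + false_negatives
    precision = true_positives / predicted if predicted else 0.0
    recall = true_positives / actual if actual else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

For example, 8 matched labels with 2 spurious and 2 missed give precision, recall, and F1 all equal to 0.8.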
- class ConfusionMatrix(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Confusion matrix of the model running the classification.
- annotation_spec_id¶
Output only. IDs of the annotation specs used in the confusion matrix. For Tables CLASSIFICATION [prediction_type][google.cloud.automl.v1p1beta.TablesModelMetadata.prediction_type], only the list of [annotation_spec_display_name-s][] is populated.
- Type
MutableSequence[str]
- display_name¶
Output only. Display name of the annotation specs used in the confusion matrix, as they were at the moment of the evaluation. For Tables CLASSIFICATION [prediction_type-s][google.cloud.automl.v1p1beta.TablesModelMetadata.prediction_type], distinct values of the target column at the moment of the model evaluation are populated here.
- Type
MutableSequence[str]
- row¶
Output only. Rows in the confusion matrix. The number of rows is equal to the size of `annotation_spec_id`. `row[i].example_count[j]` is the number of examples that have ground truth of `annotation_spec_id[i]` and are predicted as `annotation_spec_id[j]` by the model being evaluated.
- Type
MutableSequence[google.cloud.automl_v1.types.ClassificationEvaluationMetrics.ConfusionMatrix.Row]
- class Row(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Output only. A row in the confusion matrix.
- example_count¶
Output only. Value of the specific cell in the confusion matrix. The number of values each row has (i.e. the length of the row) is equal to the length of the `annotation_spec_id` field or, if that one is not populated, the length of the [display_name][google.cloud.automl.v1.ClassificationEvaluationMetrics.ConfusionMatrix.display_name] field.
- Type
MutableSequence[int]
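To illustrate the row/column convention above, a small sketch with plain Python lists standing in for the `Row` messages (helper name hypothetical): `matrix[r][c]` counts examples whose ground truth is label `r` and whose prediction is label `c`.

```python
def per_label_counts(matrix, i):
    """Given matrix[r][c] = examples with ground truth r predicted as c,
    return (true_positives, false_positives, false_negatives) for label i."""
    tp = matrix[i][i]
    fp = sum(row[i] for row in matrix) - tp  # predicted as i, ground truth != i
    fn = sum(matrix[i]) - tp                 # ground truth i, predicted != i
    return tp, fp, fn
```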
- class google.cloud.automl_v1.types.ClassificationType(value)[source]¶
Bases:
proto.enums.Enum
Type of the classification problem.
- Values:
- CLASSIFICATION_TYPE_UNSPECIFIED (0):
An un-set value of this enum.
- MULTICLASS (1):
At most one label is allowed per example.
- MULTILABEL (2):
Multiple labels are allowed for one example.
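The distinction matters when preparing annotations; a minimal sketch of the per-example constraint (hypothetical helper, not a library call):

```python
def check_labels(labels, classification_type):
    """Enforce the per-example label-count rule for a classification type."""
    if classification_type == "MULTICLASS" and len(labels) > 1:
        raise ValueError("MULTICLASS allows at most one label per example")
    return labels
```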
- class google.cloud.automl_v1.types.CreateDatasetOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of CreateDataset operation.
- class google.cloud.automl_v1.types.CreateDatasetRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.CreateDataset][google.cloud.automl.v1.AutoMl.CreateDataset].
- dataset¶
Required. The dataset to create.
- class google.cloud.automl_v1.types.CreateModelOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of CreateModel operation.
- class google.cloud.automl_v1.types.CreateModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.CreateModel][google.cloud.automl.v1.AutoMl.CreateModel].
- model¶
Required. The model to create.
- class google.cloud.automl_v1.types.Dataset(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- translation_dataset_metadata¶
Metadata for a dataset used for translation.
This field is a member of oneof
dataset_metadata
.
- image_classification_dataset_metadata¶
Metadata for a dataset used for image classification.
This field is a member of oneof
dataset_metadata
.
- text_classification_dataset_metadata¶
Metadata for a dataset used for text classification.
This field is a member of oneof
dataset_metadata
.
- image_object_detection_dataset_metadata¶
Metadata for a dataset used for image object detection.
This field is a member of oneof
dataset_metadata
.
- text_extraction_dataset_metadata¶
Metadata for a dataset used for text extraction.
This field is a member of oneof
dataset_metadata
.
- text_sentiment_dataset_metadata¶
Metadata for a dataset used for text sentiment.
This field is a member of oneof
dataset_metadata
.
- name¶
Output only. The resource name of the dataset. Form:
projects/{project_id}/locations/{location_id}/datasets/{dataset_id}
- Type
- display_name¶
Required. The name of the dataset to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9.
- Type
- description¶
User-provided description of the dataset. The description can be up to 25000 characters long.
- Type
- create_time¶
Output only. Timestamp when this dataset was created.
- etag¶
Used to perform consistent read-modify-write updates. If not set, a blind “overwrite” update happens.
- Type
- labels¶
Optional. The labels with user-defined metadata to organize your dataset. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter.
See https://goo.gl/xmQnxf for more information on and examples of labels.
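The key/value rules above can be approximated with a regular expression (ASCII-only sketch; the service also permits international characters, so treat this as illustrative rather than authoritative):

```python
import re

# Keys: start with a letter; lowercase letters, digits, underscores, dashes; <= 64 chars.
_KEY_RE = re.compile(r"^[a-z][a-z0-9_-]{0,63}$")
# Values: optional (may be empty); same character set; <= 64 chars.
_VALUE_RE = re.compile(r"^[a-z0-9_-]{0,64}$")

def valid_label(key: str, value: str = "") -> bool:
    """Approximate check of the dataset label constraints described above."""
    return bool(_KEY_RE.fullmatch(key) and _VALUE_RE.fullmatch(value))
```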
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.automl_v1.types.DeleteDatasetRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.DeleteDataset][google.cloud.automl.v1.AutoMl.DeleteDataset].
- class google.cloud.automl_v1.types.DeleteModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.DeleteModel][google.cloud.automl.v1.AutoMl.DeleteModel].
- class google.cloud.automl_v1.types.DeleteOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of operations that perform deletes of any entities.
- class google.cloud.automl_v1.types.DeployModelOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of DeployModel operation.
- class google.cloud.automl_v1.types.DeployModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.DeployModel][google.cloud.automl.v1.AutoMl.DeployModel].
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- image_object_detection_model_deployment_metadata¶
Model deployment metadata specific to Image Object Detection.
This field is a member of oneof
model_deployment_metadata
.
- class google.cloud.automl_v1.types.Document(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A structured text document, e.g. a PDF.
- input_config¶
An input config specifying the content of the document.
- document_text¶
The plain text version of this document.
- layout¶
Describes the layout of the document. Sorted by [page_number][].
- Type
MutableSequence[google.cloud.automl_v1.types.Document.Layout]
- document_dimensions¶
The dimensions of the page in the document.
- class Layout(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes the layout information of a [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in the document.
- text_segment¶
Text Segment that represents a segment in [document_text][google.cloud.automl.v1p1beta.Document.document_text].
- page_number¶
Page number of the [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in the original document, starting from 1.
- Type
- bounding_poly¶
The position of the [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in the page. Contains exactly 4 [normalized_vertices][google.cloud.automl.v1p1beta.BoundingPoly.normalized_vertices] and they are connected by edges in the order provided, which will represent a rectangle parallel to the frame. The [NormalizedVertex-s][google.cloud.automl.v1p1beta.NormalizedVertex] are relative to the page. Coordinates are based on top-left as point (0,0).
- text_segment_type¶
The type of the [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in document.
- class TextSegmentType(value)[source]¶
Bases:
proto.enums.Enum
The type of TextSegment in the context of the original document.
- Values:
- TEXT_SEGMENT_TYPE_UNSPECIFIED (0):
Should not be used.
- TOKEN (1):
The text segment is a token, e.g. a word.
- PARAGRAPH (2):
The text segment is a paragraph.
- FORM_FIELD (3):
The text segment is a form field.
- FORM_FIELD_NAME (4):
The text segment is the name part of a form field. It will be treated as a child of another FORM_FIELD TextSegment if its span is a subspan of another TextSegment with type FORM_FIELD.
- FORM_FIELD_CONTENTS (5):
The text segment is the text content part of a form field. It will be treated as a child of another FORM_FIELD TextSegment if its span is a subspan of another TextSegment with type FORM_FIELD.
- TABLE (6):
The text segment is a whole table, including headers and all rows.
- TABLE_HEADER (7):
The text segment is a table’s headers. It will be treated as a child of another TABLE TextSegment if its span is a subspan of another TextSegment with type TABLE.
- TABLE_ROW (8):
The text segment is a row in a table. It will be treated as a child of another TABLE TextSegment if its span is a subspan of another TextSegment with type TABLE.
- TABLE_CELL (9):
The text segment is a cell in a table. It will be treated as a child of another TABLE_ROW TextSegment if its span is a subspan of another TextSegment with type TABLE_ROW.
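The parent/child relationships described above hinge on one span being a subspan of another; a minimal sketch using (start, end) offset pairs as a hypothetical stand-in for TextSegment spans:

```python
def is_subspan(inner, outer):
    """True if inner = (start, end) lies entirely within outer = (start, end)."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def find_parent(segment, candidates):
    """Return the first candidate whose span contains the segment's span, if any."""
    for cand in candidates:
        if cand != segment and is_subspan(segment, cand):
            return cand
    return None
```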
- class google.cloud.automl_v1.types.DocumentDimensions(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Message that describes dimension of a document.
- unit¶
Unit of the dimension.
- class DocumentDimensionUnit(value)[source]¶
Bases:
proto.enums.Enum
Unit of the document dimension.
- Values:
- DOCUMENT_DIMENSION_UNIT_UNSPECIFIED (0):
Should not be used.
- INCH (1):
Document dimension is measured in inches.
- CENTIMETER (2):
Document dimension is measured in centimeters.
- POINT (3):
Document dimension is measured in points. 72 points = 1 inch.
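Since 72 points equal 1 inch (and 1 inch is 2.54 cm), converting between the units above is straightforward; a sketch with a hypothetical helper keyed by the enum value names:

```python
POINTS_PER_INCH = 72.0
CM_PER_INCH = 2.54

def to_inches(value: float, unit: str) -> float:
    """Convert a document dimension to inches, given its DocumentDimensionUnit name."""
    if unit == "INCH":
        return value
    if unit == "CENTIMETER":
        return value / CM_PER_INCH
    if unit == "POINT":
        return value / POINTS_PER_INCH
    raise ValueError(f"unsupported unit: {unit}")
```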
- class google.cloud.automl_v1.types.DocumentInputConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Input configuration of a [Document][google.cloud.automl.v1.Document].
- gcs_source¶
The Google Cloud Storage location of the document file. Only a single path should be given.
Max supported size: 512MB.
Supported extensions: .PDF.
- class google.cloud.automl_v1.types.ExamplePayload(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Example data used for training or prediction.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- class google.cloud.automl_v1.types.ExportDataOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of ExportData operation.
- output_info¶
Output only. Information further describing this export data’s output.
- class ExportDataOutputInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Further describes this export data’s output. Supplements [OutputConfig][google.cloud.automl.v1.OutputConfig].
- class google.cloud.automl_v1.types.ExportDataRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.ExportData][google.cloud.automl.v1.AutoMl.ExportData].
- output_config¶
Required. The desired output location.
- class google.cloud.automl_v1.types.ExportModelOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of ExportModel operation.
- output_info¶
Output only. Information further describing the output of this model export.
- class ExportModelOutputInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Further describes the output of model export. Supplements [ModelExportOutputConfig][google.cloud.automl.v1.ModelExportOutputConfig].
- class google.cloud.automl_v1.types.ExportModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]. Models need to be enabled for exporting, otherwise an error code will be returned.
- output_config¶
Required. The desired output location and configuration.
- class google.cloud.automl_v1.types.GcsDestination(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The Google Cloud Storage location where the output is to be written to.
- class google.cloud.automl_v1.types.GcsSource(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The Google Cloud Storage location for the input content.
- class google.cloud.automl_v1.types.GetAnnotationSpecRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.GetAnnotationSpec][google.cloud.automl.v1.AutoMl.GetAnnotationSpec].
- class google.cloud.automl_v1.types.GetDatasetRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.GetDataset][google.cloud.automl.v1.AutoMl.GetDataset].
- class google.cloud.automl_v1.types.GetModelEvaluationRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.GetModelEvaluation][google.cloud.automl.v1.AutoMl.GetModelEvaluation].
- class google.cloud.automl_v1.types.GetModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.GetModel][google.cloud.automl.v1.AutoMl.GetModel].
- class google.cloud.automl_v1.types.Image(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A representation of an image. Only images up to 30MB in size are supported.
- class google.cloud.automl_v1.types.ImageClassificationDatasetMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataset metadata that is specific to image classification.
- classification_type¶
Required. Type of the classification problem.
- class google.cloud.automl_v1.types.ImageClassificationModelDeploymentMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model deployment metadata specific to Image Classification.
- node_count¶
Input only. The number of nodes to deploy the model on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the model’s [node_qps][google.cloud.automl.v1.ImageClassificationModelMetadata.node_qps]. Must be between 1 and 100, inclusive on both ends.
- Type
- class google.cloud.automl_v1.types.ImageClassificationModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model metadata for image classification.
- base_model_id¶
Optional. The ID of the `base` model. If it is specified, the new model will be created based on the `base` model. Otherwise, the new model will be created from scratch. The `base` model must be in the same `project` and `location` as the new model to create, and have the same `model_type`.
- Type
- train_budget_milli_node_hours¶
Optional. The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual `train_cost` will be equal to or less than this value. If further model training ceases to provide any improvements, training stops without using the full budget and the stop_reason will be `MODEL_CONVERGED`. Note: node_hour = actual_hour * number_of_nodes_involved. For model type `cloud` (default), the train budget must be between 8,000 and 800,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time. For model types `mobile-low-latency-1`, `mobile-versatile-1`, `mobile-high-accuracy-1`, `mobile-core-ml-low-latency-1`, `mobile-core-ml-versatile-1`, and `mobile-core-ml-high-accuracy-1`, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.
- Type
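To make the unit concrete: 1,000 milli node hours equal one node hour, and wall-clock time scales inversely with the number of nodes involved. A quick sketch (hypothetical helpers, using the `cloud` model type's 8,000–800,000 range stated above):

```python
def node_hours(milli_node_hours: int) -> float:
    """1,000 milli node hours == 1 node hour."""
    return milli_node_hours / 1000.0

def validate_cloud_budget(milli_node_hours: int) -> int:
    """Range check for the `cloud` image classification model type."""
    if not 8_000 <= milli_node_hours <= 800_000:
        raise ValueError("cloud train budget must be in [8000, 800000] milli node hours")
    return milli_node_hours
```

The default of 192,000 milli node hours is thus 192 node hours, i.e. one day of wall time on 8 nodes.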
- train_cost_milli_node_hours¶
Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. 1,000 value in this field means 1 node hour. Guaranteed to not exceed the train budget.
- Type
- stop_reason¶
Output only. The reason that this create model operation stopped, e.g. `BUDGET_REACHED`, `MODEL_CONVERGED`.
- Type
- model_type¶
Optional. Type of the model. The available values are:
- `cloud` - Model to be used via prediction calls to AutoML API. This is the default value.
- `mobile-low-latency-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
- `mobile-versatile-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards.
- `mobile-high-accuracy-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
- `mobile-core-ml-low-latency-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
- `mobile-core-ml-versatile-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile device with Core ML afterwards.
- `mobile-core-ml-high-accuracy-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
- Type
- node_qps¶
Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.
- Type
- class google.cloud.automl_v1.types.ImageObjectDetectionAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Annotation details for image object detection.
- bounding_box¶
Output only. The rectangle representing the object location.
- class google.cloud.automl_v1.types.ImageObjectDetectionDatasetMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataset metadata specific to image object detection.
- class google.cloud.automl_v1.types.ImageObjectDetectionEvaluationMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model evaluation metrics for image object detection problems. Evaluates prediction quality of labeled bounding boxes.
- evaluated_bounding_box_count¶
Output only. The total number of bounding boxes (i.e. summed over all images) the ground truth used to create this evaluation had.
- Type
- bounding_box_metrics_entries¶
Output only. The bounding boxes match metrics for each Intersection-over-union threshold 0.05,0.10,…,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,…,0.95,0.96,0.97,0.98,0.99 pair.
- Type
MutableSequence[google.cloud.automl_v1.types.BoundingBoxMetricsEntry]
- class google.cloud.automl_v1.types.ImageObjectDetectionModelDeploymentMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model deployment metadata specific to Image Object Detection.
- node_count¶
Input only. The number of nodes to deploy the model on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the model’s [qps_per_node][google.cloud.automl.v1.ImageObjectDetectionModelMetadata.qps_per_node]. Must be between 1 and 100, inclusive on both ends.
- Type
- class google.cloud.automl_v1.types.ImageObjectDetectionModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model metadata specific to image object detection.
- model_type¶
Optional. Type of the model. The available values are:
- `cloud-high-accuracy-1` - (default) A model to be used via prediction calls to AutoML API. Expected to have a higher latency, but should also have a higher prediction quality than other models.
- `cloud-low-latency-1` - A model to be used via prediction calls to AutoML API. Expected to have low latency, but may have lower prediction quality than other models.
- `mobile-low-latency-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
- `mobile-versatile-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards.
- `mobile-high-accuracy-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
- Type
- node_count¶
Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the qps_per_node field.
- Type
- node_qps¶
Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.
- Type
- stop_reason¶
Output only. The reason that this create model operation stopped, e.g. `BUDGET_REACHED`, `MODEL_CONVERGED`.
- Type
- train_budget_milli_node_hours¶
Optional. The train budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual `train_cost` will be equal to or less than this value. If further model training ceases to provide any improvements, training stops without using the full budget and the stop_reason will be `MODEL_CONVERGED`. Note: node_hour = actual_hour * number_of_nodes_involved. For model types `cloud-high-accuracy-1` (default) and `cloud-low-latency-1`, the train budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000, which represents one day in wall time. For model types `mobile-low-latency-1`, `mobile-versatile-1`, `mobile-high-accuracy-1`, `mobile-core-ml-low-latency-1`, `mobile-core-ml-versatile-1`, and `mobile-core-ml-high-accuracy-1`, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.
- Type
- class google.cloud.automl_v1.types.ImportDataOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of ImportData operation.
- class google.cloud.automl_v1.types.ImportDataRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData].
- name¶
Required. Dataset name. Dataset must already exist. All imported annotations and examples will be added.
- Type
- input_config¶
Required. The desired input location and its domain specific semantics, if any.
- class google.cloud.automl_v1.types.InputConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Input configuration for [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData] action.
The format of input depends on the dataset_metadata of the Dataset into which the import is happening. As input source the [gcs_source][google.cloud.automl.v1.InputConfig.gcs_source] is expected, unless specified otherwise. Additionally, any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an “example” file (that is, image, video etc.) with identical content (even if it had a different `GCS_FILE_PATH`) is mentioned multiple times, then its label, bounding boxes etc. are appended. The same file should always be provided with the same `ML_USE` and `GCS_FILE_PATH`; if it is not, then these values are nondeterministically selected from the given ones.
The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:
AutoML Vision
Classification
See Preparing your training data for more information.
CSV file(s) with each line in format:
`ML_USE,GCS_FILE_PATH,LABEL,LABEL,...`
- `ML_USE` - Identifies the data set that the current row (file) applies to. This value can be one of the following:
- `TRAIN` - Rows in this file are used to train the model.
- `TEST` - Rows in this file are used to test the model during training.
- `UNASSIGNED` - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
- `GCS_FILE_PATH` - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG, .WEBP, .BMP, .TIFF, .ICO.
- `LABEL` - A label that identifies the object in the image.
For the `MULTICLASS` classification type, at most one `LABEL` is allowed per image. If an image has not yet been labeled, then it should be mentioned just once with no `LABEL`.
Some sample rows:
TRAIN,gs://folder/image1.jpg,daisy
TEST,gs://folder/image2.jpg,dandelion,tulip,rose
UNASSIGNED,gs://folder/image3.jpg,daisy
UNASSIGNED,gs://folder/image4.jpg
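Rows in this format split cleanly on commas; a parsing sketch (hypothetical helper, handling the unlabeled case where the row ends after the file path):

```python
def parse_classification_row(line: str):
    """Parse ML_USE,GCS_FILE_PATH,LABEL,LABEL,... into (ml_use, path, labels)."""
    parts = line.strip().split(",")
    ml_use, gcs_path = parts[0], parts[1]
    labels = [p for p in parts[2:] if p]  # an unlabeled image has no label columns
    return ml_use, gcs_path, labels
```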
Object Detection
See [Preparing your training data](https://cloud.google.com/vision/automl/object-detection/docs/prepare) for more information.
A CSV file(s) with each line in format:
`ML_USE,GCS_FILE_PATH,[LABEL],(BOUNDING_BOX | ,,,,,,,)`
- `ML_USE` - Identifies the data set that the current row (file) applies to. This value can be one of the following:
- `TRAIN` - Rows in this file are used to train the model.
- `TEST` - Rows in this file are used to test the model during training.
- `UNASSIGNED` - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
- `GCS_FILE_PATH` - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. Each image is assumed to be exhaustively labeled.
- `LABEL` - A label that identifies the object in the image specified by the `BOUNDING_BOX`.
- `BOUNDING_BOX` - The vertices of an object in the example image. The minimum allowed `BOUNDING_BOX` edge length is 0.01, and no more than 500 `BOUNDING_BOX` instances per image are allowed (one `BOUNDING_BOX` per line). If an image contains none of the objects being looked for, then it should be mentioned just once with no LABEL and the “,,,,,,,” in place of the `BOUNDING_BOX`.
Four sample rows:
TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,,
TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,,
UNASSIGNED,gs://folder/im2.png,car,0.1,0.1,0.2,0.1,0.2,0.3,0.1,0.3
TEST,gs://folder/im3.png,,,,,,,,,
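The bounding box columns are eight comma-separated vertex coordinates, with the two-corner shorthand leaving four of them empty; a parsing sketch (hypothetical helper, not part of the library):

```python
def parse_detection_row(line: str):
    """Parse ML_USE,GCS_FILE_PATH,LABEL followed by up to 8 vertex coordinates.
    Empty fields (two-corner shorthand, or a no-object row) become None."""
    parts = line.strip().split(",")
    ml_use, gcs_path = parts[0], parts[1]
    label = parts[2] or None
    coords = [float(p) if p else None for p in parts[3:11]]
    return ml_use, gcs_path, label, coords
```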
AutoML Video Intelligence
Classification
See Preparing your training data for more information.
CSV file(s) with each line in format:
`ML_USE,GCS_FILE_PATH`
For `ML_USE`, do not use `VALIDATE`.
`GCS_FILE_PATH` is the path to another .csv file that describes training examples for a given `ML_USE`, using the following row format:
`GCS_FILE_PATH,(LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END | ,,)`
Here `GCS_FILE_PATH` leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. `TIME_SEGMENT_START` and `TIME_SEGMENT_END` must be within the length of the video, and the end time must be after the start time. Any segment of a video which has one or more labels on it is considered a hard negative for all other labels. Any segment with no labels on it is considered to be unknown. If a whole video is unknown, then it should be mentioned just once with “,,” in place of `LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END`.
Sample top level CSV file:
TRAIN,gs://folder/train_videos.csv
TEST,gs://folder/test_videos.csv
UNASSIGNED,gs://folder/other_videos.csv
Sample rows of a CSV file for a particular ML_USE:
gs://folder/video1.avi,car,120,180.000021
gs://folder/video1.avi,bike,150,180.000021
gs://folder/vid2.avi,car,0,60.5
gs://folder/vid3.avi,,,
Object Tracking
See Preparing your training data for more information.
CSV file(s) with each line in format:
`ML_USE,GCS_FILE_PATH`
For `ML_USE`, do not use `VALIDATE`.
`GCS_FILE_PATH` is the path to another .csv file that describes training examples for a given `ML_USE`, using the following row format:
`GCS_FILE_PATH,LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX`
or
`GCS_FILE_PATH,,,,,,,,,,`
Here `GCS_FILE_PATH` leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. Providing `INSTANCE_ID`s can help to obtain a better model. When a specific labeled entity leaves the video frame and shows up afterwards, it is not required, albeit preferable, that the same `INSTANCE_ID` is given to it. `TIMESTAMP` must be within the length of the video; the `BOUNDING_BOX` is assumed to be drawn on the video frame closest to the `TIMESTAMP`. Any frame mentioned by a `TIMESTAMP` is expected to be exhaustively labeled, and no more than 500 `BOUNDING_BOX`es per frame are allowed. If a whole video is unknown, then it should be mentioned just once with “,,,,,,,,,,” in place of `LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX`.
Sample top level CSV file:
TRAIN,gs://folder/train_videos.csv
TEST,gs://folder/test_videos.csv
UNASSIGNED,gs://folder/other_videos.csv
Seven sample rows of a CSV file for a particular ML_USE:
gs://folder/video1.avi,car,1,12.10,0.8,0.8,0.9,0.8,0.9,0.9,0.8,0.9
gs://folder/video1.avi,car,1,12.90,0.4,0.8,0.5,0.8,0.5,0.9,0.4,0.9
gs://folder/video1.avi,car,2,12.10,.4,.2,.5,.2,.5,.3,.4,.3
gs://folder/video1.avi,car,2,12.90,.8,.2,,,.9,.3,,
gs://folder/video1.avi,bike,,12.50,.45,.45,,,.55,.55,,
gs://folder/video2.avi,car,1,0,.1,.9,,,.9,.1,,
gs://folder/video2.avi,,,,,,,,,,,
AutoML Natural Language
Entity Extraction
See Preparing your training data for more information.
One or more CSV file(s) with each line in the following format:
ML_USE,GCS_FILE_PATH
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
GCS_FILE_PATH - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text as documents for model training.
After the training data set has been determined from the TRAIN and UNASSIGNED CSV files, the training data is divided into train and validation data sets: 70% for training and 30% for validation.
For example:
TRAIN,gs://folder/file1.jsonl VALIDATE,gs://folder/file2.jsonl TEST,gs://folder/file3.jsonl
In-line JSONL files
In-line .JSONL files contain, per line, a JSON document that wraps a [text_snippet][google.cloud.automl.v1.TextSnippet] field followed by one or more [annotations][google.cloud.automl.v1.AnnotationPayload] fields, which have display_name and text_extraction fields to describe the entity from the text snippet. Multiple JSON documents can be separated using line breaks (\n).
The supplied text must be annotated exhaustively. For example, if you include the text "horse", but do not label it as "animal", then "horse" is assumed to not be an "animal".
Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded. ASCII is accepted as it is UTF-8 NFC encoded.
For example:
{ "text_snippet": { "content": "dog car cat" }, "annotations": [ { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 0, "end_offset": 2} } }, { "display_name": "vehicle", "text_extraction": { "text_segment": {"start_offset": 4, "end_offset": 6} } }, { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 8, "end_offset": 10} } } ] }\n { "text_snippet": { "content": "This dog is good." }, "annotations": [ { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 5, "end_offset": 7} } } ] }
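One line of the in-line JSONL format above can be consumed with the standard json module; a minimal sketch that recovers the annotated entities from a line (the snippet content and offsets are taken from the first example above, where end offsets are inclusive):

```python
import json

# One JSONL line, shortened from the example above.
line = ('{"text_snippet": {"content": "dog car cat"}, '
        '"annotations": ['
        '{"display_name": "animal", "text_extraction": '
        '{"text_segment": {"start_offset": 0, "end_offset": 2}}}, '
        '{"display_name": "vehicle", "text_extraction": '
        '{"text_segment": {"start_offset": 4, "end_offset": 6}}}]}')

doc = json.loads(line)
content = doc["text_snippet"]["content"]

# Recover each annotated entity as (display_name, covered_text).
entities = []
for ann in doc["annotations"]:
    seg = ann["text_extraction"]["text_segment"]
    # end_offset treated as inclusive, matching the example offsets above.
    entities.append((ann["display_name"],
                     content[seg["start_offset"]:seg["end_offset"] + 1]))
```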
JSONL files that reference documents
.JSONL files contain, per line, a JSON document that wraps an input_config field that contains the path to a source document. Multiple JSON documents can be separated using line breaks (\n).
Supported document extensions: .PDF, .TIF, .TIFF
For example:
{ "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.tif" ] } } } }
In-line JSONL files with document layout information
Note: You can only annotate documents using the UI. The format described below applies to annotated documents exported using the UI or exportData.
In-line .JSONL files for documents contain, per line, a JSON document that wraps a document field that provides the textual content of the document and the layout information.
For example:
{ "document": { "document_text": { "content": "dog car cat" }, "layout": [ { "text_segment": { "start_offset": 0, "end_offset": 11 }, "page_number": 1, "bounding_poly": { "normalized_vertices": [ {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3}, {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1} ] }, "text_segment_type": TOKEN } ], "document_dimensions": { "width": 8.27, "height": 11.69, "unit": INCH }, "page_count": 3 }, "annotations": [ { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 0, "end_offset": 3} } }, { "display_name": "vehicle", "text_extraction": { "text_segment": {"start_offset": 4, "end_offset": 7} } }, { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 8, "end_offset": 11} } } ] }
Classification
See Preparing your training data for more information.
One or more CSV file(s) with each line in the following format:
ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,...
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with a supported extension and UTF-8 encoding, for example, "gs://folder/content.txt"; AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the content must be 10MB or less in size. For zip files, each file inside the zip must be 10MB or less in size.
For the MULTICLASS classification type, at most one LABEL is allowed.
The ML_USE and LABEL columns are optional. Supported file extensions: .TXT, .PDF, .TIF, .TIFF, .ZIP
A maximum of 100 unique labels are allowed per CSV row.
Sample rows:
TRAIN,"They have bad food and very rude",RudeService,BadFood gs://folder/content.txt,SlowService TEST,gs://folder/document.pdf VALIDATE,gs://folder/text_files.zip,BadFood
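The "gs://" pattern rule above can be expressed directly in code; a minimal sketch of a helper that formats the (TEXT_SNIPPET | GCS_FILE_PATH) column of a row (the helper name is illustrative, not part of the client library):

```python
def content_column(value):
    """Return the (TEXT_SNIPPET | GCS_FILE_PATH) column for one CSV row.
    A value prefixed with "gs://" is passed through as a GCS_FILE_PATH;
    anything else is wrapped in double quotes as a TEXT_SNIPPET."""
    if value.startswith("gs://"):
        return value
    # CSV-escape any embedded double quotes before wrapping.
    return '"{}"'.format(value.replace('"', '""'))

row = "TRAIN,{},RudeService,BadFood".format(
    content_column("They have bad food and very rude"))
```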
Sentiment Analysis
See Preparing your training data for more information.
CSV file(s) with each line in format:
ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with a supported extension and UTF-8 encoding, for example, "gs://folder/content.txt"; AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the content must be 128kB or less in size. For zip files, each file inside the zip must be 128kB or less in size.
The ML_USE and SENTIMENT columns are optional. Supported file extensions: .TXT, .PDF, .TIF, .TIFF, .ZIP
SENTIMENT - An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment - a higher value means a more positive sentiment. All the values are completely relative, i.e. neither 0 needs to mean a negative or neutral sentiment nor sentiment_max a positive one - it is only required that 0 is the least positive sentiment in the data, and sentiment_max is the most positive one. SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range will be used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be large.
Sample rows:
TRAIN,"@freewrytin this is way too good for your product",2 gs://folder/content.txt,3 TEST,gs://folder/document.pdf VALIDATE,gs://folder/text_files.zip,2
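The SENTIMENT constraint above (an integer in [0, sentiment_max]) is easy to validate before import; a sketch, with the helper name being an assumption:

```python
def check_sentiment(value, sentiment_max):
    """Validate the SENTIMENT column: an integer between 0 and
    Dataset.text_sentiment_dataset_metadata.sentiment_max, inclusive."""
    s = int(value)  # raises ValueError for non-integer column content
    if not 0 <= s <= sentiment_max:
        raise ValueError(f"SENTIMENT {s} outside [0, {sentiment_max}]")
    return s
```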
AutoML Tables
See Preparing your training data for more information.
You can use either [gcs_source][google.cloud.automl.v1.InputConfig.gcs_source] or [bigquery_source][google.cloud.automl.v1.InputConfig.bigquery_source]. All input is concatenated into a single [primary_table_spec_id][google.cloud.automl.v1.TablesDatasetMetadata.primary_table_spec_id]
For gcs_source:
CSV file(s), where the first row of the first file is the header, containing unique column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns.
Each .CSV file by itself must be 10GB or smaller, and their total size must be 100GB or smaller.
First three sample rows of a CSV file:
"Id","First Name","Last Name","Dob","Addresses" "1","John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]" "2","Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]"
For bigquery_source:
A URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller.
An imported table must have between 2 and 1,000 columns, inclusive, and between 1,000 and 100,000,000 rows, inclusive. At most 5 import data operations can run in parallel.
Input field definitions:
ML_USE: ("TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED") Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when the user has no preference.
GCS_FILE_PATH: The path to a file on Google Cloud Storage. For example, "gs://folder/image1.png".
LABEL: A display name of an object on an image, video etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. For each label an AnnotationSpec is created whose display_name becomes the label; AnnotationSpecs are given back in predictions.
INSTANCE_ID: A positive integer that identifies a specific instance of a labeled entity on an example. Used e.g. to track two cars on a video while being able to tell apart which one is which.
BOUNDING_BOX: (VERTEX,VERTEX,VERTEX,VERTEX | VERTEX,,,VERTEX,,) A rectangle parallel to the frame of the example (image, video). If 4 vertices are given they are connected by edges in the order provided; if 2 are given they are recognized as diagonally opposite vertices of the rectangle.
VERTEX: (COORDINATE,COORDINATE) The first coordinate is horizontal (x), the second is vertical (y).
COORDINATE: A float in the 0 to 1 range, relative to the total length of the image or video in the given dimension. For fractions the leading non-decimal 0 can be omitted (i.e. 0.3 = .3). Point 0,0 is in the top left.
TIME_SEGMENT_START: (TIME_OFFSET) Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_SEGMENT_END: (TIME_OFFSET) Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_OFFSET: A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to microsecond precision. "inf" is allowed, and it means the end of the example.
TEXT_SNIPPET: The content of a text snippet, UTF-8 encoded, enclosed within double quotes ("").
DOCUMENT: A field that provides the textual content of the document and the layout information.
Errors:
If any of the provided CSV files can't be parsed, or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.
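The BOUNDING_BOX and VERTEX definitions above can be sketched as a small formatter; both the four-vertex form and the two-vertex diagonal short form are shown (the helper name is illustrative, not part of the client library):

```python
def bounding_box(x1, y1, x2, y2, diagonal=True):
    """Format a BOUNDING_BOX from two diagonally opposite vertices
    (COORDINATEs are floats in [0, 1]). diagonal=True emits the
    VERTEX,,,VERTEX,, short form; otherwise all four corners of the
    axis-parallel rectangle are listed in connection order."""
    fmt = lambda c: f"{c:g}"
    if diagonal:
        return f"{fmt(x1)},{fmt(y1)},,,{fmt(x2)},{fmt(y2)},,"
    return ",".join(fmt(c) for c in (x1, y1, x2, y1, x2, y2, x1, y2))
```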
- gcs_source¶
The Google Cloud Storage location for the input content. For [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData], gcs_source points to a CSV file with a structure described in [InputConfig][google.cloud.automl.v1.InputConfig].
This field is a member of oneof source.
- params¶
Additional domain-specific parameters describing the semantics of the imported data; any string must be up to 25000 characters long.
AutoML Tables
schema_inference_version: (integer) This value must be supplied. The version of the algorithm to use for the initial inference of the column data types of the imported table. Allowed values: "1".
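For AutoML Tables, the params map above is supplied alongside the input config. A hedged sketch of the request built as a plain mapping (the project, dataset ID, and bucket path are placeholders), which could then be passed to AutoMlClient.import_data:

```python
# Request for AutoMl.ImportData as a plain mapping. The name and the
# CSV path are placeholders; the params entry follows the
# schema_inference_version description above.
import_request = {
    "name": "projects/my-project/locations/us-central1/datasets/TBL123",
    "input_config": {
        "gcs_source": {"input_uris": ["gs://my-bucket/table.csv"]},
        "params": {"schema_inference_version": "1"},
    },
}
# e.g. automl_v1.AutoMlClient().import_data(request=import_request)
# returns a long-running operation.
```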
- class ParamsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.automl_v1.types.ListDatasetsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.ListDatasets][google.cloud.automl.v1.AutoMl.ListDatasets].
- filter¶
An expression for filtering the results of the request.
dataset_metadata - for existence of the case (e.g. image_classification_dataset_metadata:*). Some examples of using the filter are:
translation_dataset_metadata:* --> The dataset has translation_dataset_metadata.
- Type
- page_size¶
Requested page size. Server may return fewer results than requested. If unspecified, server will pick a default size.
- Type
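The filter and page_size fields above can be combined in one request; a hedged sketch as a plain mapping (project and location are placeholders), which could be passed to AutoMlClient.list_datasets:

```python
# Request for AutoMl.ListDatasets as a plain mapping; the filter uses
# the dataset_metadata existence form documented above.
list_request = {
    "parent": "projects/my-project/locations/us-central1",
    "filter": "translation_dataset_metadata:*",
    "page_size": 50,  # the server may return fewer results
}
# e.g. for dataset in automl_v1.AutoMlClient().list_datasets(request=list_request): ...
```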
- class google.cloud.automl_v1.types.ListDatasetsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [AutoMl.ListDatasets][google.cloud.automl.v1.AutoMl.ListDatasets].
- datasets¶
The datasets read.
- Type
MutableSequence[google.cloud.automl_v1.types.Dataset]
- class google.cloud.automl_v1.types.ListModelEvaluationsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations].
- parent¶
Required. Resource name of the model to list the model evaluations for. If modelId is set as “-”, this will list model evaluations from across all models of the parent location.
- Type
- filter¶
Required. An expression for filtering the results of the request.
annotation_spec_id - for =, != or existence. See the examples below for the last.
Some examples of using the filter are:
annotation_spec_id!=4 --> The model evaluation was done for an annotation spec with an ID different from 4.
NOT annotation_spec_id:* --> The model evaluation was done for the aggregate of all annotation specs.
- Type
- page_token¶
A token identifying a page of results for the server to return. Typically obtained via [ListModelEvaluationsResponse.next_page_token][google.cloud.automl.v1.ListModelEvaluationsResponse.next_page_token] of the previous [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations] call.
- Type
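The parent, filter, and page_token fields above fit together as follows; a hedged sketch as a plain mapping (project and location are placeholders), usable with AutoMlClient.list_model_evaluations:

```python
# Request for AutoMl.ListModelEvaluations; model ID "-" lists
# evaluations across all models of the location, per the parent
# description above. The filter is required.
eval_request = {
    "parent": "projects/my-project/locations/us-central1/models/-",
    "filter": "annotation_spec_id!=4",
    "page_token": "",  # fill from next_page_token of a previous response
}
```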
- class google.cloud.automl_v1.types.ListModelEvaluationsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations].
- model_evaluation¶
List of model evaluations in the requested page.
- Type
MutableSequence[google.cloud.automl_v1.types.ModelEvaluation]
- next_page_token¶
A token to retrieve next page of results. Pass to the [ListModelEvaluationsRequest.page_token][google.cloud.automl.v1.ListModelEvaluationsRequest.page_token] field of a new [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations] request to obtain that page.
- Type
- class google.cloud.automl_v1.types.ListModelsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.ListModels][google.cloud.automl.v1.AutoMl.ListModels].
- filter¶
An expression for filtering the results of the request.
model_metadata - for existence of the case (e.g. video_classification_model_metadata:*).
dataset_id - for = or !=.
Some examples of using the filter are:
image_classification_model_metadata:* --> The model has image_classification_model_metadata.
dataset_id=5 --> The model was created from a dataset with ID 5.
- Type
- class google.cloud.automl_v1.types.ListModelsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [AutoMl.ListModels][google.cloud.automl.v1.AutoMl.ListModels].
- model¶
List of models in the requested page.
- Type
MutableSequence[google.cloud.automl_v1.types.Model]
- class google.cloud.automl_v1.types.Model(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
API proto representing a trained machine learning model.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- translation_model_metadata¶
Metadata for translation models.
This field is a member of oneof model_metadata.
- image_classification_model_metadata¶
Metadata for image classification models.
This field is a member of oneof model_metadata.
- text_classification_model_metadata¶
Metadata for text classification models.
This field is a member of oneof model_metadata.
- image_object_detection_model_metadata¶
Metadata for image object detection models.
This field is a member of oneof model_metadata.
- text_extraction_model_metadata¶
Metadata for text extraction models.
This field is a member of oneof model_metadata.
- text_sentiment_model_metadata¶
Metadata for text sentiment models.
This field is a member of oneof model_metadata.
- name¶
Output only. Resource name of the model. Format:
projects/{project_id}/locations/{location_id}/models/{model_id}
- Type
- display_name¶
Required. The name of the model to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. It must start with a letter.
- Type
- dataset_id¶
Required. The resource ID of the dataset used to create the model. The dataset must come from the same ancestor project and location.
- Type
- create_time¶
Output only. Timestamp when the model training finished and can be used for prediction.
- update_time¶
Output only. Timestamp when this model was last updated.
- deployment_state¶
Output only. Deployment state of the model. A model can only serve prediction requests after it gets deployed.
- etag¶
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
- Type
- labels¶
Optional. The labels with user-defined metadata to organize your model. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter.
See https://goo.gl/xmQnxf for more information on and examples of labels.
- class DeploymentState(value)[source]¶
Bases:
proto.enums.Enum
Deployment state of the model.
- Values:
- DEPLOYMENT_STATE_UNSPECIFIED (0):
Should not be used; an unset enum has this value by default.
- DEPLOYED (1):
Model is deployed.
- UNDEPLOYED (2):
Model is not deployed.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.automl_v1.types.ModelEvaluation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Evaluation results of a model.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- classification_evaluation_metrics¶
Model evaluation metrics for image, text, video and tables classification. Tables problem is considered a classification when the target column is CATEGORY DataType.
This field is a member of oneof metrics.
- translation_evaluation_metrics¶
Model evaluation metrics for translation.
This field is a member of oneof metrics.
- image_object_detection_evaluation_metrics¶
Model evaluation metrics for image object detection.
This field is a member of oneof metrics.
- text_sentiment_evaluation_metrics¶
Evaluation metrics for text sentiment models.
This field is a member of oneof metrics.
- text_extraction_evaluation_metrics¶
Evaluation metrics for text extraction models.
This field is a member of oneof metrics.
- name¶
Output only. Resource name of the model evaluation. Format:
projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
- Type
- annotation_spec_id¶
Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. For Tables, annotation specs in the dataset do not exist and this ID is always not set, but for CLASSIFICATION [prediction_type-s][google.cloud.automl.v1.TablesModelMetadata.prediction_type] the [display_name][google.cloud.automl.v1.ModelEvaluation.display_name] field is used.
- Type
- display_name¶
Output only. The value of [display_name][google.cloud.automl.v1.AnnotationSpec.display_name] at the moment when the model was trained. Because this field returns a value at model training time, for different models trained from the same dataset the values may differ, since display names could have been changed between the two models' trainings. For Tables CLASSIFICATION [prediction_type-s][google.cloud.automl.v1.TablesModelMetadata.prediction_type], distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
- Type
- create_time¶
Output only. Timestamp when this model evaluation was created.
- evaluated_example_count¶
Output only. The number of examples used for model evaluation, i.e. for which ground truth from time of model creation is compared against the predicted annotations created by the model. For overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that according to the ground truth were annotated by the [annotation_spec_id][google.cloud.automl.v1.ModelEvaluation.annotation_spec_id].
- Type
- class google.cloud.automl_v1.types.ModelExportOutputConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Output configuration for ModelExport Action.
- gcs_destination¶
Required. The Google Cloud Storage location where the model is to be written to. This location may only be set for the following model formats: “tflite”, “edgetpu_tflite”, “tf_saved_model”, “tf_js”, “core_ml”.
Under the directory given as the destination, a new one with the name "model-export-<timestamp>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format, will be created. Inside it, the model and any of its supporting files will be written.
This field is a member of oneof destination.
- model_format¶
The format in which the model must be exported. The available, and default, formats depend on the problem and model type (if a given problem and type combination doesn't have a format listed, its models are not exportable):
For Image Classification mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1: “tflite” (default), “edgetpu_tflite”, “tf_saved_model”, “tf_js”, “docker”.
For Image Classification mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, mobile-core-ml-high-accuracy-1: “core_ml” (default).
For Image Object Detection mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1: “tflite”, “tf_saved_model”, “tf_js”. Formats description:
tflite - Used for Android mobile devices.
edgetpu_tflite - Used for Edge TPU devices.
tf_saved_model - A TensorFlow model in SavedModel format.
tf_js - A TensorFlow.js model that can be used in the browser and in Node.js using JavaScript.
docker - Used for Docker containers. Use the params field to customize the container. The container is verified to work correctly on Ubuntu 16.04. See more at the containers quickstart.
core_ml - Used for iOS mobile devices.
- Type
- params¶
Additional model-type and format specific parameters describing the requirements for the model files to be exported; any string must be up to 25000 characters long.
For docker format:
cpu_architecture - (string) "x86_64" (default).
gpu_architecture - (string) "none" (default), "nvidia".
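A hedged sketch of the output config above as a plain mapping: a docker export with the optional gpu_architecture param (the bucket path is a placeholder), which could be passed with a model name to AutoMlClient.export_model:

```python
# ModelExportOutputConfig as a plain mapping: a docker-format export
# with the gpu_architecture param from the list above.
export_config = {
    "gcs_destination": {"output_uri_prefix": "gs://my-bucket/exports/"},
    "model_format": "docker",
    "params": {"gpu_architecture": "nvidia"},
}
# e.g. automl_v1.AutoMlClient().export_model(
#     request={"name": model_name, "output_config": export_config})
```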
- class ParamsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.automl_v1.types.NormalizedVertex(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A vertex represents a 2D point in the image. The normalized vertex coordinates are fractions between 0 and 1 relative to the original plane (image, video). E.g. if the plane (e.g. the whole image) had size 10 x 20, then a point with normalized coordinates (0.1, 0.3) would be at position (1, 6) on that plane.
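The normalization described above can be inverted with a one-line mapping; a sketch reproducing the 10 x 20 example (the helper name is illustrative):

```python
def denormalize(vertex, width, height):
    """Map a NormalizedVertex (fractions in [0, 1]) back to absolute
    coordinates on a plane of the given size."""
    x, y = vertex
    return (x * width, y * height)
```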
- class google.cloud.automl_v1.types.OperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata used across all long running operations returned by AutoML API.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- undeploy_model_details¶
Details of an UndeployModel operation.
This field is a member of oneof details.
- create_dataset_details¶
Details of CreateDataset operation.
This field is a member of oneof details.
- partial_failures¶
Output only. Partial failures encountered. E.g. single files that couldn’t be read. This field should never exceed 20 entries. Status details field will contain standard GCP error details.
- Type
MutableSequence[google.rpc.status_pb2.Status]
- create_time¶
Output only. Time when the operation was created.
- update_time¶
Output only. Time when the operation was updated for the last time.
- class google.cloud.automl_v1.types.OutputConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
For Translation: CSV file translation.csv, with each line in format: ML_USE,GCS_FILE_PATH. GCS_FILE_PATH leads to a .TSV file which describes examples that have a given ML_USE, using the following row format per line: TEXT_SNIPPET (in source language) \t TEXT_SNIPPET (in target language)
For Tables: Output depends on whether the dataset was imported from Google Cloud Storage or BigQuery. Google Cloud Storage case: [gcs_destination][google.cloud.automl.v1p1beta.OutputConfig.gcs_destination] must be set. Exported are CSV file(s) tables_1.csv, tables_2.csv, ..., tables_N.csv, each having as header line the table's column names, with all other lines containing values for the header columns. BigQuery case: [bigquery_destination][google.cloud.automl.v1p1beta.OutputConfig.bigquery_destination] pointing to a BigQuery project must be set. In the given project a new dataset will be created with the name export_data_<automl-dataset-display-name>_<timestamp-of-export-call>, where <automl-dataset-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and the timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In that dataset a new table called primary_table will be created, and filled with precisely the same data as obtained on import.
- gcs_destination¶
Required. The Google Cloud Storage location where the output is to be written to. For Image Object Detection, Text Extraction, Video Classification and Tables, a new directory will be created in the given directory with the name export_data-<timestamp>, where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export output will be written into that directory.
This field is a member of oneof destination.
- class google.cloud.automl_v1.types.PredictRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [PredictionService.Predict][google.cloud.automl.v1.PredictionService.Predict].
- payload¶
Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.
- params¶
Additional domain-specific parameters; any string must be up to 25000 characters long.
AutoML Vision Classification
score_threshold: (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.
AutoML Vision Object Detection
score_threshold: (float) When the model detects objects in the image, it will only produce bounding boxes which have at least this confidence score. Value in the 0 to 1 range; the default is 0.5.
max_bounding_box_count: (int64) The maximum number of bounding boxes returned. The default is 100. The number of returned bounding boxes might be limited by the server.
AutoML Tables
feature_importance: (boolean) Whether [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects. The default is false.
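The params above are passed as a string-to-string map on the request; a hedged sketch for an object detection model as a plain mapping (model name and image bytes are placeholders), usable with PredictionServiceClient.predict:

```python
# PredictRequest for an image object detection model. Param values are
# strings, since the API's params field is a map<string, string>.
predict_request = {
    "name": "projects/my-project/locations/us-central1/models/IOD456",
    "payload": {"image": {"image_bytes": b"..."}},  # placeholder bytes
    "params": {
        "score_threshold": "0.6",        # only boxes with >= 0.6 confidence
        "max_bounding_box_count": "50",  # cap on returned boxes
    },
}
# e.g. automl_v1.PredictionServiceClient().predict(request=predict_request)
```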
- class ParamsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.automl_v1.types.PredictResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Response message for [PredictionService.Predict][google.cloud.automl.v1.PredictionService.Predict].
- payload¶
Prediction result. AutoML Translation and AutoML Natural Language Sentiment Analysis return precisely one payload.
- Type
MutableSequence[google.cloud.automl_v1.types.AnnotationPayload]
- preprocessed_input¶
The preprocessed example that AutoML actually makes prediction on. Empty if AutoML does not preprocess the input example.
For AutoML Natural Language (Classification, Entity Extraction, and Sentiment Analysis), if the input is a document, the recognized text is returned in the [document_text][google.cloud.automl.v1.Document.document_text] property.
- metadata¶
Additional domain-specific prediction response metadata.
AutoML Vision Object Detection
max_bounding_box_count: (int64) The maximum number of bounding boxes returned per image.
AutoML Natural Language Sentiment Analysis
sentiment_score: (float, deprecated) A value between -1 and 1, where -1 maps to the least positive sentiment and 1 maps to the most positive one; the higher the score, the more positive the sentiment in the document. These values are relative to the training data, so e.g. if all data was positive then -1 is also positive (though the least). sentiment_score is not the same as "score" and "magnitude" from Sentiment Analysis in the Natural Language API.
- class MetadataEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
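Like the request params, the response metadata above is a string-keyed map (the MetadataEntry message). A sketch of pulling the deprecated sentiment_score out of it, with a plain dict standing in for the response's map field and an illustrative helper name:

```python
def extract_sentiment_score(metadata):
    """Parse the deprecated sentiment_score entry, if present.

    `metadata` is a string-to-string mapping, as in PredictResponse.metadata.
    """
    raw = metadata.get("sentiment_score")
    return float(raw) if raw is not None else None

# A plain dict standing in for response.metadata:
score = extract_sentiment_score({"sentiment_score": "-0.25"})
```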
- class google.cloud.automl_v1.types.TextClassificationDatasetMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataset metadata for classification.
- classification_type¶
Required. Type of the classification problem.
- class google.cloud.automl_v1.types.TextClassificationModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model metadata that is specific to text classification.
- classification_type¶
Output only. Classification type of the dataset used to train this model.
- class google.cloud.automl_v1.types.TextExtractionAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Annotation for identifying spans of text.
- class google.cloud.automl_v1.types.TextExtractionDatasetMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataset metadata that is specific to text extraction.
- class google.cloud.automl_v1.types.TextExtractionEvaluationMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model evaluation metrics for text extraction problems.
- confidence_metrics_entries¶
Output only. Metrics that have confidence thresholds. Precision-recall curve can be derived from it.
- Type
MutableSequence[google.cloud.automl_v1.types.TextExtractionEvaluationMetrics.ConfidenceMetricsEntry]
- class ConfidenceMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metrics for a single confidence threshold.
- confidence_threshold¶
Output only. The confidence threshold value used to compute the metrics. Only annotations with score of at least this threshold are considered to be ones the model would return.
- Type
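As noted above, a precision-recall curve can be derived from the confidence metrics entries. A sketch of that derivation, assuming each entry also carries the precision and recall computed at its threshold (plain dicts stand in for the ConfidenceMetricsEntry messages here):

```python
def precision_recall_curve(entries):
    """Derive (recall, precision) points from confidence metrics entries.

    Each entry is assumed to hold a confidence_threshold plus the
    precision and recall measured at that threshold.
    """
    # Order points by threshold so the curve is traversed consistently.
    ordered = sorted(entries, key=lambda e: e["confidence_threshold"])
    return [(e["recall"], e["precision"]) for e in ordered]

curve = precision_recall_curve([
    {"confidence_threshold": 0.9, "precision": 0.95, "recall": 0.40},
    {"confidence_threshold": 0.5, "precision": 0.80, "recall": 0.75},
])
```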
- class google.cloud.automl_v1.types.TextExtractionModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model metadata that is specific to text extraction.
- class google.cloud.automl_v1.types.TextSegment(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.
- start_offset¶
Required. Zero-based character index of the first character of the text segment (counting characters from the beginning of the text).
- Type
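Because start_offset is a zero-based character index, a TextSegment maps directly onto a Python slice of the text, together with the segment's end offset (a companion field of the message, assumed exclusive here):

```python
text = "AutoML extracts entities from text."
# Zero-based character indices; the end offset is assumed exclusive.
start_offset, end_offset = 0, 6
segment_content = text[start_offset:end_offset]  # "AutoML"
```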
- class google.cloud.automl_v1.types.TextSentimentAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Contains annotation details specific to text sentiment.
- sentiment¶
Output only. The sentiment, with the semantics given to [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData] when populating the dataset from which the model used for the prediction was trained. The sentiment values are between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive), with a higher value meaning a more positive sentiment. They are completely relative, i.e. 0 means the least positive sentiment and sentiment_max the most positive of the sentiments present in the training data. Therefore, for example, if the training data contained only negative sentiment, then sentiment_max would still be negative (although the least negative). The sentiment should not be confused with the “score” or “magnitude” from the previous Natural Language Sentiment Analysis API.
- Type
- class google.cloud.automl_v1.types.TextSentimentDatasetMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataset metadata for text sentiment.
- sentiment_max¶
Required. A sentiment is expressed as an integer ordinal, where higher value means a more positive sentiment. The range of sentiments that will be used is between 0 and sentiment_max (inclusive on both ends), and all the values in the range must be represented in the dataset before a model can be created. sentiment_max value must be between 1 and 10 (inclusive).
- Type
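Before a model can be created, every ordinal from 0 through sentiment_max must appear in the dataset. A quick sketch of that coverage check (the helper is illustrative, not part of the library):

```python
def sentiment_range_is_covered(labels, sentiment_max):
    """True if every ordinal in [0, sentiment_max] appears in labels."""
    if not 1 <= sentiment_max <= 10:
        raise ValueError("sentiment_max must be between 1 and 10 (inclusive)")
    # Every value in the range must be represented at least once.
    return set(range(sentiment_max + 1)) <= set(labels)

ok = sentiment_range_is_covered([0, 1, 2, 2, 1, 0], sentiment_max=2)  # True
```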
- class google.cloud.automl_v1.types.TextSentimentEvaluationMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model evaluation metrics for text sentiment problems.
- mean_absolute_error¶
Output only. Mean absolute error. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
- Type
- mean_squared_error¶
Output only. Mean squared error. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
- Type
- linear_kappa¶
Output only. Linear weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
- Type
- quadratic_kappa¶
Output only. Quadratic weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
- Type
- confusion_matrix¶
Output only. Confusion matrix of the evaluation. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
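These metrics are output-only and computed by the service; for intuition, a sketch of how mean absolute and mean squared error fall out of predicted vs. true sentiment ordinals (plain lists stand in for the evaluation data):

```python
def sentiment_errors(true_ordinals, predicted_ordinals):
    """Mean absolute and mean squared error over sentiment ordinals."""
    diffs = [p - t for t, p in zip(true_ordinals, predicted_ordinals)]
    n = len(diffs)
    mae = sum(abs(d) for d in diffs) / n
    mse = sum(d * d for d in diffs) / n
    return mae, mse

mae, mse = sentiment_errors([0, 1, 2, 3], [0, 2, 2, 1])
# mae = 0.75, mse = 1.25
```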
- class google.cloud.automl_v1.types.TextSentimentModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model metadata that is specific to text sentiment.
- class google.cloud.automl_v1.types.TextSnippet(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A representation of a text snippet.
- content¶
Required. The content of the text snippet as a string. Up to 250000 characters long.
- Type
- mime_type¶
Optional. The format of [content][google.cloud.automl.v1.TextSnippet.content]. Currently the only two allowed values are “text/html” and “text/plain”. If left blank, the format is automatically determined from the type of the uploaded [content][google.cloud.automl.v1.TextSnippet.content].
- Type
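The constraints above (content length, allowed mime types) can be checked client-side before sending. A sketch with a plain dict standing in for the TextSnippet message; the helper name is illustrative:

```python
def make_text_snippet(content, mime_type=""):
    """Validate TextSnippet fields; a plain dict stands in for the proto."""
    if len(content) > 250000:
        raise ValueError("content must be at most 250000 characters")
    # Blank mime_type lets the service infer the format.
    if mime_type and mime_type not in ("text/plain", "text/html"):
        raise ValueError("mime_type must be 'text/plain' or 'text/html'")
    return {"content": content, "mime_type": mime_type}

snippet = make_text_snippet("A short review.", mime_type="text/plain")
```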
- class google.cloud.automl_v1.types.TranslationAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Annotation details specific to translation.
- translated_content¶
Output only. The translated content.
- class google.cloud.automl_v1.types.TranslationDatasetMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataset metadata that is specific to translation.
- class google.cloud.automl_v1.types.TranslationEvaluationMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Evaluation metrics for the dataset.
- class google.cloud.automl_v1.types.TranslationModelMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Model metadata that is specific to translation.
- base_model¶
The resource name of the model to use as a baseline to train the custom model. If unset, we use the default base model provided by Google Translate. Format:
projects/{project_id}/locations/{location_id}/models/{model_id}
- Type
- source_language_code¶
Output only. Inferred from the dataset. The source language (the BCP-47 language code) that is used for training.
- Type
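The base_model field expects a full model resource name in the format shown above. A sketch of assembling it; the project, location, and model IDs are placeholders, not real resources:

```python
# Placeholder IDs for illustration only.
project_id, location_id, model_id = "my-project", "us-central1", "TRL123"

base_model = f"projects/{project_id}/locations/{location_id}/models/{model_id}"
```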
- class google.cloud.automl_v1.types.UndeployModelOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Details of UndeployModel operation.
- class google.cloud.automl_v1.types.UndeployModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.UndeployModel][google.cloud.automl.v1.AutoMl.UndeployModel].
- class google.cloud.automl_v1.types.UpdateDatasetRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.UpdateDataset][google.cloud.automl.v1.AutoMl.UpdateDataset].
- dataset¶
Required. The dataset which replaces the resource on the server.
- update_mask¶
Required. The update mask applies to the resource.
- class google.cloud.automl_v1.types.UpdateModelRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request message for [AutoMl.UpdateModel][google.cloud.automl.v1.AutoMl.UpdateModel].
- model¶
Required. The model which replaces the resource on the server.
- update_mask¶
Required. The update mask applies to the resource.
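In both update requests, the update mask names which fields of the resource to overwrite; fields not listed keep their server-side values. A sketch of that merge semantics with plain dicts standing in for the resource and mask (the helper is illustrative, not part of the library):

```python
def apply_update_mask(current, replacement, paths):
    """Merge `replacement` into `current`, touching only masked top-level paths."""
    updated = dict(current)
    for path in paths:
        # Only fields named in the mask are overwritten.
        updated[path] = replacement.get(path)
    return updated

current = {"display_name": "old-name", "description": "keep me"}
model = apply_update_mask(current, {"display_name": "new-name"}, ["display_name"])
# {'display_name': 'new-name', 'description': 'keep me'}
```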