Class: Google::Apis::MlV1::GoogleCloudMlV1Version
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  generated/google/apis/ml_v1/classes.rb,
  generated/google/apis/ml_v1/representations.rb
Overview
Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list.
Instance Attribute Summary
-
#accelerator_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1AcceleratorConfig
Represents a hardware accelerator request config.
-
#auto_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1AutoScaling
Options for automatically scaling a model.
-
#create_time ⇒ String
Output only.
-
#deployment_uri ⇒ String
Required.
-
#description ⇒ String
Optional.
-
#error_message ⇒ String
Output only.
-
#etag ⇒ String
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other.
-
#explanation_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ExplanationConfig
Message holding configuration options for explaining model predictions.
-
#framework ⇒ String
Optional.
-
#is_default ⇒ Boolean
(also: #is_default?)
Output only.
-
#labels ⇒ Hash<String,String>
Optional.
-
#last_use_time ⇒ String
Output only.
-
#machine_type ⇒ String
Optional.
-
#manual_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1ManualScaling
Options for manually scaling a model.
-
#name ⇒ String
Required.
-
#package_uris ⇒ Array<String>
Optional.
-
#prediction_class ⇒ String
Optional.
-
#python_version ⇒ String
Required.
-
#request_logging_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1RequestLoggingConfig
Configuration for logging request-response pairs to a BigQuery table.
-
#runtime_version ⇒ String
Required.
-
#service_account ⇒ String
Optional.
-
#state ⇒ String
Output only.
Instance Method Summary
-
#initialize(**args) ⇒ GoogleCloudMlV1Version
constructor
A new instance of GoogleCloudMlV1Version.
-
#update!(**args) ⇒ Object
Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ GoogleCloudMlV1Version
Returns a new instance of GoogleCloudMlV1Version.
# File 'generated/google/apis/ml_v1/classes.rb', line 2932

def initialize(**args)
  update!(**args)
end
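Every property documented below can be passed as a keyword argument to the constructor, which simply delegates to #update!. A minimal construction sketch, assuming the google-api-client gem is installed; the bucket path and label values are placeholders:

require 'google/apis/ml_v1'

# Placeholders: replace the bucket path and values with your own.
version = Google::Apis::MlV1::GoogleCloudMlV1Version.new(
  name: 'v1',
  deployment_uri: 'gs://example-bucket/model-dir/',
  runtime_version: '1.15',
  python_version: '3.7',
  framework: 'TENSORFLOW',
  machine_type: 'mls1-c1-m2',
  labels: { 'stage' => 'test' }
)

# Properties can also be changed after construction:
version.update!(description: 'First deployed version')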
Instance Attribute Details
#accelerator_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1AcceleratorConfig
Represents a hardware accelerator request config.
Note that the AcceleratorConfig can be used in both Jobs and Versions.
Learn more about accelerators for training and accelerators for online prediction.
Corresponds to the JSON property acceleratorConfig
# File 'generated/google/apis/ml_v1/classes.rb', line 2686

def accelerator_config
  @accelerator_config
end
#auto_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1AutoScaling
Options for automatically scaling a model.
Corresponds to the JSON property autoScaling
# File 'generated/google/apis/ml_v1/classes.rb', line 2691

def auto_scaling
  @auto_scaling
end
#create_time ⇒ String
Output only. The time the version was created.
Corresponds to the JSON property createTime
# File 'generated/google/apis/ml_v1/classes.rb', line 2696

def create_time
  @create_time
end
#deployment_uri ⇒ String
Required. The Cloud Storage location of the trained model used to create the version. See the guide to model deployment for more information.
When passing Version to projects.models.versions.create, the model service uses the specified location as the source of the model. Once deployed, the model version is hosted by the prediction service, so this location is useful only as a historical record.
The total number of model files can't exceed 1000.
Corresponds to the JSON property deploymentUri
# File 'generated/google/apis/ml_v1/classes.rb', line 2711

def deployment_uri
  @deployment_uri
end
#description ⇒ String
Optional. The description specified for the version when it was created.
Corresponds to the JSON property description
# File 'generated/google/apis/ml_v1/classes.rb', line 2716

def description
  @description
end
#error_message ⇒ String
Output only. The details of a failure or a cancellation.
Corresponds to the JSON property errorMessage
# File 'generated/google/apis/ml_v1/classes.rb', line 2721

def error_message
  @error_message
end
#etag ⇒ String
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates in order to avoid race conditions: an etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change will be applied to the model as intended.
Corresponds to the JSON property etag
NOTE: Values are automatically base64 encoded/decoded in the client library.
# File 'generated/google/apis/ml_v1/classes.rb', line 2733

def etag
  @etag
end
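A read-modify-write cycle with this etag might look like the sketch below. It assumes the generated service class Google::Apis::MlV1::CloudMachineLearningEngineService and its get_project_model_version / patch_project_model_version helpers, plus application-default credentials from the googleauth gem; verify the exact method names and signatures against your client version. The project, model, and version names are placeholders.

require 'google/apis/ml_v1'
require 'googleauth'

ml = Google::Apis::MlV1::CloudMachineLearningEngineService.new
ml.authorization = Google::Auth.get_application_default(
  ['https://www.googleapis.com/auth/cloud-platform']
)

name = 'projects/my-project/models/my-model/versions/v1'  # placeholder resource name

# Read: fetch the current version, including its etag.
version = ml.get_project_model_version(name)

# Modify: change a mutable field, keeping the etag returned by the read.
version.description = 'Updated description'

# Write: send the object (etag included) back; the update fails if the
# version changed since the read.
ml.patch_project_model_version(name, version, update_mask: 'description')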
#explanation_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ExplanationConfig
Message holding configuration options for explaining model predictions.
There are two feature attribution methods supported for TensorFlow models: integrated gradients and sampled Shapley.
Learn more about feature attributions.
Corresponds to the JSON property explanationConfig
# File 'generated/google/apis/ml_v1/classes.rb', line 2742

def explanation_config
  @explanation_config
end
#framework ⇒ String
Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater.
Do not specify a framework if you're deploying a custom prediction routine.
If you specify a Compute Engine (N1) machine type in the machineType field, you must specify TENSORFLOW for the framework.
Corresponds to the JSON property framework
# File 'generated/google/apis/ml_v1/classes.rb', line 2758

def framework
  @framework
end
#is_default ⇒ Boolean Also known as: is_default?
Output only. If true, this version will be used to handle prediction
requests that do not specify a version.
You can change the default version by calling projects.models.versions.setDefault.
Corresponds to the JSON property isDefault
# File 'generated/google/apis/ml_v1/classes.rb', line 2766

def is_default
  @is_default
end
#labels ⇒ Hash<String,String>
Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply.
For more information, see the documentation on using labels.
Corresponds to the JSON property labels
# File 'generated/google/apis/ml_v1/classes.rb', line 2776

def labels
  @labels
end
#last_use_time ⇒ String
Output only. The time the version was last used for prediction.
Corresponds to the JSON property lastUseTime
# File 'generated/google/apis/ml_v1/classes.rb', line 2781

def last_use_time
  @last_use_time
end
#machine_type ⇒ String
Optional. The type of machine on which to serve the model. Currently only applies to online prediction service. If this field is not specified, it defaults to mls1-c1-m2.
Online prediction supports the following machine types:
- mls1-c1-m2
- mls1-c4-m2
- n1-standard-2
- n1-standard-4
- n1-standard-8
- n1-standard-16
- n1-standard-32
- n1-highmem-2
- n1-highmem-4
- n1-highmem-8
- n1-highmem-16
- n1-highmem-32
- n1-highcpu-2
- n1-highcpu-4
- n1-highcpu-8
- n1-highcpu-16
- n1-highcpu-32
mls1-c1-m2 is generally available. All other machine types are available in beta. Learn more about the differences between machine types.
Corresponds to the JSON property machineType
# File 'generated/google/apis/ml_v1/classes.rb', line 2809

def machine_type
  @machine_type
end
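A sketch combining a Compute Engine (N1) machine type with a GPU accelerator; it assumes GoogleCloudMlV1AcceleratorConfig exposes count and type attributes and that NVIDIA_TESLA_K80 is an accelerator type available in your region, so treat both as assumptions to verify:

version = Google::Apis::MlV1::GoogleCloudMlV1Version.new(
  name: 'v1',
  deployment_uri: 'gs://example-bucket/model-dir/',   # placeholder path
  machine_type: 'n1-standard-4',
  framework: 'TENSORFLOW',        # N1 machine types require TENSORFLOW
  runtime_version: '1.15',
  python_version: '3.7',
  accelerator_config: Google::Apis::MlV1::GoogleCloudMlV1AcceleratorConfig.new(
    count: 1,                     # assumed attribute names on AcceleratorConfig
    type: 'NVIDIA_TESLA_K80'
  )
)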
#manual_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1ManualScaling
Options for manually scaling a model.
Corresponds to the JSON property manualScaling
# File 'generated/google/apis/ml_v1/classes.rb', line 2814

def manual_scaling
  @manual_scaling
end
#name ⇒ String
Required. The name specified for the version when it was created.
The version name must be unique within the model it is created in.
Corresponds to the JSON property name
# File 'generated/google/apis/ml_v1/classes.rb', line 2820

def name
  @name
end
#package_uris ⇒ Array<String>
Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code.
For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies your Predictor or scikit-learn pipeline uses that are not already included in your selected runtime version.
If you specify this field, you must also set runtimeVersion to 1.4 or greater.
Corresponds to the JSON property packageUris
# File 'generated/google/apis/ml_v1/classes.rb', line 2836

def package_uris
  @package_uris
end
#prediction_class ⇒ String
Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field.
Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type.
The following code sample provides the Predictor interface:

class Predictor(object):
    """Interface for constructing custom predictors."""

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list must
            be JSON serializable.
        """
        raise NotImplementedError()

    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating the
                version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        raise NotImplementedError()

Learn more about the Predictor interface and custom prediction routines.
Corresponds to the JSON property predictionClass
# File 'generated/google/apis/ml_v1/classes.rb', line 2883

def prediction_class
  @prediction_class
end
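Putting the custom prediction routine fields together, a version configuration might look like the following sketch; the bucket, package archive, and module path are placeholders, and the values follow the constraints described above (runtime 1.4 or greater, legacy MLS1 machine type):

# Sketch of a Version configured for a custom prediction routine (beta).
version = Google::Apis::MlV1::GoogleCloudMlV1Version.new(
  name: 'v1',
  deployment_uri: 'gs://example-bucket/model-dir/',                         # placeholder
  package_uris: ['gs://example-bucket/packages/my_predictor-0.1.tar.gz'],   # placeholder
  prediction_class: 'predictor.MyPredictor',   # module_name.class_name
  runtime_version: '1.15',                     # must be 1.4 or greater
  python_version: '3.7',
  machine_type: 'mls1-c1-m2'                   # custom routines require a legacy (MLS1) type
)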
#python_version ⇒ String
Required. The version of Python used in prediction. The following Python versions are available:
- Python '3.7' is available when runtime_version is set to '1.15' or later.
- Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'.
- Python '2.7' is available when runtime_version is set to '1.15' or earlier.
Read more about the Python versions available for each runtime version.
Corresponds to the JSON property pythonVersion
# File 'generated/google/apis/ml_v1/classes.rb', line 2897

def python_version
  @python_version
end
#request_logging_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1RequestLoggingConfig
Configuration for logging request-response pairs to a BigQuery table.
Online prediction requests to a model version and the responses to these requests are converted to raw strings and saved to the specified BigQuery table. Logging is constrained by BigQuery quotas and limits. If your project exceeds BigQuery quotas or limits, AI Platform Prediction does not log request-response pairs, but it continues to serve predictions.
If you are using continuous evaluation, you do not need to specify this configuration manually. Setting up continuous evaluation automatically enables logging of request-response pairs.
Corresponds to the JSON property requestLoggingConfig
# File 'generated/google/apis/ml_v1/classes.rb', line 2912

def request_logging_config
  @request_logging_config
end
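As a hedged sketch, attaching request-response logging when building a version could look like this; it assumes GoogleCloudMlV1RequestLoggingConfig exposes bigquery_table_name and sampling_percentage attributes (check that class's documentation), and the table name is a placeholder:

logging = Google::Apis::MlV1::GoogleCloudMlV1RequestLoggingConfig.new(
  bigquery_table_name: 'my_project.prediction_logs.v1_requests',  # placeholder table
  sampling_percentage: 0.1   # assumed field; logs roughly 10% of requests
)

version = Google::Apis::MlV1::GoogleCloudMlV1Version.new(
  name: 'v1',
  deployment_uri: 'gs://example-bucket/model-dir/',   # placeholder path
  request_logging_config: logging
)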
#runtime_version ⇒ String
Required. The AI Platform runtime version to use for this deployment.
For more information, see the
runtime version list and
how to manage runtime versions.
Corresponds to the JSON property runtimeVersion
# File 'generated/google/apis/ml_v1/classes.rb', line 2920

def runtime_version
  @runtime_version
end
#service_account ⇒ String
Optional. Specifies the service account for resource access control.
Corresponds to the JSON property serviceAccount
# File 'generated/google/apis/ml_v1/classes.rb', line 2925

def service_account
  @service_account
end
#state ⇒ String
Output only. The state of a version.
Corresponds to the JSON property state
# File 'generated/google/apis/ml_v1/classes.rb', line 2930

def state
  @state
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'generated/google/apis/ml_v1/classes.rb', line 2937

def update!(**args)
  @accelerator_config = args[:accelerator_config] if args.key?(:accelerator_config)
  @auto_scaling = args[:auto_scaling] if args.key?(:auto_scaling)
  @create_time = args[:create_time] if args.key?(:create_time)
  @deployment_uri = args[:deployment_uri] if args.key?(:deployment_uri)
  @description = args[:description] if args.key?(:description)
  @error_message = args[:error_message] if args.key?(:error_message)
  @etag = args[:etag] if args.key?(:etag)
  @explanation_config = args[:explanation_config] if args.key?(:explanation_config)
  @framework = args[:framework] if args.key?(:framework)
  @is_default = args[:is_default] if args.key?(:is_default)
  @labels = args[:labels] if args.key?(:labels)
  @last_use_time = args[:last_use_time] if args.key?(:last_use_time)
  @machine_type = args[:machine_type] if args.key?(:machine_type)
  @manual_scaling = args[:manual_scaling] if args.key?(:manual_scaling)
  @name = args[:name] if args.key?(:name)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @prediction_class = args[:prediction_class] if args.key?(:prediction_class)
  @python_version = args[:python_version] if args.key?(:python_version)
  @request_logging_config = args[:request_logging_config] if args.key?(:request_logging_config)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @service_account = args[:service_account] if args.key?(:service_account)
  @state = args[:state] if args.key?(:state)
end