Class: Google::Apis::MlV1::GoogleCloudMlV1Version

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/ml_v1/classes.rb,
generated/google/apis/ml_v1/representations.rb

Overview

Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list.

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ GoogleCloudMlV1Version

Returns a new instance of GoogleCloudMlV1Version



# File 'generated/google/apis/ml_v1/classes.rb', line 1893

def initialize(**args)
   update!(**args)
end
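The constructor simply splats all keyword arguments through to `update!`, so every attribute documented below can be set at construction time. A minimal self-contained sketch of this pattern (a hypothetical `VersionSketch` class standing in for the generated one, with only two of its attributes):

```ruby
# Sketch of the keyword-splat constructor pattern used by the generated
# classes: attributes arrive as keyword arguments and are forwarded to
# update!, which assigns only the keys that were actually provided.
class VersionSketch
  attr_accessor :name, :description

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @name = args[:name] if args.key?(:name)
    @description = args[:description] if args.key?(:description)
  end
end

v = VersionSketch.new(name: 'v1')
v.name         # => "v1"
v.description  # => nil (never assigned)
```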

Instance Attribute Details

#accelerator_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1AcceleratorConfig

Represents a hardware accelerator request config. Note that the AcceleratorConfig can be used in both Jobs and Versions. Learn more about accelerators for training and accelerators for online prediction. Corresponds to the JSON property acceleratorConfig



# File 'generated/google/apis/ml_v1/classes.rb', line 1660

def accelerator_config
  @accelerator_config
end

#auto_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1AutoScaling

Options for automatically scaling a model. Corresponds to the JSON property autoScaling



# File 'generated/google/apis/ml_v1/classes.rb', line 1665

def auto_scaling
  @auto_scaling
end

#create_time ⇒ String

Output only. The time the version was created. Corresponds to the JSON property createTime

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1670

def create_time
  @create_time
end

#deployment_uri ⇒ String

Required. The Cloud Storage location of the trained model used to create the version. See the guide to model deployment for more information. When passing Version to projects.models.versions.create the model service uses the specified location as the source of the model. Once deployed, the model version is hosted by the prediction service, so this location is useful only as a historical record. The total number of model files can't exceed 1000. Corresponds to the JSON property deploymentUri

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1686

def deployment_uri
  @deployment_uri
end

#description ⇒ String

Optional. The description specified for the version when it was created. Corresponds to the JSON property description

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1691

def description
  @description
end

#error_message ⇒ String

Output only. The details of a failure or a cancellation. Corresponds to the JSON property errorMessage

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1696

def error_message
  @error_message
end

#etag ⇒ String

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates in order to avoid race conditions: An etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change will be applied to the model as intended. Corresponds to the JSON property etag NOTE: Values are automatically base64 encoded/decoded in the client library.

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1708

def etag
  @etag
end
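The read-modify-write cycle described above can be sketched in plain Ruby. This is a self-contained simulation, not the client library: a hypothetical in-memory `VersionStore` stands in for the GetVersion/UpdateVersion pair, and a fresh random token stands in for the server-issued etag.

```ruby
require 'securerandom'

# Simulates etag-based optimistic concurrency: an update is accepted only
# if the caller's etag matches the current one, so a writer holding a
# stale read cannot silently overwrite a newer change.
class VersionStore
  attr_reader :description, :etag

  def initialize
    @description = 'initial'
    @etag = SecureRandom.hex(8)
  end

  # Returns true and issues a new etag if the caller read the latest
  # state; returns false if another writer got there first.
  def update(description:, etag:)
    return false unless etag == @etag  # stale read: reject the write
    @description = description
    @etag = SecureRandom.hex(8)
    true
  end
end

store = VersionStore.new
stale = store.etag
store.update(description: 'writer A', etag: stale)  # => true
store.update(description: 'writer B', etag: stale)  # => false (etag is stale)
```

Writer B's rejection is the race-condition protection the field description refers to: B must re-read the version, obtain the new etag, and reapply its change.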

#framework ⇒ String

Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine. If you specify a Compute Engine (N1) machine type in the machineType field, you must specify TENSORFLOW for the framework. Corresponds to the JSON property framework

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1724

def framework
  @framework
end

#is_default ⇒ Boolean Also known as: is_default?

Output only. If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault. Corresponds to the JSON property isDefault

Returns:

  • (Boolean)


# File 'generated/google/apis/ml_v1/classes.rb', line 1733

def is_default
  @is_default
end

#labels ⇒ Hash<String,String>

Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'generated/google/apis/ml_v1/classes.rb', line 1743

def labels
  @labels
end

#last_use_time ⇒ String

Output only. The time the version was last used for prediction. Corresponds to the JSON property lastUseTime

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1748

def last_use_time
  @last_use_time
end

#machine_type ⇒ String

Optional. The type of machine on which to serve the model. Currently only applies to online prediction service. If this field is not specified, it defaults to mls1-c1-m2. Online prediction supports the following machine types:

  • mls1-c1-m2
  • mls1-c4-m2
  • n1-standard-2
  • n1-standard-4
  • n1-standard-8
  • n1-standard-16
  • n1-standard-32
  • n1-highmem-2
  • n1-highmem-4
  • n1-highmem-8
  • n1-highmem-16
  • n1-highmem-32
  • n1-highcpu-2
  • n1-highcpu-4
  • n1-highcpu-8
  • n1-highcpu-16
  • n1-highcpu-32

mls1-c1-m2 is generally available. All other machine types are available in beta. Learn more about the differences between machine types. Corresponds to the JSON property machineType

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1776

def machine_type
  @machine_type
end

#manual_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1ManualScaling

Options for manually scaling a model. Corresponds to the JSON property manualScaling



# File 'generated/google/apis/ml_v1/classes.rb', line 1781

def manual_scaling
  @manual_scaling
end

#name ⇒ String

Required. The name specified for the version when it was created. The version name must be unique within the model it is created in. Corresponds to the JSON property name

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1787

def name
  @name
end

#package_uris ⇒ Array<String>

Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies your Predictor or scikit-learn pipeline uses that are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater. Corresponds to the JSON property packageUris

Returns:

  • (Array<String>)


# File 'generated/google/apis/ml_v1/classes.rb', line 1803

def package_uris
  @package_uris
end

#prediction_class ⇒ String

Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. The following code sample provides the Predictor interface:

class Predictor(object):
  """Interface for constructing custom predictors."""

  def predict(self, instances, **kwargs):
    """Performs custom prediction.

    Instances are the decoded values from the request. They have already
    been deserialized from JSON.

    Args:
        instances: A list of prediction input instances.
        **kwargs: A dictionary of keyword args provided as additional
            fields on the predict request body.

    Returns:
        A list of outputs containing the prediction results. This list must
        be JSON serializable.
    """
    raise NotImplementedError()

  @classmethod
  def from_path(cls, model_dir):
    """Creates an instance of Predictor using the given path.

    Loading of the predictor should be done in this method.

    Args:
        model_dir: The local directory that contains the exported model
            file along with any additional files uploaded when creating the
            version resource.

    Returns:
        An instance implementing this Predictor class.
    """
    raise NotImplementedError()

Learn more about the Predictor interface and custom prediction routines. Corresponds to the JSON property predictionClass

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1850

def prediction_class
  @prediction_class
end

#python_version ⇒ String

Optional. The version of Python used in prediction. If not set, the default version is '2.7'. Python '3.5' is available when runtime_version is set to '1.4' and above. Python '2.7' works with all supported runtime versions. Corresponds to the JSON property pythonVersion

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1857

def python_version
  @python_version
end

#request_logging_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1RequestLoggingConfig

Configuration for logging request-response pairs to a BigQuery table. Online prediction requests to a model version and the responses to these requests are converted to raw strings and saved to the specified BigQuery table. Logging is constrained by BigQuery quotas and limits. If your project exceeds BigQuery quotas or limits, AI Platform Prediction does not log request-response pairs, but it continues to serve predictions. If you are using continuous evaluation, you do not need to specify this configuration manually. Setting up continuous evaluation automatically enables logging of request-response pairs. Corresponds to the JSON property requestLoggingConfig



# File 'generated/google/apis/ml_v1/classes.rb', line 1872

def request_logging_config
  @request_logging_config
end

#runtime_version ⇒ String

Optional. The AI Platform runtime version to use for this deployment. If not set, AI Platform uses the default stable version, 1.0. For more information, see the runtime version list and how to manage runtime versions. Corresponds to the JSON property runtimeVersion

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1881

def runtime_version
  @runtime_version
end

#service_account ⇒ String

Optional. Specifies the service account for resource access control. Corresponds to the JSON property serviceAccount

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1886

def service_account
  @service_account
end

#state ⇒ String

Output only. The state of a version. Corresponds to the JSON property state

Returns:

  • (String)


# File 'generated/google/apis/ml_v1/classes.rb', line 1891

def state
  @state
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/ml_v1/classes.rb', line 1898

def update!(**args)
  @accelerator_config = args[:accelerator_config] if args.key?(:accelerator_config)
  @auto_scaling = args[:auto_scaling] if args.key?(:auto_scaling)
  @create_time = args[:create_time] if args.key?(:create_time)
  @deployment_uri = args[:deployment_uri] if args.key?(:deployment_uri)
  @description = args[:description] if args.key?(:description)
  @error_message = args[:error_message] if args.key?(:error_message)
  @etag = args[:etag] if args.key?(:etag)
  @framework = args[:framework] if args.key?(:framework)
  @is_default = args[:is_default] if args.key?(:is_default)
  @labels = args[:labels] if args.key?(:labels)
  @last_use_time = args[:last_use_time] if args.key?(:last_use_time)
  @machine_type = args[:machine_type] if args.key?(:machine_type)
  @manual_scaling = args[:manual_scaling] if args.key?(:manual_scaling)
  @name = args[:name] if args.key?(:name)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @prediction_class = args[:prediction_class] if args.key?(:prediction_class)
  @python_version = args[:python_version] if args.key?(:python_version)
  @request_logging_config = args[:request_logging_config] if args.key?(:request_logging_config)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @service_account = args[:service_account] if args.key?(:service_account)
  @state = args[:state] if args.key?(:state)
end
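A detail worth noting: each assignment is guarded by args.key?, so omitting a key leaves the attribute untouched, while passing an explicit nil overwrites the stored value with nil. A minimal self-contained illustration of that distinction (a hypothetical `PatchSketch` class, not the generated one):

```ruby
# Illustrates the args.key? guard used by update!: an omitted key is
# left untouched, while an explicitly passed nil overwrites the value.
class PatchSketch
  attr_accessor :runtime_version, :python_version

  def update!(**args)
    @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
    @python_version  = args[:python_version]  if args.key?(:python_version)
  end
end

p = PatchSketch.new
p.update!(runtime_version: '1.4', python_version: '3.5')
p.update!(python_version: nil)  # key present: clears python_version only
p.runtime_version  # => "1.4" (key omitted, value preserved)
p.python_version   # => nil
```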