Class: Google::Apis::MlV1::GoogleCloudMlV1Version

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/ml_v1/classes.rb,
lib/google/apis/ml_v1/representations.rb

Overview

Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudMlV1Version

Returns a new instance of GoogleCloudMlV1Version.



# File 'lib/google/apis/ml_v1/classes.rb', line 3230

def initialize(**args)
   update!(**args)
end

Instance Attribute Details

#accelerator_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1AcceleratorConfig

Represents a hardware accelerator request config. Note that the AcceleratorConfig can be used in both Jobs and Versions. Learn more about accelerators for training and accelerators for online prediction. Corresponds to the JSON property acceleratorConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 2998

def accelerator_config
  @accelerator_config
end

#auto_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1AutoScaling

Options for automatically scaling a model. Corresponds to the JSON property autoScaling



# File 'lib/google/apis/ml_v1/classes.rb', line 3003

def auto_scaling
  @auto_scaling
end

#container ⇒ Google::Apis::MlV1::GoogleCloudMlV1ContainerSpec

Specification of a custom container for serving predictions. This message is a subset of the Kubernetes Container v1 core specification. Corresponds to the JSON property container



# File 'lib/google/apis/ml_v1/classes.rb', line 3010

def container
  @container
end

#create_time ⇒ String

Output only. The time the version was created. Corresponds to the JSON property createTime

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3015

def create_time
  @create_time
end

#deployment_uri ⇒ String

The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, then this field is optional. Otherwise, it is required. Learn how to use this field with a custom container. Corresponds to the JSON property deploymentUri

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3031

def deployment_uri
  @deployment_uri
end

#description ⇒ String

Optional. The description specified for the version when it was created. Corresponds to the JSON property description

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3036

def description
  @description
end

#error_message ⇒ String

Output only. The details of a failure or a cancellation. Corresponds to the JSON property errorMessage

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3041

def error_message
  @error_message
end

#etag ⇒ String

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates in order to avoid race conditions: An etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change will be applied to the model as intended. Corresponds to the JSON property etag NOTE: Values are automatically base64 encoded/decoded in the client library.

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3053

def etag
  @etag
end
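The read-modify-write cycle described above can be sketched with a tiny in-memory store (a hypothetical illustration, not the real AI Platform service; class and method names are invented for this example):

```ruby
require 'securerandom'

# Toy store illustrating etag-based optimistic concurrency control:
# every successful write issues a fresh etag, and a write carrying a
# stale etag is rejected instead of silently overwriting newer data.
class VersionStore
  def initialize(description)
    @description = description
    @etag = SecureRandom.hex(4)
  end

  # Analogous to GetVersion: returns the state along with its etag.
  def get
    { description: @description, etag: @etag }
  end

  # Analogous to UpdateVersion: applies the change only if the caller's
  # etag still matches the current one.
  def update(description, etag)
    raise 'etag mismatch: concurrent modification' unless etag == @etag
    @description = description
    @etag = SecureRandom.hex(4)
    true
  end
end
```

A client reads a snapshot, modifies it, and writes back with the snapshot's etag; if another writer got there first, the update raises rather than clobbering the newer state.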

#explanation_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ExplanationConfig

Message holding configuration options for explaining model predictions. There are three feature attribution methods supported for TensorFlow models: integrated gradients, sampled Shapley, and XRAI. Learn more about feature attributions. Corresponds to the JSON property explanationConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 3061

def explanation_config
  @explanation_config
end

#framework ⇒ String

Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine or if you're using a custom container. Corresponds to the JSON property framework

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3073

def framework
  @framework
end
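The documented constraints can be sketched as a small helper (hypothetical, not part of the client library; assumes runtime version strings compare like Gem versions):

```ruby
require 'rubygems' # for Gem::Version

# Valid values documented for #framework.
VALID_FRAMEWORKS = %w[TENSORFLOW SCIKIT_LEARN XGBOOST].freeze

# Hypothetical check mirroring the documented rule: SCIKIT_LEARN and
# XGBOOST additionally require a runtime version of 1.4 or greater.
def framework_allowed?(framework, runtime_version)
  return false unless VALID_FRAMEWORKS.include?(framework)
  return true if framework == 'TENSORFLOW'
  Gem::Version.new(runtime_version) >= Gem::Version.new('1.4')
end
```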

#is_default ⇒ Boolean
Also known as: is_default?

Output only. If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault. Corresponds to the JSON property isDefault

Returns:

  • (Boolean)


# File 'lib/google/apis/ml_v1/classes.rb', line 3080

def is_default
  @is_default
end

#labels ⇒ Hash<String,String>

Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'lib/google/apis/ml_v1/classes.rb', line 3089

def labels
  @labels
end

#last_migration_model_id ⇒ String

Output only. The AI Platform (Unified) Model ID for the last model migration. Corresponds to the JSON property lastMigrationModelId

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3097

def last_migration_model_id
  @last_migration_model_id
end

#last_migration_time ⇒ String

Output only. The last time this version was successfully migrated to AI Platform (Unified). Corresponds to the JSON property lastMigrationTime

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3104

def last_migration_time
  @last_migration_time
end

#last_use_time ⇒ String

Output only. The time the version was last used for prediction. Corresponds to the JSON property lastUseTime

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3109

def last_use_time
  @last_use_time
end

#machine_type ⇒ String

Optional. The type of machine on which to serve the model. Currently only applies to online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, then the machine type defaults to n1-standard-2. If this field is not specified and you are using the global endpoint (ml.googleapis.com), then the machine type defaults to mls1-c1-m2. Corresponds to the JSON property machineType

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3121

def machine_type
  @machine_type
end
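The endpoint-dependent defaults described above can be expressed as a tiny helper (hypothetical, for illustration only; the bare host-string comparison is an assumption of this sketch):

```ruby
# Hypothetical helper: returns the documented default machine type for a
# version when none is specified. The global endpoint (ml.googleapis.com)
# defaults to mls1-c1-m2; regional endpoints default to n1-standard-2.
def default_machine_type(endpoint_host)
  endpoint_host == 'ml.googleapis.com' ? 'mls1-c1-m2' : 'n1-standard-2'
end
```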

#manual_scaling ⇒ Google::Apis::MlV1::GoogleCloudMlV1ManualScaling

Options for manually scaling a model. Corresponds to the JSON property manualScaling



# File 'lib/google/apis/ml_v1/classes.rb', line 3126

def manual_scaling
  @manual_scaling
end

#name ⇒ String

Required. The name specified for the version when it was created. The version name must be unique within the model it is created in. Corresponds to the JSON property name

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3132

def name
  @name
end

#package_uris ⇒ Array<String>

Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies your Predictor or scikit-learn pipeline uses that are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater. Corresponds to the JSON property packageUris

Returns:

  • (Array<String>)


# File 'lib/google/apis/ml_v1/classes.rb', line 3146

def package_uris
  @package_uris
end

#prediction_class ⇒ String

Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. The following code sample provides the Predictor interface:

class Predictor(object):
    """Interface for constructing custom predictors."""

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have
        already been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list
            must be JSON serializable.
        """
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating
                the version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        raise NotImplementedError()

Learn more about the Predictor interface and custom prediction routines. Corresponds to the JSON property predictionClass

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3174

def prediction_class
  @prediction_class
end

#python_version ⇒ String

Required. The version of Python used in prediction. The following Python versions are available:
  • Python '3.7' is available when runtime_version is set to '1.15' or later.
  • Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'.
  • Python '2.7' is available when runtime_version is set to '1.15' or earlier.
Read more about the Python versions available for each runtime version. Corresponds to the JSON property pythonVersion

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3185

def python_version
  @python_version
end
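The compatibility rules above can be sketched as a hypothetical helper (not part of the client library; assumes runtime version strings compare like Gem versions):

```ruby
require 'rubygems' # for Gem::Version

# Hypothetical helper mirroring the documented rules: returns the Python
# versions available for a given runtime_version string.
def available_python_versions(runtime_version)
  rv = Gem::Version.new(runtime_version)
  versions = []
  versions << '3.7' if rv >= Gem::Version.new('1.15')
  versions << '3.5' if rv >= Gem::Version.new('1.4') && rv <= Gem::Version.new('1.14')
  versions << '2.7' if rv <= Gem::Version.new('1.15')
  versions
end
```

Note that runtime version '1.15' is the crossover point where both '3.7' and '2.7' are available.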

#request_logging_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1RequestLoggingConfig

Configuration for logging request-response pairs to a BigQuery table. Online prediction requests to a model version and the responses to these requests are converted to raw strings and saved to the specified BigQuery table. Logging is constrained by BigQuery quotas and limits. If your project exceeds BigQuery quotas or limits, AI Platform Prediction does not log request-response pairs, but it continues to serve predictions. If you are using continuous evaluation, you do not need to specify this configuration manually. Setting up continuous evaluation automatically enables logging of request-response pairs. Corresponds to the JSON property requestLoggingConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 3198

def request_logging_config
  @request_logging_config
end

#routes ⇒ Google::Apis::MlV1::GoogleCloudMlV1RouteMap

Specifies HTTP paths served by a custom container. AI Platform Prediction sends requests to these paths on the container; the custom container must run an HTTP server that responds to these requests with appropriate responses. Read Custom container requirements for details on how to create your container image to meet these requirements. Corresponds to the JSON property routes



# File 'lib/google/apis/ml_v1/classes.rb', line 3208

def routes
  @routes
end

#runtime_version ⇒ String

Required. The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions. Corresponds to the JSON property runtimeVersion

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3215

def runtime_version
  @runtime_version
end

#service_account ⇒ String

Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the containerSpec or the predictionClass field. Learn more about using a custom service account. Corresponds to the JSON property serviceAccount

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3223

def service_account
  @service_account
end

#state ⇒ String

Output only. The state of a version. Corresponds to the JSON property state

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 3228

def state
  @state
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/ml_v1/classes.rb', line 3235

def update!(**args)
  @accelerator_config = args[:accelerator_config] if args.key?(:accelerator_config)
  @auto_scaling = args[:auto_scaling] if args.key?(:auto_scaling)
  @container = args[:container] if args.key?(:container)
  @create_time = args[:create_time] if args.key?(:create_time)
  @deployment_uri = args[:deployment_uri] if args.key?(:deployment_uri)
  @description = args[:description] if args.key?(:description)
  @error_message = args[:error_message] if args.key?(:error_message)
  @etag = args[:etag] if args.key?(:etag)
  @explanation_config = args[:explanation_config] if args.key?(:explanation_config)
  @framework = args[:framework] if args.key?(:framework)
  @is_default = args[:is_default] if args.key?(:is_default)
  @labels = args[:labels] if args.key?(:labels)
  @last_migration_model_id = args[:last_migration_model_id] if args.key?(:last_migration_model_id)
  @last_migration_time = args[:last_migration_time] if args.key?(:last_migration_time)
  @last_use_time = args[:last_use_time] if args.key?(:last_use_time)
  @machine_type = args[:machine_type] if args.key?(:machine_type)
  @manual_scaling = args[:manual_scaling] if args.key?(:manual_scaling)
  @name = args[:name] if args.key?(:name)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @prediction_class = args[:prediction_class] if args.key?(:prediction_class)
  @python_version = args[:python_version] if args.key?(:python_version)
  @request_logging_config = args[:request_logging_config] if args.key?(:request_logging_config)
  @routes = args[:routes] if args.key?(:routes)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @service_account = args[:service_account] if args.key?(:service_account)
  @state = args[:state] if args.key?(:state)
end
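The args.key? guards above mean that only keys actually passed are assigned: an explicit nil clears an attribute, while an omitted key leaves it untouched. A minimal self-contained sketch of the same pattern (the class and attributes here are invented for illustration, not part of the client library):

```ruby
# Mirrors the constructor/update! pattern used by GoogleCloudMlV1Version:
# initialize forwards keyword arguments to update!, and update! assigns
# only the attributes whose keys are present in args.
class SketchVersion
  attr_accessor :name, :description

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @name = args[:name] if args.key?(:name)
    @description = args[:description] if args.key?(:description)
  end
end

v = SketchVersion.new(name: 'v1', description: 'first')
v.update!(description: nil) # clears description; name is left untouched
```

This distinction matters when patching an object: omitting a field preserves its current value, while passing nil explicitly overwrites it.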