Class: Google::Apis::MlV1::GoogleCloudMlV1TrainingInput
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - generated/google/apis/ml_v1/classes.rb
  - generated/google/apis/ml_v1/representations.rb
Overview
Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.
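For orientation, here is a minimal sketch of building this object directly in Ruby through the documented keyword-argument constructor; the bucket paths, module name, and region below are illustrative placeholders, not values taken from this reference.

require 'google/apis/ml_v1'

# Minimal sketch: a BASIC-tier training input built from the documented attributes.
# All concrete values here are placeholders for illustration only.
training_input = Google::Apis::MlV1::GoogleCloudMlV1TrainingInput.new(
  package_uris:  ['gs://example-bucket/trainer-0.1.tar.gz'], # placeholder package URI
  python_module: 'trainer.task',                             # placeholder module name
  region:        'us-central1',                              # placeholder region
  scale_tier:    'BASIC',
  job_dir:       'gs://example-bucket/output'                # placeholder output path
)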
Instance Attribute Summary
- #args ⇒ Array<String>: Optional.
- #hyperparameters ⇒ Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec: Represents a set of hyperparameters to optimize.
- #job_dir ⇒ String: Optional.
- #master_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig: Represents the configuration for a replica in a cluster.
- #master_type ⇒ String: Optional.
- #max_running_time ⇒ String: Optional.
- #package_uris ⇒ Array<String>: Required.
- #parameter_server_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig: Represents the configuration for a replica in a cluster.
- #parameter_server_count ⇒ Fixnum: Optional.
- #parameter_server_type ⇒ String: Optional.
- #python_module ⇒ String: Required.
- #python_version ⇒ String: Optional.
- #region ⇒ String: Required.
- #runtime_version ⇒ String: Optional.
- #scale_tier ⇒ String: Required.
- #worker_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig: Represents the configuration for a replica in a cluster.
- #worker_count ⇒ Fixnum: Optional.
- #worker_type ⇒ String: Optional.
Instance Method Summary
- #initialize(**args) ⇒ GoogleCloudMlV1TrainingInput (constructor): A new instance of GoogleCloudMlV1TrainingInput.
- #update!(**args) ⇒ Object: Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ GoogleCloudMlV1TrainingInput
Returns a new instance of GoogleCloudMlV1TrainingInput
# File 'generated/google/apis/ml_v1/classes.rb', line 1464

def initialize(**args)
  update!(**args)
end
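As the source above shows, the constructor simply forwards its keyword arguments to #update!, so any of the snake_case attributes documented below can be set at construction time. A small hedged example (values are placeholders):

input = Google::Apis::MlV1::GoogleCloudMlV1TrainingInput.new(
  python_module: 'trainer.task',                        # placeholder module name
  package_uris:  ['gs://example-bucket/trainer.tar.gz'] # placeholder package URI
)
input.python_module # => "trainer.task"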
Instance Attribute Details
#args ⇒ Array<String>
Optional. Command line arguments to pass to the program.
Corresponds to the JSON property args
# File 'generated/google/apis/ml_v1/classes.rb', line 1226

def args
  @args
end
#hyperparameters ⇒ Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec
Represents a set of hyperparameters to optimize.
Corresponds to the JSON property hyperparameters
# File 'generated/google/apis/ml_v1/classes.rb', line 1231

def hyperparameters
  @hyperparameters
end
#job_dir ⇒ String
Optional. A Google Cloud Storage path in which to store training outputs
and other data needed for training. This path is passed to your TensorFlow
program as the '--job-dir' command-line argument. The benefit of specifying
this field is that Cloud ML validates the path for use in training.
Corresponds to the JSON property jobDir
# File 'generated/google/apis/ml_v1/classes.rb', line 1239

def job_dir
  @job_dir
end
#master_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
Corresponds to the JSON property masterConfig
# File 'generated/google/apis/ml_v1/classes.rb', line 1244

def master_config
  @master_config
end
#master_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training job's master worker. The following types are supported:
- standard: A basic machine configuration suitable for training simple models with small to moderate datasets.
- large_model: A machine with a lot of memory, specially suited for parameter servers when your model is large (having many hidden layers or layers with very large numbers of nodes).
- complex_model_s: A machine suitable for the master and workers of the cluster when your model requires more computation than the standard machine can handle satisfactorily.
- complex_model_m: A machine with roughly twice the number of cores and roughly double the memory of complex_model_s.
- complex_model_l: A machine with roughly twice the number of cores and roughly double the memory of complex_model_m.
- standard_gpu: A machine equivalent to standard that also includes a single NVIDIA Tesla K80 GPU. See more about using GPUs to train your model.
- complex_model_m_gpu: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla K80 GPUs.
- complex_model_l_gpu: A machine equivalent to complex_model_l that also includes eight NVIDIA Tesla K80 GPUs.
- standard_p100: A machine equivalent to standard that also includes a single NVIDIA Tesla P100 GPU.
- complex_model_m_p100: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla P100 GPUs.
- standard_v100: A machine equivalent to standard that also includes a single NVIDIA Tesla V100 GPU.
- large_model_v100: A machine equivalent to large_model that also includes a single NVIDIA Tesla V100 GPU.
- complex_model_m_v100: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla V100 GPUs.
- complex_model_l_v100: A machine equivalent to complex_model_l that also includes eight NVIDIA Tesla V100 GPUs.
- cloud_tpu: A TPU VM including one Cloud TPU. See more about using TPUs to train your model.
You may also use certain Compute Engine machine types directly in this field. The following types are supported:
- n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96
- n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96
- n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96
See more about using Compute Engine machine types. You must set this value when scaleTier is set to CUSTOM.
Corresponds to the JSON property masterType
# File 'generated/google/apis/ml_v1/classes.rb', line 1355

def master_type
  @master_type
end
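Because masterType only takes effect (and is required) when scaleTier is CUSTOM, a short hedged sketch of a custom-tier input using one of the Compute Engine machine types listed above may help; all other values are placeholders.

# Hedged sketch: CUSTOM tier with an explicit master machine type.
custom_input = Google::Apis::MlV1::GoogleCloudMlV1TrainingInput.new(
  scale_tier:    'CUSTOM',        # master_type must be set when the tier is CUSTOM
  master_type:   'n1-highmem-8',  # one of the Compute Engine types listed above
  package_uris:  ['gs://example-bucket/trainer-0.1.tar.gz'], # placeholder
  python_module: 'trainer.task',                             # placeholder
  region:        'us-central1'                               # placeholder
)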
#max_running_time ⇒ String
Optional. The maximum job running time. The default is 7 days.
Corresponds to the JSON property maxRunningTime
# File 'generated/google/apis/ml_v1/classes.rb', line 1360

def max_running_time
  @max_running_time
end
#package_uris ⇒ Array<String>
Required. The Google Cloud Storage location of the packages with
the training program and any additional dependencies.
The maximum number of package URIs is 100.
Corresponds to the JSON property packageUris
# File 'generated/google/apis/ml_v1/classes.rb', line 1367

def package_uris
  @package_uris
end
#parameter_server_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
Corresponds to the JSON property parameterServerConfig
# File 'generated/google/apis/ml_v1/classes.rb', line 1372

def parameter_server_config
  @parameter_server_config
end
#parameter_server_count ⇒ Fixnum
Optional. The number of parameter server replicas to use for the training
job. Each replica in the cluster will be of the type specified in
parameter_server_type.
This value can only be used when scale_tier is set to CUSTOM. If you
set this value, you must also set parameter_server_type.
The default value is zero.
Corresponds to the JSON property parameterServerCount
# File 'generated/google/apis/ml_v1/classes.rb', line 1382

def parameter_server_count
  @parameter_server_count
end
#parameter_server_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training
job's parameter server.
The supported values are the same as those described in the entry for
master_type.
This value must be consistent with the category of machine type that
masterType uses. In other words, both must be AI Platform machine
types or both must be Compute Engine machine types.
This value must be present when scaleTier is set to CUSTOM and
parameter_server_count is greater than zero.
Corresponds to the JSON property parameterServerType
# File 'generated/google/apis/ml_v1/classes.rb', line 1395

def parameter_server_type
  @parameter_server_type
end
#python_module ⇒ String
Required. The Python module name to run after installing the packages.
Corresponds to the JSON property pythonModule
# File 'generated/google/apis/ml_v1/classes.rb', line 1400

def python_module
  @python_module
end
#python_version ⇒ String
Optional. The version of Python used in training. If not set, the default
version is '2.7'. Python '3.5' is available when runtime_version is set
to '1.4' and above. Python '2.7' works with all supported
runtime versions.
Corresponds to the JSON property pythonVersion
# File 'generated/google/apis/ml_v1/classes.rb', line 1408

def python_version
  @python_version
end
#region ⇒ String
Required. The Google Compute Engine region to run the training job in.
See the available regions
for AI Platform services.
Corresponds to the JSON property region
# File 'generated/google/apis/ml_v1/classes.rb', line 1415

def region
  @region
end
#runtime_version ⇒ String
Optional. The AI Platform runtime version to use for training. If not
set, AI Platform uses the default stable version, 1.0. For more
information, see the
runtime version list
and
how to manage runtime versions.
Corresponds to the JSON property runtimeVersion
# File 'generated/google/apis/ml_v1/classes.rb', line 1425

def runtime_version
  @runtime_version
end
#scale_tier ⇒ String
Required. Specifies the machine types, the number of replicas for workers
and parameter servers.
Corresponds to the JSON property scaleTier
# File 'generated/google/apis/ml_v1/classes.rb', line 1431

def scale_tier
  @scale_tier
end
#worker_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
Corresponds to the JSON property workerConfig
# File 'generated/google/apis/ml_v1/classes.rb', line 1436

def worker_config
  @worker_config
end
#worker_count ⇒ Fixnum
Optional. The number of worker replicas to use for the training job. Each
replica in the cluster will be of the type specified in worker_type.
This value can only be used when scale_tier is set to CUSTOM. If you
set this value, you must also set worker_type.
The default value is zero.
Corresponds to the JSON property workerCount
# File 'generated/google/apis/ml_v1/classes.rb', line 1445

def worker_count
  @worker_count
end
#worker_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training
job's worker nodes.
The supported values are the same as those described in the entry for
masterType.
This value must be consistent with the category of machine type that
masterType uses. In other words, both must be AI Platform machine
types or both must be Compute Engine machine types.
If you use cloud_tpu for this value, see special instructions for
configuring a custom TPU
machine.
This value must be present when scaleTier is set to CUSTOM and
workerCount is greater than zero.
Corresponds to the JSON property workerType
# File 'generated/google/apis/ml_v1/classes.rb', line 1462

def worker_type
  @worker_type
end
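Putting the worker and parameter-server constraints together, the hedged sketch below configures a CUSTOM-tier cluster in which all machine types come from the same (AI Platform) category and the type fields are set because the corresponding counts are greater than zero; every concrete value is a placeholder.

# Hedged sketch: a CUSTOM-tier cluster with workers and parameter servers.
cluster_input = Google::Apis::MlV1::GoogleCloudMlV1TrainingInput.new(
  scale_tier:             'CUSTOM',
  master_type:            'complex_model_m', # AI Platform machine type
  worker_type:            'complex_model_m', # same category as master_type
  worker_count:           4,                 # worker_type required because this is > 0
  parameter_server_type:  'large_model',     # same category as master_type
  parameter_server_count: 2,                 # parameter_server_type required because this is > 0
  package_uris:  ['gs://example-bucket/trainer-0.1.tar.gz'], # placeholder
  python_module: 'trainer.task',                             # placeholder
  region:        'us-central1'                               # placeholder
)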
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'generated/google/apis/ml_v1/classes.rb', line 1469

def update!(**args)
  @args = args[:args] if args.key?(:args)
  @hyperparameters = args[:hyperparameters] if args.key?(:hyperparameters)
  @job_dir = args[:job_dir] if args.key?(:job_dir)
  @master_config = args[:master_config] if args.key?(:master_config)
  @master_type = args[:master_type] if args.key?(:master_type)
  @max_running_time = args[:max_running_time] if args.key?(:max_running_time)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @parameter_server_config = args[:parameter_server_config] if args.key?(:parameter_server_config)
  @parameter_server_count = args[:parameter_server_count] if args.key?(:parameter_server_count)
  @parameter_server_type = args[:parameter_server_type] if args.key?(:parameter_server_type)
  @python_module = args[:python_module] if args.key?(:python_module)
  @python_version = args[:python_version] if args.key?(:python_version)
  @region = args[:region] if args.key?(:region)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @scale_tier = args[:scale_tier] if args.key?(:scale_tier)
  @worker_config = args[:worker_config] if args.key?(:worker_config)
  @worker_count = args[:worker_count] if args.key?(:worker_count)
  @worker_type = args[:worker_type] if args.key?(:worker_type)
end
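Since #update! assigns only the keys that are present in its arguments, it can be used to adjust an existing instance without touching other attributes. A short hedged example with placeholder values:

input = Google::Apis::MlV1::GoogleCloudMlV1TrainingInput.new(region: 'us-central1') # placeholder region
input.update!(runtime_version: '1.4', python_version: '3.5')
input.region          # => "us-central1" (unchanged; key not passed)
input.runtime_version # => "1.4"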