Class: Google::Apis::MlV1::GoogleCloudMlV1TrainingInput
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  generated/google/apis/ml_v1/classes.rb,
  generated/google/apis/ml_v1/representations.rb
Overview
Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.
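The same inputs can also be assembled programmatically with the generated Ruby client instead of gcloud flags or a YAML file. A minimal, illustrative sketch follows; the bucket paths, module name, region, and the 'BASIC' tier value are placeholder assumptions, not values taken from this page.

require 'google/apis/ml_v1'

# Required fields per the attribute documentation below, plus two optional ones.
training_input = Google::Apis::MlV1::GoogleCloudMlV1TrainingInput.new(
  package_uris:  ['gs://my-bucket/packages/trainer-0.1.tar.gz'], # Required
  python_module: 'trainer.task',                                 # Required
  region:        'us-central1',                                  # Required (placeholder region)
  scale_tier:    'BASIC',                                        # Required (assumed tier value)
  job_dir:       'gs://my-bucket/training-output',               # Optional
  args:          ['--epochs', '10']                              # Optional
)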
Instance Attribute Summary
- #args ⇒ Array<String>
  Optional.
- #hyperparameters ⇒ Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec
  Represents a set of hyperparameters to optimize.
- #job_dir ⇒ String
  Optional.
- #master_type ⇒ String
  Optional.
- #package_uris ⇒ Array<String>
  Required.
- #parameter_server_count ⇒ Fixnum
  Optional.
- #parameter_server_type ⇒ String
  Optional.
- #python_module ⇒ String
  Required.
- #python_version ⇒ String
  Optional.
- #region ⇒ String
  Required.
- #runtime_version ⇒ String
  Optional.
- #scale_tier ⇒ String
  Required.
- #worker_count ⇒ Fixnum
  Optional.
- #worker_type ⇒ String
  Optional.
Instance Method Summary
- #initialize(**args) ⇒ GoogleCloudMlV1TrainingInput (constructor)
  A new instance of GoogleCloudMlV1TrainingInput.
- #update!(**args) ⇒ Object
  Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ GoogleCloudMlV1TrainingInput
Returns a new instance of GoogleCloudMlV1TrainingInput.
# File 'generated/google/apis/ml_v1/classes.rb', line 1170

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#args ⇒ Array<String>
Optional. Command line arguments to pass to the program.
Corresponds to the JSON property args
# File 'generated/google/apis/ml_v1/classes.rb', line 1003

def args
  @args
end
#hyperparameters ⇒ Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec
Represents a set of hyperparameters to optimize.
Corresponds to the JSON property hyperparameters
# File 'generated/google/apis/ml_v1/classes.rb', line 1008

def hyperparameters
  @hyperparameters
end
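If the job should run a hyperparameter tuning study, a GoogleCloudMlV1HyperparameterSpec can be attached here. A brief sketch, continuing the training_input from the overview; the spec is left empty because its own fields are documented on that class, not on this page.

# Attach a (here empty) hyperparameter spec; populate it per the
# GoogleCloudMlV1HyperparameterSpec documentation.
training_input.hyperparameters =
  Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec.new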
#job_dir ⇒ String
Optional. A Google Cloud Storage path in which to store training outputs
and other data needed for training. This path is passed to your TensorFlow
program as the '--job-dir' command-line argument. The benefit of specifying
this field is that Cloud ML validates the path for use in training.
Corresponds to the JSON property jobDir
# File 'generated/google/apis/ml_v1/classes.rb', line 1016

def job_dir
  @job_dir
end
#master_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training job's master worker. The following types are supported:
- standard: A basic machine configuration suitable for training simple models with small to moderate datasets.
- large_model: A machine with a lot of memory, specially suited for parameter servers when your model is large (having many hidden layers or layers with very large numbers of nodes).
- complex_model_s: A machine suitable for the master and workers of the cluster when your model requires more computation than the standard machine can handle satisfactorily.
- complex_model_m: A machine with roughly twice the number of cores and roughly double the memory of complex_model_s.
- complex_model_l: A machine with roughly twice the number of cores and roughly double the memory of complex_model_m.
- standard_gpu: A machine equivalent to standard that also includes a single NVIDIA Tesla K80 GPU. See more about using GPUs to train your model.
- complex_model_m_gpu: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla K80 GPUs.
- complex_model_l_gpu: A machine equivalent to complex_model_l that also includes eight NVIDIA Tesla K80 GPUs.
- standard_p100: A machine equivalent to standard that also includes a single NVIDIA Tesla P100 GPU. The availability of these GPUs is in the Beta launch stage.
- complex_model_m_p100: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla P100 GPUs. The availability of these GPUs is in the Beta launch stage.
- standard_tpu: A TPU VM including one Cloud TPU. The availability of Cloud TPU is in the Beta launch stage. See more about using TPUs to train your model.
You must set this value when scaleTier is set to CUSTOM.
Corresponds to the JSON property masterType
# File 'generated/google/apis/ml_v1/classes.rb', line 1092

def master_type
  @master_type
end
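As noted above, this field is only honored (and required) when the scale tier is CUSTOM. A short sketch, continuing the training_input from the overview; the chosen machine type is one of the values listed above and is purely illustrative.

# master_type only applies when scale_tier is 'CUSTOM'.
training_input.scale_tier  = 'CUSTOM'
training_input.master_type = 'complex_model_m'  # any of the types listed above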
#package_uris ⇒ Array<String>
Required. The Google Cloud Storage location of the packages with
the training program and any additional dependencies.
The maximum number of package URIs is 100.
Corresponds to the JSON property packageUris
# File 'generated/google/apis/ml_v1/classes.rb', line 1099

def package_uris
  @package_uris
end
#parameter_server_count ⇒ Fixnum
Optional. The number of parameter server replicas to use for the training
job. Each replica in the cluster will be of the type specified in
parameter_server_type.
This value can only be used when scale_tier is set to CUSTOM. If you
set this value, you must also set parameter_server_type.
Corresponds to the JSON property parameterServerCount
# File 'generated/google/apis/ml_v1/classes.rb', line 1108

def parameter_server_count
  @parameter_server_count
end
#parameter_server_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training
job's parameter server.
The supported values are the same as those described in the entry for
master_type.
This value must be present when scaleTier is set to CUSTOM and
parameter_server_count is greater than zero.
Corresponds to the JSON property parameterServerType
# File 'generated/google/apis/ml_v1/classes.rb', line 1118

def parameter_server_type
  @parameter_server_type
end
#python_module ⇒ String
Required. The Python module name to run after installing the packages.
Corresponds to the JSON property pythonModule
# File 'generated/google/apis/ml_v1/classes.rb', line 1123

def python_module
  @python_module
end
#python_version ⇒ String
Optional. The version of Python used in training. If not set, the default
version is '2.7'. Python '3.5' is available when runtime_version is set
to '1.4' and above. Python '2.7' works with all supported runtime versions.
Corresponds to the JSON property pythonVersion
# File 'generated/google/apis/ml_v1/classes.rb', line 1130

def python_version
  @python_version
end
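A brief sketch of pairing the two version fields as described above, continuing the training_input from the overview; the version strings come from the description, not from a live compatibility check.

# Per the description, Python '3.5' needs runtime_version '1.4' or above;
# Python '2.7' (the default) works with all supported runtime versions.
training_input.runtime_version = '1.4'
training_input.python_version  = '3.5'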
#region ⇒ String
Required. The Google Compute Engine region to run the training job in.
See the available regions
for ML Engine services.
Corresponds to the JSON property region
# File 'generated/google/apis/ml_v1/classes.rb', line 1137

def region
  @region
end
#runtime_version ⇒ String
Optional. The Google Cloud ML runtime version to use for training. If not
set, Google Cloud ML will choose a stable version, which is defined in the
documentation of the runtime version list.
Corresponds to the JSON property runtimeVersion
# File 'generated/google/apis/ml_v1/classes.rb', line 1144

def runtime_version
  @runtime_version
end
#scale_tier ⇒ String
Required. Specifies the machine types, the number of replicas for workers
and parameter servers.
Corresponds to the JSON property scaleTier
# File 'generated/google/apis/ml_v1/classes.rb', line 1150

def scale_tier
  @scale_tier
end
#worker_count ⇒ Fixnum
Optional. The number of worker replicas to use for the training job. Each
replica in the cluster will be of the type specified in worker_type.
This value can only be used when scale_tier is set to CUSTOM. If you
set this value, you must also set worker_type.
Corresponds to the JSON property workerCount
# File 'generated/google/apis/ml_v1/classes.rb', line 1158

def worker_count
  @worker_count
end
#worker_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training
job's worker nodes.
The supported values are the same as those described in the entry for
masterType.
This value must be present when scaleTier is set to CUSTOM and
workerCount is greater than zero.
Corresponds to the JSON property workerType
# File 'generated/google/apis/ml_v1/classes.rb', line 1168

def worker_type
  @worker_type
end
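Putting the worker and parameter-server fields together, a CUSTOM-tier cluster might be configured as in the sketch below, continuing the training_input from the overview; the counts and machine types are illustrative placeholders.

# With a CUSTOM tier, master_type must be set, and worker/parameter-server
# types must accompany their non-zero counts, as described above.
training_input.scale_tier             = 'CUSTOM'
training_input.master_type            = 'complex_model_m'
training_input.worker_count           = 4
training_input.worker_type            = 'standard_gpu'
training_input.parameter_server_count = 2
training_input.parameter_server_type  = 'large_model'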
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'generated/google/apis/ml_v1/classes.rb', line 1175

def update!(**args)
  @args = args[:args] if args.key?(:args)
  @hyperparameters = args[:hyperparameters] if args.key?(:hyperparameters)
  @job_dir = args[:job_dir] if args.key?(:job_dir)
  @master_type = args[:master_type] if args.key?(:master_type)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @parameter_server_count = args[:parameter_server_count] if args.key?(:parameter_server_count)
  @parameter_server_type = args[:parameter_server_type] if args.key?(:parameter_server_type)
  @python_module = args[:python_module] if args.key?(:python_module)
  @python_version = args[:python_version] if args.key?(:python_version)
  @region = args[:region] if args.key?(:region)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @scale_tier = args[:scale_tier] if args.key?(:scale_tier)
  @worker_count = args[:worker_count] if args.key?(:worker_count)
  @worker_type = args[:worker_type] if args.key?(:worker_type)
end
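As the generated source shows, update! only overwrites the attributes whose keys are present in args, so it can be used for partial updates. A brief usage sketch, continuing the training_input from the overview; the region and runtime version values are illustrative.

# Only the keys passed to update! are changed; other attributes keep their values.
training_input.update!(region: 'europe-west1', runtime_version: '1.4')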