Class: Google::Apis::MlV1::GoogleCloudMlV1TrainingInput
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  lib/google/apis/ml_v1/classes.rb,
  lib/google/apis/ml_v1/representations.rb
Overview
Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.
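As a rough illustration of the fields summarized below, here is a minimal sketch that builds the equivalent JSON payload as a plain Ruby hash, using the camelCase JSON property names documented on this page. The bucket path and module name are placeholder values, not real artifacts:

```ruby
require 'json'

# Hypothetical training-input payload. Required fields per this page:
# scaleTier, region, packageUris, pythonModule.
training_input = {
  'scaleTier'      => 'BASIC',                               # required
  'region'         => 'us-central1',                         # required
  'packageUris'    => ['gs://my-bucket/trainer-0.1.tar.gz'], # required, max 100 URIs
  'pythonModule'   => 'trainer.task',                        # required
  'runtimeVersion' => '2.11',                                # optional
  'args'           => ['--epochs', '10']                     # optional, passed to the trainer
}

puts JSON.pretty_generate(training_input)
```

The same keys appear below as snake_case Ruby attributes (for example, `packageUris` is exposed as `#package_uris`).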
Instance Attribute Summary collapse
-
#args ⇒ Array<String>
Optional.
-
#encryption_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1EncryptionConfig
Represents a custom encryption key configuration that can be applied to a resource.
-
#evaluator_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
-
#evaluator_count ⇒ Fixnum
Optional.
-
#evaluator_type ⇒ String
Optional.
-
#hyperparameters ⇒ Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec
Represents a set of hyperparameters to optimize.
-
#job_dir ⇒ String
Optional.
-
#master_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
-
#master_type ⇒ String
Optional.
-
#network ⇒ String
Optional.
-
#package_uris ⇒ Array<String>
Required.
-
#parameter_server_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
-
#parameter_server_count ⇒ Fixnum
Optional.
-
#parameter_server_type ⇒ String
Optional.
-
#python_module ⇒ String
Required.
-
#python_version ⇒ String
Optional.
-
#region ⇒ String
Required.
-
#runtime_version ⇒ String
Optional.
-
#scale_tier ⇒ String
Required.
-
#scheduling ⇒ Google::Apis::MlV1::GoogleCloudMlV1Scheduling
All parameters related to scheduling of training jobs.
-
#service_account ⇒ String
Optional.
-
#use_chief_in_tf_config ⇒ Boolean
(also: #use_chief_in_tf_config?)
Optional.
-
#worker_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
-
#worker_count ⇒ Fixnum
Optional.
-
#worker_type ⇒ String
Optional.
Instance Method Summary collapse
-
#initialize(**args) ⇒ GoogleCloudMlV1TrainingInput
constructor
A new instance of GoogleCloudMlV1TrainingInput.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudMlV1TrainingInput
Returns a new instance of GoogleCloudMlV1TrainingInput.
# File 'lib/google/apis/ml_v1/classes.rb', line 2804

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#args ⇒ Array<String>
Optional. Command-line arguments passed to the training application when it
starts. If your job uses a custom container, then the arguments are passed to
the container's ENTRYPOINT command.
Corresponds to the JSON property args
# File 'lib/google/apis/ml_v1/classes.rb', line 2606

def args
  @args
end
#encryption_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1EncryptionConfig
Represents a custom encryption key configuration that can be applied to a
resource.
Corresponds to the JSON property encryptionConfig
# File 'lib/google/apis/ml_v1/classes.rb', line 2612

def encryption_config
  @encryption_config
end
#evaluator_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
Corresponds to the JSON property evaluatorConfig
# File 'lib/google/apis/ml_v1/classes.rb', line 2617

def evaluator_config
  @evaluator_config
end
#evaluator_count ⇒ Fixnum
Optional. The number of evaluator replicas to use for the training job. Each
replica in the cluster will be of the type specified in evaluator_type. This
value can only be used when scale_tier is set to CUSTOM. If you set this
value, you must also set evaluator_type. The default value is zero.
Corresponds to the JSON property evaluatorCount
# File 'lib/google/apis/ml_v1/classes.rb', line 2625

def evaluator_count
  @evaluator_count
end
#evaluator_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training job's
evaluator nodes. The supported values are the same as those described in the
entry for masterType. This value must be consistent with the category of
machine type that masterType uses. In other words, both must be Compute
Engine machine types or both must be legacy machine types. This value must be
present when scaleTier is set to CUSTOM and evaluatorCount is greater
than zero.
Corresponds to the JSON property evaluatorType
# File 'lib/google/apis/ml_v1/classes.rb', line 2636

def evaluator_type
  @evaluator_type
end
#hyperparameters ⇒ Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec
Represents a set of hyperparameters to optimize.
Corresponds to the JSON property hyperparameters
# File 'lib/google/apis/ml_v1/classes.rb', line 2641

def hyperparameters
  @hyperparameters
end
#job_dir ⇒ String
Optional. A Google Cloud Storage path in which to store training outputs and
other data needed for training. This path is passed to your TensorFlow program
as the '--job-dir' command-line argument. The benefit of specifying this field
is that Cloud ML validates the path for use in training.
Corresponds to the JSON property jobDir
# File 'lib/google/apis/ml_v1/classes.rb', line 2649

def job_dir
  @job_dir
end
#master_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
Corresponds to the JSON property masterConfig
# File 'lib/google/apis/ml_v1/classes.rb', line 2654

def master_config
  @master_config
end
#master_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training job's
master worker. You must specify this field when scaleTier is set to CUSTOM.
You can use certain Compute Engine machine types directly in this field. See
the list of compatible Compute Engine machine types. Alternatively, you can use
certain legacy machine types in this field. See the list of legacy
machine types.
Finally, if you want to use a TPU for training, specify cloud_tpu in this
field. Learn more about the special configuration options for training with
TPUs.
Corresponds to the JSON property masterType
# File 'lib/google/apis/ml_v1/classes.rb', line 2668

def master_type
  @master_type
end
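To make the CUSTOM-tier constraint concrete, here is a hedged sketch (machine type names are illustrative values, not recommendations) showing a payload where masterType is required because scaleTier is CUSTOM:

```ruby
# Sketch: when scaleTier is CUSTOM, masterType must be set, and worker/parameter
# server types must belong to the same machine-type category as masterType.
custom_tier = {
  'scaleTier'   => 'CUSTOM',
  'masterType'  => 'n1-standard-8',  # a Compute Engine machine type
  'workerType'  => 'n1-standard-8',  # same category as masterType
  'workerCount' => 2
}

# A minimal client-side check mirroring the documented rule.
if custom_tier['scaleTier'] == 'CUSTOM' && custom_tier['masterType'].nil?
  raise ArgumentError, 'masterType is required when scaleTier is CUSTOM'
end
```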
#network ⇒ String
Optional. The full name of the Compute Engine network to
which the Job is peered. For example, projects/12345/global/networks/myVPC.
The format of this field is projects/project/global/networks/network`,
whereprojectis a project number (like12345) andnetworkis network
name. Private services access must already be configured for the network. If
left unspecified, the Job is not peered with any network. [Learn about using
VPC Network Peering.](/ai-platform/training/docs/vpc-peering).
Corresponds to the JSON propertynetwork`
# File 'lib/google/apis/ml_v1/classes.rb', line 2679

def network
  @network
end
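The projects/{project}/global/networks/{network} format described above can be split with an ordinary regular expression; this is a sketch for validation on the client side, not part of the library:

```ruby
# Matches the documented network name format, capturing the project number
# and the network name. Project must be numeric per the description above.
NETWORK_RE = %r{\Aprojects/(?<project>\d+)/global/networks/(?<network>[^/]+)\z}

m = NETWORK_RE.match('projects/12345/global/networks/myVPC')
project = m[:project]   # => "12345"
network = m[:network]   # => "myVPC"
```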
#package_uris ⇒ Array<String>
Required. The Google Cloud Storage location of the packages with the training
program and any additional dependencies. The maximum number of package URIs is
100.
Corresponds to the JSON property packageUris
# File 'lib/google/apis/ml_v1/classes.rb', line 2686

def package_uris
  @package_uris
end
#parameter_server_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
Corresponds to the JSON property parameterServerConfig
# File 'lib/google/apis/ml_v1/classes.rb', line 2691

def parameter_server_config
  @parameter_server_config
end
#parameter_server_count ⇒ Fixnum
Optional. The number of parameter server replicas to use for the training job.
Each replica in the cluster will be of the type specified in
parameter_server_type. This value can only be used when scale_tier is set
to CUSTOM. If you set this value, you must also set parameter_server_type.
The default value is zero.
Corresponds to the JSON property parameterServerCount
# File 'lib/google/apis/ml_v1/classes.rb', line 2700

def parameter_server_count
  @parameter_server_count
end
#parameter_server_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training job's
parameter server. The supported values are the same as those described in the
entry for master_type. This value must be consistent with the category of
machine type that masterType uses. In other words, both must be Compute
Engine machine types or both must be legacy machine types. This value must be
present when scaleTier is set to CUSTOM and parameter_server_count is
greater than zero.
Corresponds to the JSON property parameterServerType
# File 'lib/google/apis/ml_v1/classes.rb', line 2711

def parameter_server_type
  @parameter_server_type
end
#python_module ⇒ String
Required. The Python module name to run after installing the packages.
Corresponds to the JSON property pythonModule
# File 'lib/google/apis/ml_v1/classes.rb', line 2716

def python_module
  @python_module
end
#python_version ⇒ String
Optional. The version of Python used in training. You must either specify this
field or specify masterConfig.imageUri. The following Python versions are
available: * Python '3.7' is available when runtime_version is set to '1.15'
or later. * Python '3.5' is available when runtime_version is set to a
version from '1.4' to '1.14'. * Python '2.7' is available when
runtime_version is set to '1.15' or earlier. Read more about the Python
versions available for each runtime version.
Corresponds to the JSON property pythonVersion
# File 'lib/google/apis/ml_v1/classes.rb', line 2728

def python_version
  @python_version
end
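The runtime/Python compatibility rules above lend themselves to a small lookup helper. This sketch restates only the rules listed on this page, using Gem::Version so that version strings compare numerically rather than lexically:

```ruby
# Returns the Python versions available for a given runtime version,
# per the rules documented above:
#   3.7 -> runtime 1.15 or later
#   3.5 -> runtime 1.4 through 1.14
#   2.7 -> runtime 1.15 or earlier
def python_versions_for(runtime_version)
  rt = Gem::Version.new(runtime_version)
  versions = []
  versions << '3.7' if rt >= Gem::Version.new('1.15')
  versions << '3.5' if rt >= Gem::Version.new('1.4') && rt <= Gem::Version.new('1.14')
  versions << '2.7' if rt <= Gem::Version.new('1.15')
  versions
end

python_versions_for('1.15')  # => ["3.7", "2.7"]
```

Gem::Version is used deliberately: a naive string or float comparison would order '1.4' after '1.14'.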
#region ⇒ String
Required. The region to run the training job in. See the available regions for AI Platform Training.
Corresponds to the JSON property region
# File 'lib/google/apis/ml_v1/classes.rb', line 2734

def region
  @region
end
#runtime_version ⇒ String
Optional. The AI Platform runtime version to use for training. You must either
specify this field or specify masterConfig.imageUri. For more information,
see the runtime version list
and learn how to manage runtime versions.
Corresponds to the JSON property runtimeVersion
# File 'lib/google/apis/ml_v1/classes.rb', line 2743

def runtime_version
  @runtime_version
end
#scale_tier ⇒ String
Required. Specifies the machine types, the number of replicas for workers and
parameter servers.
Corresponds to the JSON property scaleTier
# File 'lib/google/apis/ml_v1/classes.rb', line 2749

def scale_tier
  @scale_tier
end
#scheduling ⇒ Google::Apis::MlV1::GoogleCloudMlV1Scheduling
All parameters related to scheduling of training jobs.
Corresponds to the JSON property scheduling
# File 'lib/google/apis/ml_v1/classes.rb', line 2754

def scheduling
  @scheduling
end
#service_account ⇒ String
Optional. The email address of a service account to use when running the
training application. You must have the iam.serviceAccounts.actAs permission
for the specified service account. In addition, the AI Platform Training
Google-managed service account must have the roles/iam.serviceAccountAdmin
role for the specified service account. Learn more about configuring a service
account. If not specified, the AI Platform Training Google-managed service
account is used by default.
Corresponds to the JSON property serviceAccount
# File 'lib/google/apis/ml_v1/classes.rb', line 2766

def service_account
  @service_account
end
#use_chief_in_tf_config ⇒ Boolean Also known as: use_chief_in_tf_config?
Optional. Use chief instead of master in the TF_CONFIG environment
variable when training with a custom container. Defaults to false. Learn
more about this field. This field has no effect for training jobs that
don't use a custom container.
Corresponds to the JSON property useChiefInTfConfig
# File 'lib/google/apis/ml_v1/classes.rb', line 2775

def use_chief_in_tf_config
  @use_chief_in_tf_config
end
#worker_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig
Represents the configuration for a replica in a cluster.
Corresponds to the JSON property workerConfig
# File 'lib/google/apis/ml_v1/classes.rb', line 2781

def worker_config
  @worker_config
end
#worker_count ⇒ Fixnum
Optional. The number of worker replicas to use for the training job. Each
replica in the cluster will be of the type specified in worker_type. This
value can only be used when scale_tier is set to CUSTOM. If you set this
value, you must also set worker_type. The default value is zero.
Corresponds to the JSON property workerCount
# File 'lib/google/apis/ml_v1/classes.rb', line 2789

def worker_count
  @worker_count
end
#worker_type ⇒ String
Optional. Specifies the type of virtual machine to use for your training job's
worker nodes. The supported values are the same as those described in the
entry for masterType. This value must be consistent with the category of
machine type that masterType uses. In other words, both must be Compute
Engine machine types or both must be legacy machine types. If you use
cloud_tpu for this value, see special instructions for configuring a custom
TPU machine. This value must be present when scaleTier
is set to CUSTOM and workerCount is greater than zero.
Corresponds to the JSON property workerType
# File 'lib/google/apis/ml_v1/classes.rb', line 2802

def worker_type
  @worker_type
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/ml_v1/classes.rb', line 2809

def update!(**args)
  @args = args[:args] if args.key?(:args)
  @encryption_config = args[:encryption_config] if args.key?(:encryption_config)
  @evaluator_config = args[:evaluator_config] if args.key?(:evaluator_config)
  @evaluator_count = args[:evaluator_count] if args.key?(:evaluator_count)
  @evaluator_type = args[:evaluator_type] if args.key?(:evaluator_type)
  @hyperparameters = args[:hyperparameters] if args.key?(:hyperparameters)
  @job_dir = args[:job_dir] if args.key?(:job_dir)
  @master_config = args[:master_config] if args.key?(:master_config)
  @master_type = args[:master_type] if args.key?(:master_type)
  @network = args[:network] if args.key?(:network)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @parameter_server_config = args[:parameter_server_config] if args.key?(:parameter_server_config)
  @parameter_server_count = args[:parameter_server_count] if args.key?(:parameter_server_count)
  @parameter_server_type = args[:parameter_server_type] if args.key?(:parameter_server_type)
  @python_module = args[:python_module] if args.key?(:python_module)
  @python_version = args[:python_version] if args.key?(:python_version)
  @region = args[:region] if args.key?(:region)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @scale_tier = args[:scale_tier] if args.key?(:scale_tier)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @service_account = args[:service_account] if args.key?(:service_account)
  @use_chief_in_tf_config = args[:use_chief_in_tf_config] if args.key?(:use_chief_in_tf_config)
  @worker_config = args[:worker_config] if args.key?(:worker_config)
  @worker_count = args[:worker_count] if args.key?(:worker_count)
  @worker_type = args[:worker_type] if args.key?(:worker_type)
end
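Because update! copies only the keys actually present in **args, it performs a partial update: attributes not mentioned keep their current values. This standalone sketch (a toy class, not the real one) demonstrates that behavior with two of the attributes:

```ruby
# Toy class mimicking the update! pattern shown above for two attributes.
class SketchInput
  attr_accessor :region, :runtime_version

  def update!(**args)
    @region = args[:region] if args.key?(:region)
    @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
    self
  end
end

obj = SketchInput.new
obj.update!(region: 'us-central1', runtime_version: '2.11')
obj.update!(runtime_version: '2.12')  # region is untouched
```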