Class: Google::Apis::MlV1::GoogleCloudMlV1TrainingInput

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/ml_v1/classes.rb,
lib/google/apis/ml_v1/representations.rb

Overview

Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudMlV1TrainingInput

Returns a new instance of GoogleCloudMlV1TrainingInput.



# File 'lib/google/apis/ml_v1/classes.rb', line 2812

def initialize(**args)
   update!(**args)
end
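As the source above shows, the constructor simply forwards its keyword arguments to `update!`, so any of the attributes documented below can be set at construction time and only the keys actually passed are assigned. A minimal self-contained sketch of that pattern (a stand-in class with just two of the attributes, not the real gem class):

```ruby
# Stand-in illustrating the initialize -> update! pattern used by
# GoogleCloudMlV1TrainingInput: each attribute is assigned only when
# its key is present in the arguments hash.
class TrainingInputSketch
  attr_accessor :region, :scale_tier

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @region = args[:region] if args.key?(:region)
    @scale_tier = args[:scale_tier] if args.key?(:scale_tier)
  end
end

input = TrainingInputSketch.new(region: 'us-central1')
```

Because assignment is guarded by `args.key?`, a later `update!` call touches only the attributes you pass and leaves the rest unchanged.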

Instance Attribute Details

#args ⇒ Array&lt;String&gt;

Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command. Corresponds to the JSON property args

Returns:

  • (Array<String>)


# File 'lib/google/apis/ml_v1/classes.rb', line 2606

def args
  @args
end

#encryption_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1EncryptionConfig

Represents a custom encryption key configuration that can be applied to a resource. Corresponds to the JSON property encryptionConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 2612

def encryption_config
  @encryption_config
end

#evaluator_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig

Represents the configuration for a replica in a cluster. Corresponds to the JSON property evaluatorConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 2617

def evaluator_config
  @evaluator_config
end

#evaluator_count ⇒ Fixnum

Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero. Corresponds to the JSON property evaluatorCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/ml_v1/classes.rb', line 2625

def evaluator_count
  @evaluator_count
end

#evaluator_type ⇒ String

Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero. Corresponds to the JSON property evaluatorType

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2636

def evaluator_type
  @evaluator_type
end

#hyperparameters ⇒ Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec

Represents a set of hyperparameters to optimize. Corresponds to the JSON property hyperparameters



# File 'lib/google/apis/ml_v1/classes.rb', line 2641

def hyperparameters
  @hyperparameters
end

#job_dir ⇒ String

Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training. Corresponds to the JSON property jobDir

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2649

def job_dir
  @job_dir
end

#master_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig

Represents the configuration for a replica in a cluster. Corresponds to the JSON property masterConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 2654

def master_config
  @master_config
end

#master_type ⇒ String

Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Learn more about using Compute Engine machine types. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100. Corresponds to the JSON property masterType

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2676

def master_type
  @master_type
end
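Several type fields below (evaluatorType, workerType, parameterServerType) must share masterType's category: all Compute Engine machine types, or all legacy machine types. A hedged sketch of that consistency check, classifying names using only the lists given in this section (this helper is illustrative and not part of the gem):

```ruby
# Legacy machine type names, as listed in the masterType entry above.
LEGACY_TYPES = %w[
  standard large_model complex_model_s complex_model_m complex_model_l
  standard_gpu complex_model_m_gpu complex_model_l_gpu standard_p100
].freeze

# Classify a machine type name as :legacy or :compute_engine.
def machine_category(type)
  LEGACY_TYPES.include?(type) ? :legacy : :compute_engine
end

# True when another replica's type is in the same category as the master's.
def consistent_with_master?(master_type, other_type)
  machine_category(master_type) == machine_category(other_type)
end
```

For example, pairing n1-standard-4 with standard_gpu mixes categories and would be rejected by the service.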

#network ⇒ String

Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/project/global/networks/network, where project is a project number (like 12345) and network is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. [Learn about using VPC Network Peering.](/ai-platform/training/docs/vpc-peering) Corresponds to the JSON property network

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2687

def network
  @network
end

#package_uris ⇒ Array&lt;String&gt;

Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100. Corresponds to the JSON property packageUris

Returns:

  • (Array<String>)


# File 'lib/google/apis/ml_v1/classes.rb', line 2694

def package_uris
  @package_uris
end

#parameter_server_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig

Represents the configuration for a replica in a cluster. Corresponds to the JSON property parameterServerConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 2699

def parameter_server_config
  @parameter_server_config
end

#parameter_server_count ⇒ Fixnum

Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero. Corresponds to the JSON property parameterServerCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/ml_v1/classes.rb', line 2708

def parameter_server_count
  @parameter_server_count
end

#parameter_server_type ⇒ String

Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero. Corresponds to the JSON property parameterServerType

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2719

def parameter_server_type
  @parameter_server_type
end

#python_module ⇒ String

Required. The Python module name to run after installing the packages. Corresponds to the JSON property pythonModule

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2724

def python_module
  @python_module
end

#python_version ⇒ String

Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: * Python '3.7' is available when runtime_version is set to '1.15' or later. * Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'. * Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version. Corresponds to the JSON property pythonVersion

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2736

def python_version
  @python_version
end
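The runtime/Python compatibility rules documented above can be encoded as a quick lookup. This helper is an illustration of those rules only (it is not part of the gem), using Gem::Version for the comparisons:

```ruby
require 'rubygems' # for Gem::Version

# Python versions valid for a given runtime version, per the rules
# documented for python_version:
#   3.7 -> runtime 1.15 or later
#   3.5 -> runtime 1.4 through 1.14
#   2.7 -> runtime 1.15 or earlier
def allowed_python_versions(runtime_version)
  rv = Gem::Version.new(runtime_version)
  allowed = []
  allowed << '3.7' if rv >= Gem::Version.new('1.15')
  allowed << '3.5' if rv >= Gem::Version.new('1.4') && rv <= Gem::Version.new('1.14')
  allowed << '2.7' if rv <= Gem::Version.new('1.15')
  allowed
end
```

For runtime 1.15, both '3.7' and '2.7' are valid; earlier 1.x runtimes allow '3.5' and '2.7'.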

#region ⇒ String

Required. The region to run the training job in. See the available regions for AI Platform Training. Corresponds to the JSON property region

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2742

def region
  @region
end

#runtime_version ⇒ String

Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions. Corresponds to the JSON property runtimeVersion

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2751

def runtime_version
  @runtime_version
end

#scale_tier ⇒ String

Required. Specifies the machine types and the number of replicas for workers and parameter servers. Corresponds to the JSON property scaleTier

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2757

def scale_tier
  @scale_tier
end

#scheduling ⇒ Google::Apis::MlV1::GoogleCloudMlV1Scheduling

All parameters related to scheduling of training jobs. Corresponds to the JSON property scheduling



# File 'lib/google/apis/ml_v1/classes.rb', line 2762

def scheduling
  @scheduling
end

#service_account ⇒ String

Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default. Corresponds to the JSON property serviceAccount

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2774

def service_account
  @service_account
end

#use_chief_in_tf_config ⇒ Boolean (also: #use_chief_in_tf_config?)

Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container. Corresponds to the JSON property useChiefInTfConfig

Returns:

  • (Boolean)


# File 'lib/google/apis/ml_v1/classes.rb', line 2783

def use_chief_in_tf_config
  @use_chief_in_tf_config
end

#worker_config ⇒ Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig

Represents the configuration for a replica in a cluster. Corresponds to the JSON property workerConfig



# File 'lib/google/apis/ml_v1/classes.rb', line 2789

def worker_config
  @worker_config
end

#worker_count ⇒ Fixnum

Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero. Corresponds to the JSON property workerCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/ml_v1/classes.rb', line 2797

def worker_count
  @worker_count
end

#worker_type ⇒ String

Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero. Corresponds to the JSON property workerType

Returns:

  • (String)


# File 'lib/google/apis/ml_v1/classes.rb', line 2810

def worker_type
  @worker_type
end
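The count/type attributes above all follow the same documented rule: a non-zero replica count requires scaleTier to be CUSTOM and the matching type field to be set. A small sketch of that validation (a hypothetical helper, not part of the gem), applicable alike to workerCount/workerType, parameterServerCount/parameterServerType, and evaluatorCount/evaluatorType:

```ruby
# Validates one count/type pairing per the rules documented above:
# a count greater than zero requires scaleTier CUSTOM and a machine type.
def validate_replica_pair!(scale_tier:, count:, type:)
  count ||= 0 # counts default to zero when unset
  if count > 0
    raise ArgumentError, 'non-zero counts require scaleTier CUSTOM' unless scale_tier == 'CUSTOM'
    raise ArgumentError, 'a non-zero count requires a machine type' if type.nil?
  end
  true
end
```

For example, workerCount: 2 with scaleTier BASIC, or workerCount: 2 with no workerType, would both be rejected.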

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/ml_v1/classes.rb', line 2817

def update!(**args)
  @args = args[:args] if args.key?(:args)
  @encryption_config = args[:encryption_config] if args.key?(:encryption_config)
  @evaluator_config = args[:evaluator_config] if args.key?(:evaluator_config)
  @evaluator_count = args[:evaluator_count] if args.key?(:evaluator_count)
  @evaluator_type = args[:evaluator_type] if args.key?(:evaluator_type)
  @hyperparameters = args[:hyperparameters] if args.key?(:hyperparameters)
  @job_dir = args[:job_dir] if args.key?(:job_dir)
  @master_config = args[:master_config] if args.key?(:master_config)
  @master_type = args[:master_type] if args.key?(:master_type)
  @network = args[:network] if args.key?(:network)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @parameter_server_config = args[:parameter_server_config] if args.key?(:parameter_server_config)
  @parameter_server_count = args[:parameter_server_count] if args.key?(:parameter_server_count)
  @parameter_server_type = args[:parameter_server_type] if args.key?(:parameter_server_type)
  @python_module = args[:python_module] if args.key?(:python_module)
  @python_version = args[:python_version] if args.key?(:python_version)
  @region = args[:region] if args.key?(:region)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @scale_tier = args[:scale_tier] if args.key?(:scale_tier)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @service_account = args[:service_account] if args.key?(:service_account)
  @use_chief_in_tf_config = args[:use_chief_in_tf_config] if args.key?(:use_chief_in_tf_config)
  @worker_config = args[:worker_config] if args.key?(:worker_config)
  @worker_count = args[:worker_count] if args.key?(:worker_count)
  @worker_type = args[:worker_type] if args.key?(:worker_type)
end