Class: Google::Apis::DataflowV1b3::WorkerPool
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  generated/google/apis/dataflow_v1b3/classes.rb,
  generated/google/apis/dataflow_v1b3/representations.rb
Overview
Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job.
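A minimal sketch of how this class is typically used with the generated Ruby client when assembling a job environment. The attribute values are illustrative, and the Environment#worker_pools accessor is assumed to exist in this API version as in other generated classes:

require 'google/apis/dataflow_v1b3'

# Build a worker pool for a batch job; keyword arguments mirror the
# snake_case accessors documented below.
pool = Google::Apis::DataflowV1b3::WorkerPool.new(
  kind: 'harness',
  num_workers: 3,
  machine_type: 'n1-standard-1',
  disk_size_gb: 50,
  zone: 'us-central1-f',
  teardown_policy: 'TEARDOWN_ALWAYS'
)

# A job's environment may reference one or more pools (assumes the
# Environment class exposes a worker_pools accessor).
environment = Google::Apis::DataflowV1b3::Environment.new(worker_pools: [pool])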
Instance Attribute Summary collapse
-
#autoscaling_settings ⇒ Google::Apis::DataflowV1b3::AutoscalingSettings
Settings for WorkerPool autoscaling.
-
#data_disks ⇒ Array<Google::Apis::DataflowV1b3::Disk>
Data disks that are used by a VM in this workflow.
-
#default_package_set ⇒ String
The default package set to install.
-
#disk_size_gb ⇒ Fixnum
Size of root disk for VMs, in GB.
-
#disk_source_image ⇒ String
Fully qualified source image for disks.
-
#disk_type ⇒ String
Type of root disk for VMs.
-
#ip_configuration ⇒ String
Configuration for VM IPs.
-
#kind ⇒ String
The kind of the worker pool; currently only harness and shuffle are supported.
-
#machine_type ⇒ String
Machine type (e.g. "n1-standard-1").
-
#metadata ⇒ Hash<String,String>
Metadata to set on the Google Compute Engine VMs.
-
#network ⇒ String
Network to which VMs will be assigned.
-
#num_threads_per_worker ⇒ Fixnum
The number of threads per worker harness.
-
#num_workers ⇒ Fixnum
Number of Google Compute Engine workers in this pool needed to execute the job.
-
#on_host_maintenance ⇒ String
The action to take on host maintenance, as defined by the Google Compute Engine API.
-
#packages ⇒ Array<Google::Apis::DataflowV1b3::Package>
Packages to be installed on workers.
-
#pool_args ⇒ Hash<String,Object>
Extra arguments for this worker pool.
-
#subnetwork ⇒ String
Subnetwork to which VMs will be assigned, if desired.
-
#taskrunner_settings ⇒ Google::Apis::DataflowV1b3::TaskRunnerSettings
Taskrunner configuration settings.
-
#teardown_policy ⇒ String
Sets the policy for determining when to turn down the worker pool.
-
#worker_harness_container_image ⇒ String
Required.
-
#zone ⇒ String
Zone to run the worker pools in.
Instance Method Summary collapse
-
#initialize(**args) ⇒ WorkerPool
constructor
A new instance of WorkerPool.
-
#update!(**args) ⇒ Object
Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ WorkerPool
Returns a new instance of WorkerPool
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5118

def initialize(**args)
  update!(**args)
end
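For reference, a short usage sketch of the constructor; any subset of the documented attributes can be supplied as keyword arguments, since construction simply forwards them to #update! (values are illustrative):

# Attributes not passed in remain nil until set explicitly or by the service.
pool = Google::Apis::DataflowV1b3::WorkerPool.new(num_workers: 5, zone: 'us-central1-b')
pool.machine_type # => nil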
Instance Attribute Details
#autoscaling_settings ⇒ Google::Apis::DataflowV1b3::AutoscalingSettings
Settings for WorkerPool autoscaling.
Corresponds to the JSON property autoscalingSettings
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 4988

def autoscaling_settings
  @autoscaling_settings
end
#data_disks ⇒ Array<Google::Apis::DataflowV1b3::Disk>
Data disks that are used by a VM in this workflow.
Corresponds to the JSON property dataDisks
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 4993

def data_disks
  @data_disks
end
#default_package_set ⇒ String
The default package set to install. This allows the service to
select a default set of packages which are useful to worker
harnesses written in a particular language.
Corresponds to the JSON property defaultPackageSet
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5000

def default_package_set
  @default_package_set
end
#disk_size_gb ⇒ Fixnum
Size of root disk for VMs, in GB. If zero or unspecified, the service will
attempt to choose a reasonable default.
Corresponds to the JSON property diskSizeGb
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5006

def disk_size_gb
  @disk_size_gb
end
#disk_source_image ⇒ String
Fully qualified source image for disks.
Corresponds to the JSON property diskSourceImage
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5011

def disk_source_image
  @disk_source_image
end
#disk_type ⇒ String
Type of root disk for VMs. If empty or unspecified, the service will
attempt to choose a reasonable default.
Corresponds to the JSON property diskType
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5017

def disk_type
  @disk_type
end
#ip_configuration ⇒ String
Configuration for VM IPs.
Corresponds to the JSON property ipConfiguration
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5022

def ip_configuration
  @ip_configuration
end
#kind ⇒ String
The kind of the worker pool; currently only harness and shuffle are supported.
Corresponds to the JSON property kind
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5028

def kind
  @kind
end
#machine_type ⇒ String
Machine type (e.g. "n1-standard-1"). If empty or unspecified, the
service will attempt to choose a reasonable default.
Corresponds to the JSON property machineType
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5034

def machine_type
  @machine_type
end
#metadata ⇒ Hash<String,String>
Metadata to set on the Google Compute Engine VMs.
Corresponds to the JSON property metadata
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5039

def metadata
  @metadata
end
#network ⇒ String
Network to which VMs will be assigned. If empty or unspecified,
the service will use the network "default".
Corresponds to the JSON property network
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5045

def network
  @network
end
#num_threads_per_worker ⇒ Fixnum
The number of threads per worker harness. If empty or unspecified, the
service will choose a number of threads (according to the number of cores
on the selected machine type for batch, or 1 by convention for streaming).
Corresponds to the JSON property numThreadsPerWorker
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5052

def num_threads_per_worker
  @num_threads_per_worker
end
#num_workers ⇒ Fixnum
Number of Google Compute Engine workers in this pool needed to
execute the job. If zero or unspecified, the service will
attempt to choose a reasonable default.
Corresponds to the JSON property numWorkers
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5059

def num_workers
  @num_workers
end
#on_host_maintenance ⇒ String
The action to take on host maintenance, as defined by the Google
Compute Engine API.
Corresponds to the JSON property onHostMaintenance
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5065

def on_host_maintenance
  @on_host_maintenance
end
#packages ⇒ Array<Google::Apis::DataflowV1b3::Package>
Packages to be installed on workers.
Corresponds to the JSON property packages
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5070

def packages
  @packages
end
#pool_args ⇒ Hash<String,Object>
Extra arguments for this worker pool.
Corresponds to the JSON property poolArgs
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5075

def pool_args
  @pool_args
end
#subnetwork ⇒ String
Subnetwork to which VMs will be assigned, if desired. Expected to be of
the form "regions/REGION/subnetworks/SUBNETWORK".
Corresponds to the JSON property subnetwork
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5081

def subnetwork
  @subnetwork
end
#taskrunner_settings ⇒ Google::Apis::DataflowV1b3::TaskRunnerSettings
Taskrunner configuration settings.
Corresponds to the JSON property taskrunnerSettings
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5086

def taskrunner_settings
  @taskrunner_settings
end
#teardown_policy ⇒ String
Sets the policy for determining when to turn down the worker pool.
Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER.
TEARDOWN_ALWAYS means workers are always torn down regardless of whether
the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down
if the job succeeds. TEARDOWN_NEVER means the workers are never torn down.
If the workers are not torn down by the service, they will
continue to run and use Google Compute Engine VM resources in the
user's project until they are explicitly terminated by the user.
Because of this, Google recommends using the TEARDOWN_ALWAYS
policy except for small, manually supervised test jobs.
If unknown or unspecified, the service will attempt to choose a reasonable
default.
Corresponds to the JSON property teardownPolicy
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5104

def teardown_policy
  @teardown_policy
end
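As a brief illustration of the recommendation above (the value is one of the documented constants; the rest of the configuration is assumed):

# Production pipelines should normally use TEARDOWN_ALWAYS;
# TEARDOWN_NEVER is mainly for small, manually supervised test jobs.
pool = Google::Apis::DataflowV1b3::WorkerPool.new(teardown_policy: 'TEARDOWN_ALWAYS')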
#worker_harness_container_image ⇒ String
Required. Docker container image that executes the Cloud Dataflow worker
harness, residing in Google Container Registry.
Corresponds to the JSON property workerHarnessContainerImage
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5110

def worker_harness_container_image
  @worker_harness_container_image
end
#zone ⇒ String
Zone to run the worker pools in. If empty or unspecified, the service
will attempt to choose a reasonable default.
Corresponds to the JSON property zone
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5116

def zone
  @zone
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'generated/google/apis/dataflow_v1b3/classes.rb', line 5123

def update!(**args)
  @autoscaling_settings = args[:autoscaling_settings] if args.key?(:autoscaling_settings)
  @data_disks = args[:data_disks] if args.key?(:data_disks)
  @default_package_set = args[:default_package_set] if args.key?(:default_package_set)
  @disk_size_gb = args[:disk_size_gb] if args.key?(:disk_size_gb)
  @disk_source_image = args[:disk_source_image] if args.key?(:disk_source_image)
  @disk_type = args[:disk_type] if args.key?(:disk_type)
  @ip_configuration = args[:ip_configuration] if args.key?(:ip_configuration)
  @kind = args[:kind] if args.key?(:kind)
  @machine_type = args[:machine_type] if args.key?(:machine_type)
  @metadata = args[:metadata] if args.key?(:metadata)
  @network = args[:network] if args.key?(:network)
  @num_threads_per_worker = args[:num_threads_per_worker] if args.key?(:num_threads_per_worker)
  @num_workers = args[:num_workers] if args.key?(:num_workers)
  @on_host_maintenance = args[:on_host_maintenance] if args.key?(:on_host_maintenance)
  @packages = args[:packages] if args.key?(:packages)
  @pool_args = args[:pool_args] if args.key?(:pool_args)
  @subnetwork = args[:subnetwork] if args.key?(:subnetwork)
  @taskrunner_settings = args[:taskrunner_settings] if args.key?(:taskrunner_settings)
  @teardown_policy = args[:teardown_policy] if args.key?(:teardown_policy)
  @worker_harness_container_image = args[:worker_harness_container_image] if args.key?(:worker_harness_container_image)
  @zone = args[:zone] if args.key?(:zone)
end
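A short usage sketch of #update! on an existing instance; as the source shows, only the keys present in the arguments are assigned, so other attributes are left untouched (values are illustrative):

pool = Google::Apis::DataflowV1b3::WorkerPool.new(num_workers: 3)
pool.update!(machine_type: 'n1-standard-4', disk_size_gb: 100)
pool.num_workers # => 3, unchanged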