Class: Google::Apis::DataprocV1::SparkStandaloneAutoscalingConfig
- Inherits: Object
  - Object
  - Google::Apis::DataprocV1::SparkStandaloneAutoscalingConfig
- Includes:
  - Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/dataproc_v1/classes.rb,
    lib/google/apis/dataproc_v1/representations.rb
Overview
Basic autoscaling configurations for Spark Standalone.
Instance Attribute Summary collapse
-
#graceful_decommission_timeout ⇒ String
Required.
-
#remove_only_idle_workers ⇒ Boolean
(also: #remove_only_idle_workers?)
Optional.
-
#scale_down_factor ⇒ Float
Required.
-
#scale_down_min_worker_fraction ⇒ Float
Optional.
-
#scale_up_factor ⇒ Float
Required.
-
#scale_up_min_worker_fraction ⇒ Float
Optional.
Instance Method Summary collapse
-
#initialize(**args) ⇒ SparkStandaloneAutoscalingConfig
constructor
A new instance of SparkStandaloneAutoscalingConfig.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStandaloneAutoscalingConfig
Returns a new instance of SparkStandaloneAutoscalingConfig.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5491

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#graceful_decommission_timeout ⇒ String
Required. Timeout for graceful decommissioning of Spark workers. Specifies the
duration to wait for Spark workers to complete Spark decommissioning tasks
before forcefully removing workers. Only applicable to downscaling operations.
Bounds: 0s, 1d.
Corresponds to the JSON property gracefulDecommissionTimeout
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5447

def graceful_decommission_timeout
  @graceful_decommission_timeout
end
#remove_only_idle_workers ⇒ Boolean Also known as: remove_only_idle_workers?
Optional. Remove only idle workers when scaling down the cluster.
Corresponds to the JSON property removeOnlyIdleWorkers
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5452

def remove_only_idle_workers
  @remove_only_idle_workers
end
#scale_down_factor ⇒ Float
Required. Fraction of required executors to remove from Spark Standalone
clusters. A scale-down factor of 1.0 will result in scaling down so that there
are no more executors for the Spark job (more aggressive scaling). A scale-down
factor closer to 0 will result in a smaller magnitude of scaling down (less
aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleDownFactor
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5462

def scale_down_factor
  @scale_down_factor
end
#scale_down_min_worker_fraction ⇒ Float
Optional. Minimum scale-down threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-down for the
cluster to scale. A threshold of 0 means the autoscaler will scale down on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleDownMinWorkerFraction
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5471

def scale_down_min_worker_fraction
  @scale_down_min_worker_fraction
end
#scale_up_factor ⇒ Float
Required. Fraction of required workers to add to Spark Standalone clusters. A
scale-up factor of 1.0 will result in scaling up so that there are no more
required workers for the Spark job (more aggressive scaling). A scale-up
factor closer to 0 will result in a smaller magnitude of scaling up (less
aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleUpFactor
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5480

def scale_up_factor
  @scale_up_factor
end
#scale_up_min_worker_fraction ⇒ Float
Optional. Minimum scale-up threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-up for the
cluster to scale. A threshold of 0 means the autoscaler will scale up on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleUpMinWorkerFraction
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5489

def scale_up_min_worker_fraction
  @scale_up_min_worker_fraction
end
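The factor and min-worker-fraction semantics described above can be sketched numerically. This is an illustration of the documented behavior only: `recommended_delta` and `scales?` are hypothetical helpers, not part of the Dataproc API, and the ceiling-based rounding is an assumption.

```ruby
# Illustrative only. With a scale-up factor of 0.5, the autoscaler asks
# for half of the additionally required workers (assumed rounded up).
def recommended_delta(required_workers, factor)
  (required_workers * factor).ceil
end

# A recommendation is acted on only if it meets the min-worker-fraction
# threshold of the current cluster size (0.0 means any change scales).
def scales?(delta, cluster_size, min_worker_fraction)
  delta >= (cluster_size * min_worker_fraction).ceil
end

delta = recommended_delta(10, 0.5)  # 5 workers recommended
scales?(delta, 20, 0.1)             # 5 >= 2, so the cluster scales
scales?(1, 20, 0.1)                 # 1 < 2, below the threshold
```

This mirrors the 20-worker example in the attribute docs: a threshold of 0.1 requires at least a 2-worker recommendation before any scaling occurs.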
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5496

def update!(**args)
  @graceful_decommission_timeout = args[:graceful_decommission_timeout] if args.key?(:graceful_decommission_timeout)
  @remove_only_idle_workers = args[:remove_only_idle_workers] if args.key?(:remove_only_idle_workers)
  @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  @scale_down_min_worker_fraction = args[:scale_down_min_worker_fraction] if args.key?(:scale_down_min_worker_fraction)
  @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
  @scale_up_min_worker_fraction = args[:scale_up_min_worker_fraction] if args.key?(:scale_up_min_worker_fraction)
end
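The `initialize`/`update!` pattern above assigns only the properties present in `args`, leaving the rest untouched. A self-contained sketch of that pattern (a standalone class with two of the attributes; the real class also mixes in Core::Hashable and Core::JsonObjectSupport, omitted here):

```ruby
# Minimal sketch of the keyword-args initialize/update! pattern.
# AutoscalingConfigSketch is a stand-in, not the real generated class.
class AutoscalingConfigSketch
  attr_accessor :scale_up_factor, :scale_down_factor

  def initialize(**args)
    update!(**args)
  end

  # Assign only the keys actually passed; absent keys keep their values.
  def update!(**args)
    @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
    @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  end
end

config = AutoscalingConfigSketch.new(scale_up_factor: 0.5)
config.update!(scale_down_factor: 1.0)  # scale_up_factor is left at 0.5
```

Because `update!` checks `args.key?`, a later call can set one property without clobbering the others — the same behavior the generated `update!` above provides for all six attributes.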