Class: Google::Apis::DataprocV1::SparkStandaloneAutoscalingConfig
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/dataproc_v1/classes.rb
  - lib/google/apis/dataproc_v1/representations.rb
Overview
Basic autoscaling configurations for Spark Standalone.
Instance Attribute Summary
-
#graceful_decommission_timeout ⇒ String
Required.
-
#remove_only_idle_workers ⇒ Boolean
(also: #remove_only_idle_workers?)
Optional.
-
#scale_down_factor ⇒ Float
Required.
-
#scale_down_min_worker_fraction ⇒ Float
Optional.
-
#scale_up_factor ⇒ Float
Required.
-
#scale_up_min_worker_fraction ⇒ Float
Optional.
Instance Method Summary
-
#initialize(**args) ⇒ SparkStandaloneAutoscalingConfig
constructor
A new instance of SparkStandaloneAutoscalingConfig.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStandaloneAutoscalingConfig
Returns a new instance of SparkStandaloneAutoscalingConfig.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5719

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#graceful_decommission_timeout ⇒ String
Required. Timeout for graceful decommissioning of Spark workers. Specifies the
duration to wait for Spark workers to complete Spark decommissioning tasks
before forcefully removing them. Only applicable to downscaling operations.
Bounds: 0s, 1d.
Corresponds to the JSON property gracefulDecommissionTimeout

# File 'lib/google/apis/dataproc_v1/classes.rb', line 5675

def graceful_decommission_timeout
  @graceful_decommission_timeout
end
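The timeout is serialized as a protobuf Duration string: decimal seconds with an "s" suffix (for example, "3.5s"). A minimal validation sketch, using "600s" as an illustrative value (not a recommended default):

```ruby
# "600s" (10 minutes) is an illustrative value only.
ONE_DAY_SECONDS = 24 * 60 * 60 # documented upper bound: 1d

timeout = '600s'
seconds = timeout.delete_suffix('s').to_f
unless (0.0..ONE_DAY_SECONDS).cover?(seconds)
  raise ArgumentError, "gracefulDecommissionTimeout out of bounds: #{timeout}"
end
```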
#remove_only_idle_workers ⇒ Boolean Also known as: remove_only_idle_workers?
Optional. Remove only idle workers when scaling down the cluster.
Corresponds to the JSON property removeOnlyIdleWorkers
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5680

def remove_only_idle_workers
  @remove_only_idle_workers
end
#scale_down_factor ⇒ Float
Required. Fraction of required executors to remove from Spark Standalone
clusters. A scale-down factor of 1.0 will result in scaling down so that there
are no more executors for the Spark job (more aggressive scaling). A
scale-down factor closer to 0 will result in a smaller magnitude of scaling
down (less aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleDownFactor

# File 'lib/google/apis/dataproc_v1/classes.rb', line 5690

def scale_down_factor
  @scale_down_factor
end
#scale_down_min_worker_fraction ⇒ Float
Optional. Minimum scale-down threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-down for the
cluster to scale. A threshold of 0 means the autoscaler will scale down on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleDownMinWorkerFraction

# File 'lib/google/apis/dataproc_v1/classes.rb', line 5699

def scale_down_min_worker_fraction
  @scale_down_min_worker_fraction
end
#scale_up_factor ⇒ Float
Required. Fraction of required workers to add to Spark Standalone clusters. A
scale-up factor of 1.0 will result in scaling up so that there are no more
required workers for the Spark Job (more aggressive scaling). A scale-up
factor closer to 0 will result in a smaller magnitude of scaling up (less
aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleUpFactor

# File 'lib/google/apis/dataproc_v1/classes.rb', line 5708

def scale_up_factor
  @scale_up_factor
end
#scale_up_min_worker_fraction ⇒ Float
Optional. Minimum scale-up threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-up for the
cluster to scale. A threshold of 0 means the autoscaler will scale up on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleUpMinWorkerFraction

# File 'lib/google/apis/dataproc_v1/classes.rb', line 5717

def scale_up_min_worker_fraction
  @scale_up_min_worker_fraction
end
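Taken together, the two scale-up knobs can be read as: the factor sets the magnitude of the recommended change, and the min-worker fraction suppresses changes that are too small to act on. A hypothetical sketch of that arithmetic; the actual decision is made by the Dataproc autoscaler service, and recommended_scale_up is not part of this library:

```ruby
# Illustrative arithmetic only; all names here are hypothetical.
def recommended_scale_up(required_workers, cluster_size,
                         scale_up_factor:, scale_up_min_worker_fraction: 0.0)
  # The factor scales the magnitude of the recommended change.
  delta = (required_workers * scale_up_factor).ceil
  # The min-worker fraction sets the smallest change worth acting on:
  # in a 20-worker cluster, a fraction of 0.1 means at least 2 workers.
  threshold = (cluster_size * scale_up_min_worker_fraction).ceil
  delta >= threshold ? delta : 0
end

# A 20-worker cluster needing 10 more workers, factor 0.5, fraction 0.1:
recommended_scale_up(10, 20, scale_up_factor: 0.5,
                     scale_up_min_worker_fraction: 0.1) # => 5
```

With a fraction of 0 the threshold is 0 workers, so any recommended change is applied, matching the documented default behavior.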
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.

# File 'lib/google/apis/dataproc_v1/classes.rb', line 5724

def update!(**args)
  @graceful_decommission_timeout = args[:graceful_decommission_timeout] if args.key?(:graceful_decommission_timeout)
  @remove_only_idle_workers = args[:remove_only_idle_workers] if args.key?(:remove_only_idle_workers)
  @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  @scale_down_min_worker_fraction = args[:scale_down_min_worker_fraction] if args.key?(:scale_down_min_worker_fraction)
  @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
  @scale_up_min_worker_fraction = args[:scale_up_min_worker_fraction] if args.key?(:scale_up_min_worker_fraction)
end
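The constructor simply delegates to update!, which copies only the keys present in args, so partial updates leave other properties untouched. A standalone sketch of that pattern, re-implemented here for illustration (the real generated class also mixes in Core::Hashable and Core::JsonObjectSupport):

```ruby
# Standalone re-implementation of the initialize/update! pattern;
# not the generated class itself.
class AutoscalingConfigSketch
  attr_accessor :graceful_decommission_timeout, :remove_only_idle_workers,
                :scale_down_factor, :scale_down_min_worker_fraction,
                :scale_up_factor, :scale_up_min_worker_fraction

  def initialize(**args)
    update!(**args)
  end

  # Assign only the properties present in args, mirroring the
  # `if args.key?(...)` guards in the generated update!.
  def update!(**args)
    args.each { |k, v| public_send("#{k}=", v) if respond_to?("#{k}=") }
  end
end

config = AutoscalingConfigSketch.new(scale_up_factor: 0.5,
                                     graceful_decommission_timeout: '600s')
config.update!(scale_down_factor: 0.25) # scale_up_factor stays 0.5
```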