Class: Google::Apis::DataprocV1::SparkStandaloneAutoscalingConfig
- Inherits: Object
  - Object
  - Google::Apis::DataprocV1::SparkStandaloneAutoscalingConfig
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/dataproc_v1/classes.rb
  - lib/google/apis/dataproc_v1/representations.rb
Overview
Basic autoscaling configurations for Spark Standalone.
Instance Attribute Summary

- #graceful_decommission_timeout ⇒ String
  Required.
- #remove_only_idle_workers ⇒ Boolean (also: #remove_only_idle_workers?)
  Optional.
- #scale_down_factor ⇒ Float
  Required.
- #scale_down_min_worker_fraction ⇒ Float
  Optional.
- #scale_up_factor ⇒ Float
  Required.
- #scale_up_min_worker_fraction ⇒ Float
  Optional.
Instance Method Summary

- #initialize(**args) ⇒ SparkStandaloneAutoscalingConfig (constructor)
  A new instance of SparkStandaloneAutoscalingConfig.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStandaloneAutoscalingConfig
Returns a new instance of SparkStandaloneAutoscalingConfig.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8878

def initialize(**args)
  update!(**args)
end
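As an illustration of the constructor pattern, the sketch below (a stand-in class, not the gem's code; the attribute subset and class name are assumptions for brevity) mirrors the documented behavior: `#initialize` simply forwards its keyword arguments to `#update!`.

```ruby
# Illustrative sketch only, not Google::Apis::DataprocV1's implementation.
# Shows the keyword-args constructor pattern used throughout the gem:
# initialize forwards **args to update!, which assigns only the keys passed.
class AutoscalingConfigSketch
  attr_accessor :scale_up_factor, :scale_down_factor,
                :graceful_decommission_timeout

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
    @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
    @graceful_decommission_timeout = args[:graceful_decommission_timeout] if args.key?(:graceful_decommission_timeout)
  end
end

cfg = AutoscalingConfigSketch.new(
  scale_up_factor: 0.5,
  scale_down_factor: 1.0,
  graceful_decommission_timeout: '600s' # duration fields are strings
)
```

Unset attributes stay `nil`, so only the fields you pass are serialized intent.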
Instance Attribute Details
#graceful_decommission_timeout ⇒ String
Required. Timeout for graceful decommissioning of Spark workers. Specifies the
duration to wait for Spark workers to complete Spark decommissioning tasks
before forcefully removing them. Only applicable to downscaling operations.
Bounds: 0s, 1d.
Corresponds to the JSON property gracefulDecommissionTimeout
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8834

def graceful_decommission_timeout
  @graceful_decommission_timeout
end
#remove_only_idle_workers ⇒ Boolean Also known as: remove_only_idle_workers?
Optional. Remove only idle workers when scaling down the cluster.
Corresponds to the JSON property removeOnlyIdleWorkers
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8839

def remove_only_idle_workers
  @remove_only_idle_workers
end
#scale_down_factor ⇒ Float
Required. Fraction of required executors to remove from Spark Standalone
clusters. A scale-down factor of 1.0 will result in scaling down so that there
are no more executors for the Spark job (more aggressive scaling). A
scale-down factor closer to 0 will result in a smaller magnitude of scaling
down (less aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleDownFactor
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8849

def scale_down_factor
  @scale_down_factor
end
#scale_down_min_worker_fraction ⇒ Float
Optional. Minimum scale-down threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-down for the
cluster to scale. A threshold of 0 means the autoscaler will scale down on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleDownMinWorkerFraction
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8858

def scale_down_min_worker_fraction
  @scale_down_min_worker_fraction
end
#scale_up_factor ⇒ Float
Required. Fraction of required workers to add to Spark Standalone clusters. A
scale-up factor of 1.0 will result in scaling up so that there are no more
required workers for the Spark Job (more aggressive scaling). A scale-up
factor closer to 0 will result in a smaller magnitude of scaling up (less
aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleUpFactor
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8867

def scale_up_factor
  @scale_up_factor
end
#scale_up_min_worker_fraction ⇒ Float
Optional. Minimum scale-up threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-up for the
cluster to scale. A threshold of 0 means the autoscaler will scale up on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleUpMinWorkerFraction
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8876

def scale_up_min_worker_fraction
  @scale_up_min_worker_fraction
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 8883

def update!(**args)
  @graceful_decommission_timeout = args[:graceful_decommission_timeout] if args.key?(:graceful_decommission_timeout)
  @remove_only_idle_workers = args[:remove_only_idle_workers] if args.key?(:remove_only_idle_workers)
  @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  @scale_down_min_worker_fraction = args[:scale_down_min_worker_fraction] if args.key?(:scale_down_min_worker_fraction)
  @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
  @scale_up_min_worker_fraction = args[:scale_up_min_worker_fraction] if args.key?(:scale_up_min_worker_fraction)
end
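Each assignment in `update!` is guarded by `args.key?`, so passing a subset of keys updates only those attributes and leaves the rest untouched. The sketch below (a stand-in `Cfg` class with two assumed attributes, not the gem's code) demonstrates that partial-update behavior:

```ruby
# Illustrative class, not the gem's implementation: the args.key? guard
# means a second update! call with fewer keys does not reset the others.
class Cfg
  attr_accessor :scale_up_factor, :scale_down_factor

  def update!(**args)
    @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
    @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  end
end

c = Cfg.new
c.update!(scale_up_factor: 0.8, scale_down_factor: 1.0)
c.update!(scale_down_factor: 0.5) # scale_up_factor remains 0.8
```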