
Class SparkStandaloneAutoscalingConfig

Basic autoscaling configurations for Spark Standalone.

Inheritance
object
SparkStandaloneAutoscalingConfig
Implements
IDirectResponseSchema
Inherited Members
object.Equals(object)
object.Equals(object, object)
object.GetHashCode()
object.GetType()
object.MemberwiseClone()
object.ReferenceEquals(object, object)
object.ToString()
Namespace: Google.Apis.Dataproc.v1.Data
Assembly: Google.Apis.Dataproc.v1.dll
Syntax
public class SparkStandaloneAutoscalingConfig : IDirectResponseSchema
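
Example

A minimal sketch of populating this class, assuming the sibling AutoscalingPolicy and BasicAutoscalingAlgorithm classes from the same namespace; all values shown are illustrative, not recommendations.

using Google.Apis.Dataproc.v1.Data;

// Illustrative autoscaling settings; see the per-property bounds below.
var sparkStandaloneConfig = new SparkStandaloneAutoscalingConfig
{
    // Duration fields travel as strings of seconds, e.g. "600s".
    GracefulDecommissionTimeout = "600s",
    ScaleUpFactor = 0.5,
    ScaleDownFactor = 1.0,
    ScaleUpMinWorkerFraction = 0.0,
    ScaleDownMinWorkerFraction = 0.0,
    RemoveOnlyIdleWorkers = true
};

// The config is typically attached to an autoscaling policy through its
// basic algorithm.
var policy = new AutoscalingPolicy
{
    BasicAlgorithm = new BasicAutoscalingAlgorithm
    {
        SparkStandaloneConfig = sparkStandaloneConfig,
        CooldownPeriod = "120s"
    }
};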

Properties

ETag

The ETag of the item.

Declaration
public virtual string ETag { get; set; }
Property Value
Type Description
string

GracefulDecommissionTimeout

Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for Spark workers to complete Spark decommissioning tasks before forcefully removing them. Only applicable to downscaling operations. Bounds: 0s, 1d.

Declaration
[JsonProperty("gracefulDecommissionTimeout")]
public virtual object GracefulDecommissionTimeout { get; set; }
Property Value
Type Description
object
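
Example

The object type here reflects that the underlying API field is a protobuf Duration, which the JSON wire format represents as a string of seconds. A minimal sketch, assuming the "Ns" string form:

var config = new SparkStandaloneAutoscalingConfig
{
    // 600 seconds (10 minutes); must stay within the documented 0s-1d bounds.
    GracefulDecommissionTimeout = "600s"
};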

RemoveOnlyIdleWorkers

Optional. Remove only idle workers when scaling down the cluster.

Declaration
[JsonProperty("removeOnlyIdleWorkers")]
public virtual bool? RemoveOnlyIdleWorkers { get; set; }
Property Value
Type Description
bool?

ScaleDownFactor

Required. Fraction of required executors to remove from Spark Standalone clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.

Declaration
[JsonProperty("scaleDownFactor")]
public virtual double? ScaleDownFactor { get; set; }
Property Value
Type Description
double?
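
Example

The factor scales the magnitude of a recommended removal rather than triggering one. The sketch below is illustrative arithmetic only, not the service's exact algorithm, and the helper name is hypothetical:

using System;

// Hypothetical helper: the factor dampens the recommendation. With 10
// surplus executors, a factor of 1.0 removes all 10 (most aggressive),
// while a factor of 0.1 removes only 1 (least aggressive).
static int RecommendedRemoval(int surplusExecutors, double scaleDownFactor) =>
    (int)Math.Floor(surplusExecutors * scaleDownFactor);

Console.WriteLine(RecommendedRemoval(10, 1.0)); // 10
Console.WriteLine(RecommendedRemoval(10, 0.1)); // 1

ScaleUpFactor below applies the same dampening in the opposite direction when workers are added.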

ScaleDownMinWorkerFraction

Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.

Declaration
[JsonProperty("scaleDownMinWorkerFraction")]
public virtual double? ScaleDownMinWorkerFraction { get; set; }
Property Value
Type Description
double?

ScaleUpFactor

Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark Job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.

Declaration
[JsonProperty("scaleUpFactor")]
public virtual double? ScaleUpFactor { get; set; }
Property Value
Type Description
double?

ScaleUpMinWorkerFraction

Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.

Declaration
[JsonProperty("scaleUpMinWorkerFraction")]
public virtual double? ScaleUpMinWorkerFraction { get; set; }
Property Value
Type Description
double?
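
Example

Both minimum-worker-fraction properties act as a dead band that suppresses small recommendations. A sketch of the threshold check described above; the helper is hypothetical, not part of the library:

using System;

// A recommendation is applied only when it moves at least
// fraction * clusterSize workers, e.g. 2 of 20 at a fraction of 0.1.
static bool ShouldScale(int recommendedDelta, int clusterSize, double minWorkerFraction) =>
    Math.Abs(recommendedDelta) >= clusterSize * minWorkerFraction;

Console.WriteLine(ShouldScale(2, 20, 0.1)); // True: 2 >= 20 * 0.1
Console.WriteLine(ShouldScale(1, 20, 0.1)); // False: 1 < 20 * 0.1

The same check applies symmetrically to ScaleDownMinWorkerFraction above.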
