
Class ClusterConfig

The cluster config.

Inheritance
object
ClusterConfig
Implements
IDirectResponseSchema
Inherited Members
object.Equals(object)
object.Equals(object, object)
object.GetHashCode()
object.GetType()
object.MemberwiseClone()
object.ReferenceEquals(object, object)
object.ToString()
Namespace: Google.Apis.Dataproc.v1.Data
Assembly: Google.Apis.Dataproc.v1.dll
Syntax
public class ClusterConfig : IDirectResponseSchema
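
Example

The following sketch is illustrative and not part of the generated reference. It assembles a ClusterConfig and attaches it to a Cluster from the same Google.Apis.Dataproc.v1.Data namespace; the project ID, cluster name, zone, and image version are placeholder values.

using Google.Apis.Dataproc.v1.Data;

// Illustrative only: build a ClusterConfig from the nested config objects
// documented on this page and attach it to a Cluster for a create request.
var cluster = new Cluster
{
    ProjectId = "my-project",
    ClusterName = "example-cluster",
    Config = new ClusterConfig
    {
        // Shared Compute Engine settings; the zone is a placeholder.
        GceClusterConfig = new GceClusterConfig { ZoneUri = "us-central1-a" },
        // Cluster software settings; the image version is a placeholder.
        SoftwareConfig = new SoftwareConfig { ImageVersion = "2.2-debian12" }
    }
};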

Properties

AutoscalingConfig

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

Declaration
[JsonProperty("autoscalingConfig")]
public virtual AutoscalingConfig AutoscalingConfig { get; set; }
Property Value
Type Description
AutoscalingConfig
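
Example

A minimal, illustrative sketch: it assumes AutoscalingConfig exposes a PolicyUri property referencing an existing autoscaling policy, and the policy URI shown is a placeholder.

using Google.Apis.Dataproc.v1.Data;

// Illustrative: reference an existing autoscaling policy so the cluster autoscales.
// Leaving AutoscalingConfig unset means the cluster does not autoscale.
var config = new ClusterConfig
{
    AutoscalingConfig = new AutoscalingConfig
    {
        PolicyUri = "projects/my-project/regions/us-central1/autoscalingPolicies/my-policy"
    }
};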

AuxiliaryNodeGroups

Optional. The node group settings.

Declaration
[JsonProperty("auxiliaryNodeGroups")]
public virtual IList<AuxiliaryNodeGroup> AuxiliaryNodeGroups { get; set; }
Property Value
Type Description
IList<AuxiliaryNodeGroup>

ClusterType

Optional. The type of the cluster.

Declaration
[JsonProperty("clusterType")]
public virtual string ClusterType { get; set; }
Property Value
Type Description
string

ConfigBucket

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

Declaration
[JsonProperty("configBucket")]
public virtual string ConfigBucket { get; set; }
Property Value
Type Description
string
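
Example

An illustrative sketch of the expected value format; the bucket name is a placeholder.

using Google.Apis.Dataproc.v1.Data;

// Illustrative: ConfigBucket takes a bucket name, not a gs:// URI.
var config = new ClusterConfig
{
    ConfigBucket = "my-staging-bucket"
    // Not: ConfigBucket = "gs://my-staging-bucket"
};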

DataprocMetricConfig

Optional. The config for Dataproc metrics.

Declaration
[JsonProperty("dataprocMetricConfig")]
public virtual DataprocMetricConfig DataprocMetricConfig { get; set; }
Property Value
Type Description
DataprocMetricConfig

ETag

The ETag of the item.

Declaration
public virtual string ETag { get; set; }
Property Value
Type Description
string

EncryptionConfig

Optional. Encryption settings for the cluster.

Declaration
[JsonProperty("encryptionConfig")]
public virtual EncryptionConfig EncryptionConfig { get; set; }
Property Value
Type Description
EncryptionConfig

EndpointConfig

Optional. Port/endpoint configuration for this cluster.

Declaration
[JsonProperty("endpointConfig")]
public virtual EndpointConfig EndpointConfig { get; set; }
Property Value
Type Description
EndpointConfig

GceClusterConfig

Optional. The shared Compute Engine config settings for all instances in a cluster.

Declaration
[JsonProperty("gceClusterConfig")]
public virtual GceClusterConfig GceClusterConfig { get; set; }
Property Value
Type Description
GceClusterConfig

GkeClusterConfig

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

Declaration
[JsonProperty("gkeClusterConfig")]
public virtual GkeClusterConfig GkeClusterConfig { get; set; }
Property Value
Type Description
GkeClusterConfig

InitializationActions

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

Declaration
[JsonProperty("initializationActions")]
public virtual IList<NodeInitializationAction> InitializationActions { get; set; }
Property Value
Type Description
IList<NodeInitializationAction>
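
Example

An illustrative sketch that runs a startup script from Cloud Storage on every node; the script path is a placeholder, and any role-specific branching (master vs. worker) would live inside the script itself.

using System.Collections.Generic;
using Google.Apis.Dataproc.v1.Data;

// Illustrative: register one initialization action for the cluster.
var config = new ClusterConfig
{
    InitializationActions = new List<NodeInitializationAction>
    {
        new NodeInitializationAction { ExecutableFile = "gs://my-bucket/scripts/init.sh" }
    }
};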

LifecycleConfig

Optional. Lifecycle setting for the cluster.

Declaration
[JsonProperty("lifecycleConfig")]
public virtual LifecycleConfig LifecycleConfig { get; set; }
Property Value
Type Description
LifecycleConfig

MasterConfig

Optional. The Compute Engine config settings for the cluster's master instance.

Declaration
[JsonProperty("masterConfig")]
public virtual InstanceGroupConfig MasterConfig { get; set; }
Property Value
Type Description
InstanceGroupConfig

MetastoreConfig

Optional. Metastore configuration.

Declaration
[JsonProperty("metastoreConfig")]
public virtual MetastoreConfig MetastoreConfig { get; set; }
Property Value
Type Description
MetastoreConfig

SecondaryWorkerConfig

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

Declaration
[JsonProperty("secondaryWorkerConfig")]
public virtual InstanceGroupConfig SecondaryWorkerConfig { get; set; }
Property Value
Type Description
InstanceGroupConfig

SecurityConfig

Optional. Security settings for the cluster.

Declaration
[JsonProperty("securityConfig")]
public virtual SecurityConfig SecurityConfig { get; set; }
Property Value
Type Description
SecurityConfig

SoftwareConfig

Optional. The config settings for cluster software.

Declaration
[JsonProperty("softwareConfig")]
public virtual SoftwareConfig SoftwareConfig { get; set; }
Property Value
Type Description
SoftwareConfig

TempBucket

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

Declaration
[JsonProperty("tempBucket")]
public virtual string TempBucket { get; set; }
Property Value
Type Description
string

WorkerConfig

Optional. The Compute Engine config settings for the cluster's worker instances.

Declaration
[JsonProperty("workerConfig")]
public virtual InstanceGroupConfig WorkerConfig { get; set; }
Property Value
Type Description
InstanceGroupConfig
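
Example

An illustrative sketch that sizes the primary worker group; the instance count, machine type, and disk size are placeholders, and DiskConfig is assumed from the same Google.Apis.Dataproc.v1.Data namespace.

using Google.Apis.Dataproc.v1.Data;

// Illustrative: configure the primary worker instance group.
var config = new ClusterConfig
{
    WorkerConfig = new InstanceGroupConfig
    {
        NumInstances = 2,
        MachineTypeUri = "n1-standard-4",
        DiskConfig = new DiskConfig { BootDiskSizeGb = 100 }
    }
};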

Implements

IDirectResponseSchema