Types for Google Cloud Dataproc v1 API¶
- class google.cloud.dataproc_v1.types.AcceleratorConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies the type and number of accelerator cards attached to the instances of an instance group (see GPUs on Compute Engine).
- accelerator_type_uri¶
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.
Examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-t4
projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-t4
nvidia-tesla-t4
Auto Zone Exception: If you are using the Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-t4.
- Type
str
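As an illustrative sketch, the message can be built directly with the library's proto-plus types; the accelerator_count companion field value of 2 is an assumption for the example:

    from google.cloud import dataproc_v1

    # Request two NVIDIA T4 cards per instance; the short-name form is the
    # one required with Auto Zone Placement.
    accelerator = dataproc_v1.AcceleratorConfig(
        accelerator_type_uri="nvidia-tesla-t4",
        accelerator_count=2,  # illustrative count
    )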
- class google.cloud.dataproc_v1.types.AutoscalingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Autoscaling Policy config associated with the cluster.
- policy_uri¶
Optional. The autoscaling policy used by the cluster.
Only resource names that include project ID and location (region) are valid. Examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
Note that the policy must be in the same project and Dataproc region.
- Type
str
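A minimal sketch of referencing a policy by resource name; the project, region, and policy ID are placeholders:

    from google.cloud import dataproc_v1

    # Reference an existing policy in the same project and Dataproc region.
    autoscaling_config = dataproc_v1.AutoscalingConfig(
        policy_uri=(
            "projects/my-project/locations/us-central1/"
            "autoscalingPolicies/my-policy"
        )
    )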
- class google.cloud.dataproc_v1.types.AutoscalingPolicy(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes an autoscaling policy for Dataproc cluster autoscaler.
- id¶
Required. The policy id.
The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- Type
str
- name¶
Output only. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
For projects.regions.autoscalingPolicies, the resource name of the policy has the following format: projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}
For projects.locations.autoscalingPolicies, the resource name of the policy has the following format: projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}
- Type
str
- worker_config¶
Required. Describes how the autoscaler will operate for primary workers.
- secondary_worker_config¶
Optional. Describes how the autoscaler will operate for secondary workers.
- labels¶
Optional. The labels to associate with this autoscaling policy. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with an autoscaling policy.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
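A sketch of a complete policy definition, assuming the basic YARN algorithm described later in this page; all IDs, bounds, and durations are placeholders:

    from google.cloud import dataproc_v1
    from google.protobuf import duration_pb2

    policy = dataproc_v1.AutoscalingPolicy(
        id="my-policy",  # 3-50 chars: letters, numbers, underscores, hyphens
        worker_config=dataproc_v1.InstanceGroupAutoscalingPolicyConfig(
            min_instances=2,
            max_instances=20,
        ),
        basic_algorithm=dataproc_v1.BasicAutoscalingAlgorithm(
            cooldown_period=duration_pb2.Duration(seconds=240),
            yarn_config=dataproc_v1.BasicYarnAutoscalingConfig(
                graceful_decommission_timeout=duration_pb2.Duration(seconds=3600),
                scale_up_factor=0.5,
                scale_down_factor=1.0,
            ),
        ),
        labels={"env": "dev"},
    )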
- class google.cloud.dataproc_v1.types.AutotuningConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Autotuning configuration of the workload.
- scenarios¶
Optional. Scenarios for which tunings are applied.
- Type
MutableSequence[google.cloud.dataproc_v1.types.AutotuningConfig.Scenario]
- class Scenario(value)[source]¶
Bases:
proto.enums.Enum
Scenario represents a specific goal that autotuning will attempt to achieve by modifying workloads.
- Values:
- SCENARIO_UNSPECIFIED (0):
Default value.
- SCALING (2):
Scaling recommendations such as initialExecutors.
- BROADCAST_HASH_JOIN (3):
Adding hints for potential relation broadcasts.
- MEMORY (4):
Memory management for workloads.
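A short sketch of selecting tuning scenarios; in recent library versions the message is attached to a workload through its RuntimeConfig (an assumption to verify against your installed version):

    from google.cloud import dataproc_v1

    Scenario = dataproc_v1.AutotuningConfig.Scenario

    # Ask autotuning to pursue scaling and memory-management goals.
    autotuning = dataproc_v1.AutotuningConfig(
        scenarios=[Scenario.SCALING, Scenario.MEMORY]
    )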
- class google.cloud.dataproc_v1.types.AuxiliaryNodeGroup(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Node group identification and configuration information.
- node_group¶
Required. Node group configuration.
- class google.cloud.dataproc_v1.types.AuxiliaryServicesConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Auxiliary services configuration for a Cluster.
- metastore_config¶
Optional. The Hive Metastore configuration for this workload.
- spark_history_server_config¶
Optional. The Spark History Server configuration for the workload.
- class google.cloud.dataproc_v1.types.BasicAutoscalingAlgorithm(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Basic algorithm for autoscaling.
- cooldown_period¶
Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed.
Bounds: [2m, 1d]. Default: 2m.
- class google.cloud.dataproc_v1.types.BasicYarnAutoscalingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Basic autoscaling configurations for YARN.
- graceful_decommission_timeout¶
Required. Timeout for YARN graceful decommissioning of Node Managers. Specifies the duration to wait for jobs to complete before forcefully removing workers (and potentially interrupting jobs). Only applicable to downscaling operations.
Bounds: [0s, 1d].
- scale_up_factor¶
Required. Fraction of average YARN pending memory in the last cooldown period for which to add workers. A scale-up factor of 1.0 will result in scaling up so that there is no pending memory remaining after the update (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). See How autoscaling works for more information.
Bounds: [0.0, 1.0].
- Type
float
- scale_down_factor¶
Required. Fraction of average YARN pending memory in the last cooldown period for which to remove workers. A scale-down factor of 1 will result in scaling down so that there is no available memory remaining after the update (more aggressive scaling). A scale-down factor of 0 disables removing workers, which can be beneficial for autoscaling a single job. See How autoscaling works for more information.
Bounds: [0.0, 1.0].
- Type
float
- scale_up_min_worker_fraction¶
Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change.
Bounds: [0.0, 1.0]. Default: 0.0.
- Type
float
- scale_down_min_worker_fraction¶
Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2 worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change.
Bounds: [0.0, 1.0]. Default: 0.0.
- Type
float
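To make the factors concrete, here is a simplified sketch of the documented scale-up heuristic (not the exact autoscaler algorithm; all numbers are hypothetical):

    # Roughly: add enough workers to absorb the fraction of average pending
    # YARN memory selected by scale_up_factor.
    avg_pending_memory_mb = 64 * 1024  # averaged over the cooldown period
    memory_per_worker_mb = 12 * 1024   # NodeManager memory on one worker
    scale_up_factor = 0.5

    workers_to_add = (avg_pending_memory_mb * scale_up_factor) / memory_per_worker_mb
    print(round(workers_to_add))  # ~3 workers recommended in this sketch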
- class google.cloud.dataproc_v1.types.Batch(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A representation of a batch workload in the service.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- uuid¶
Output only. A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- Type
str
- create_time¶
Output only. The time when the batch was created.
- runtime_info¶
Output only. Runtime information about batch execution.
- state¶
Output only. The state of the batch.
- state_message¶
Output only. Batch state details, such as a failure description if the state is FAILED.
- Type
str
- state_time¶
Output only. The time when the batch entered a current state.
- labels¶
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a batch.
- runtime_config¶
Optional. Runtime configuration for the batch execution.
- environment_config¶
Optional. Environment configuration for the batch execution.
- state_history¶
Output only. Historical state information for the batch.
- Type
MutableSequence[google.cloud.dataproc_v1.types.Batch.StateHistory]
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class State(value)[source]¶
Bases:
proto.enums.Enum
The batch state.
- Values:
- STATE_UNSPECIFIED (0):
The batch state is unknown.
- PENDING (1):
The batch is created before running.
- RUNNING (2):
The batch is running.
- CANCELLING (3):
The batch is cancelling.
- CANCELLED (4):
The batch cancellation was successful.
- SUCCEEDED (5):
The batch completed successfully.
- FAILED (6):
The batch is no longer running due to an error.
- class StateHistory(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Historical state information.
- state¶
Output only. The state of the batch at this point in history.
- state_start_time¶
Output only. The time when the batch entered the historical state.
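A sketch of submitting a batch and waiting for a terminal state; the regional endpoint, project, bucket, and IDs are placeholders:

    from google.cloud import dataproc_v1

    client = dataproc_v1.BatchControllerClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )
    operation = client.create_batch(
        parent="projects/my-project/locations/us-central1",
        batch=dataproc_v1.Batch(
            pyspark_batch=dataproc_v1.PySparkBatch(
                main_python_file_uri="gs://my-bucket/job.py"
            ),
            labels={"env": "dev"},
        ),
        batch_id="my-batch-0001",
    )
    batch = operation.result()  # blocks until the batch reaches a terminal state
    print(batch.state.name, batch.uuid)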
- class google.cloud.dataproc_v1.types.BatchOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata describing the Batch operation.
- create_time¶
The time when the operation was created.
- done_time¶
The time when the operation finished.
- operation_type¶
The operation type.
- class BatchOperationType(value)[source]¶
Bases:
proto.enums.Enum
Operation type for Batch resources
- Values:
- BATCH_OPERATION_TYPE_UNSPECIFIED (0):
Batch operation type is unknown.
- BATCH (1):
Batch operation type.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.CancelJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to cancel a job.
- class google.cloud.dataproc_v1.types.Cluster(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Describes the identifying information, config, and status of a Dataproc cluster
- cluster_name¶
Required. The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- Type
str
- config¶
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated.
Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- virtual_cluster_config¶
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster. Dataproc may set default values, and values may change when clusters are updated. Exactly one of [config][google.cloud.dataproc.v1.Cluster.config] or [virtual_cluster_config][google.cloud.dataproc.v1.Cluster.virtual_cluster_config] must be specified.
- labels¶
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a cluster.
- status¶
Output only. Cluster status.
- status_history¶
Output only. The previous cluster status.
- Type
MutableSequence[google.cloud.dataproc_v1.types.ClusterStatus]
- cluster_uuid¶
Output only. A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Type
str
- metrics¶
Output only. Contains cluster daemon metrics such as HDFS and YARN stats.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.ClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The cluster config.
- config_bucket¶
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://… URI to a Cloud Storage bucket.
- Type
str
- temp_bucket¶
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://… URI to a Cloud Storage bucket.
- Type
str
- gce_cluster_config¶
Optional. The shared Compute Engine config settings for all instances in a cluster.
- master_config¶
Optional. The Compute Engine config settings for the cluster’s master instance.
- worker_config¶
Optional. The Compute Engine config settings for the cluster’s worker instances.
- secondary_worker_config¶
Optional. The Compute Engine config settings for a cluster’s secondary worker instances
- software_config¶
Optional. The config settings for cluster software.
- initialization_actions¶
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node’s role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

    ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
    if [[ "${ROLE}" == 'Master' ]]; then
      ... master specific actions ...
    else
      ... worker specific actions ...
    fi
- Type
MutableSequence[google.cloud.dataproc_v1.types.NodeInitializationAction]
- encryption_config¶
Optional. Encryption settings for the cluster.
- autoscaling_config¶
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- security_config¶
Optional. Security settings for the cluster.
- lifecycle_config¶
Optional. Lifecycle setting for the cluster.
- endpoint_config¶
Optional. Port/endpoint configuration for this cluster
- metastore_config¶
Optional. Metastore configuration.
- dataproc_metric_config¶
Optional. The config for Dataproc metrics.
- auxiliary_node_groups¶
Optional. The node group settings.
- Type
MutableSequence[google.cloud.dataproc_v1.types.AuxiliaryNodeGroup]
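A sketch tying several of these fields together; the cluster name, machine types, and init-script URI are placeholders:

    from google.cloud import dataproc_v1

    cluster = dataproc_v1.Cluster(
        cluster_name="example-cluster",
        config=dataproc_v1.ClusterConfig(
            config_bucket="my-staging-bucket",  # bucket name, not a gs:// URI
            master_config=dataproc_v1.InstanceGroupConfig(
                num_instances=1,
                machine_type_uri="n1-standard-4",
            ),
            worker_config=dataproc_v1.InstanceGroupConfig(
                num_instances=2,
                machine_type_uri="n1-standard-4",
            ),
            initialization_actions=[
                dataproc_v1.NodeInitializationAction(
                    executable_file="gs://my-bucket/scripts/init.sh"
                )
            ],
        ),
    )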
- class google.cloud.dataproc_v1.types.ClusterMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Contains cluster daemon metrics, such as HDFS and YARN stats.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- class HdfsMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class YarnMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.ClusterOperation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The cluster operation triggered by a workflow.
- class google.cloud.dataproc_v1.types.ClusterOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata describing the operation.
- status¶
Output only. Current operation status.
- status_history¶
Output only. The previous operation status.
- Type
MutableSequence[google.cloud.dataproc_v1.types.ClusterOperationStatus]
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.ClusterOperationStatus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The status of the operation.
- state¶
Output only. A message containing the operation state.
- state_start_time¶
Output only. The time this state was entered.
- class State(value)[source]¶
Bases:
proto.enums.Enum
The operation state.
- Values:
- UNKNOWN (0):
Unused.
- PENDING (1):
The operation has been created.
- RUNNING (2):
The operation is running.
- DONE (3):
The operation is done; either cancelled or completed.
- class google.cloud.dataproc_v1.types.ClusterSelector(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A selector that chooses the target cluster for jobs based on metadata.
- zone¶
Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster.
If unspecified, the zone of the first cluster matching the selector is used.
- Type
str
- cluster_labels¶
Required. The cluster labels. The cluster must have all of these labels to match.
- class ClusterLabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
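A minimal sketch; the zone and labels are placeholders, and a matching cluster must carry every listed label:

    from google.cloud import dataproc_v1

    selector = dataproc_v1.ClusterSelector(
        zone="us-central1-a",  # optional; does not influence cluster selection
        cluster_labels={"env": "staging", "team": "data"},
    )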
- class google.cloud.dataproc_v1.types.ClusterStatus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The status of a cluster and its instances.
- state¶
Output only. The cluster’s state.
- state_start_time¶
Output only. Time when this state was entered (see JSON representation of Timestamp).
- substate¶
Output only. Additional state information that includes status reported by the agent.
- class State(value)[source]¶
Bases:
proto.enums.Enum
The cluster state.
- Values:
- UNKNOWN (0):
The cluster state is unknown.
- CREATING (1):
The cluster is being created and set up. It is not ready for use.
- RUNNING (2):
The cluster is currently running and healthy. It is ready for use.
Note: The cluster state changes from “creating” to “running” status after the master node(s), first two primary worker nodes (and the last primary worker node if primary workers > 2) are running.
- ERROR (3):
The cluster encountered an error. It is not ready for use.
- ERROR_DUE_TO_UPDATE (9):
The cluster has encountered an error while being updated. Jobs can be submitted to the cluster, but the cluster cannot be updated.
- DELETING (4):
The cluster is being deleted. It cannot be used.
- UPDATING (5):
The cluster is being updated. It continues to accept and process jobs.
- STOPPING (6):
The cluster is being stopped. It cannot be used.
- STOPPED (7):
The cluster is currently stopped. It is not ready for use.
- STARTING (8):
The cluster is being started. It is not ready for use.
- REPAIRING (10):
The cluster is being repaired. It is not ready for use.
- class Substate(value)[source]¶
Bases:
proto.enums.Enum
The cluster substate.
- Values:
- UNSPECIFIED (0):
The cluster substate is unknown.
- UNHEALTHY (1):
The cluster is known to be in an unhealthy state (for example, critical daemons are not running or HDFS capacity is exhausted).
Applies to RUNNING state.
- STALE_STATUS (2):
The agent-reported status is out of date (may occur if Dataproc loses communication with Agent).
Applies to RUNNING state.
- class google.cloud.dataproc_v1.types.Component(value)[source]¶
Bases:
proto.enums.Enum
Cluster components that can be activated.
- Values:
- COMPONENT_UNSPECIFIED (0):
Unspecified component. Specifying this will cause Cluster creation to fail.
- ANACONDA (5):
The Anaconda component is no longer supported or applicable to [supported Dataproc on Compute Engine image versions] (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-version-clusters#supported-dataproc-image-versions). It cannot be activated on clusters created with supported Dataproc on Compute Engine image versions.
- DOCKER (13):
Docker
- DRUID (9):
The Druid query engine. (alpha)
- FLINK (14):
Flink
- HBASE (11):
HBase. (beta)
- HIVE_WEBHCAT (3):
The Hive Web HCatalog (the REST service for accessing HCatalog).
- HUDI (18):
Hudi.
- JUPYTER (1):
The Jupyter Notebook.
- PRESTO (6):
The Presto query engine.
- TRINO (17):
The Trino query engine.
- RANGER (12):
The Ranger service.
- SOLR (10):
The Solr service.
- ZEPPELIN (4):
The Zeppelin notebook.
- ZOOKEEPER (8):
The Zookeeper service.
- class google.cloud.dataproc_v1.types.ConfidentialInstanceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Confidential Instance Config for clusters using Confidential VMs
- class google.cloud.dataproc_v1.types.CreateAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create an autoscaling policy.
- parent¶
Required. The “resource name” of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
For projects.regions.autoscalingPolicies.create, the resource name of the region has the following format: projects/{project_id}/regions/{region}
For projects.locations.autoscalingPolicies.create, the resource name of the location has the following format: projects/{project_id}/locations/{location}
- Type
str
- policy¶
Required. The autoscaling policy to create.
- class google.cloud.dataproc_v1.types.CreateBatchRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create a batch workload.
- batch¶
Required. The batch to create.
- batch_id¶
Optional. The ID to use for the batch, which will become the final component of the batch’s resource name.
This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- Type
str
- request_id¶
Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequests with the same request_id, the second request is ignored and the Operation that corresponds to the first Batch created and stored in the backend is returned.
Recommendation: Set this value to a UUID.
The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
str
- class google.cloud.dataproc_v1.types.CreateClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create a cluster.
- project_id¶
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
- Type
str
- cluster¶
Required. The cluster to create.
- request_id¶
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.
It is recommended to always set this value to a UUID.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
str
- action_on_failed_primary_workers¶
Optional. Failure action when primary worker creation fails.
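A sketch of an idempotent create call; the request message also carries a region field, and the regional endpoint must match it (project, region, and names are placeholders):

    import uuid

    from google.cloud import dataproc_v1

    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )
    request = dataproc_v1.CreateClusterRequest(
        project_id="my-project",
        region="us-central1",
        cluster=dataproc_v1.Cluster(cluster_name="example-cluster"),
        request_id=str(uuid.uuid4()),  # retries with the same ID are deduplicated
    )
    operation = client.create_cluster(request=request)
    cluster = operation.result()  # waits for the long-running operation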
- class google.cloud.dataproc_v1.types.CreateNodeGroupRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create a node group.
- parent¶
Required. The parent resource where this node group will be created. Format:
projects/{project}/regions/{region}/clusters/{cluster}
- Type
str
- node_group¶
Required. The node group to create.
- node_group_id¶
Optional. An optional node group ID. Generated if not specified.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of 3 to 33 characters.
- Type
str
- request_id¶
Optional. A unique ID used to identify the request. If the server receives two CreateNodeGroupRequests with the same ID, the second request is ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.
Recommendation: Set this value to a UUID.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
str
- class google.cloud.dataproc_v1.types.CreateSessionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create a session.
- session¶
Required. The interactive session to create.
- session_id¶
Required. The ID to use for the session, which becomes the final component of the session’s resource name.
This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
- Type
str
- request_id¶
Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests with the same ID, the second request is ignored, and the first [Session][google.cloud.dataproc.v1.Session] is created and stored in the backend.
Recommendation: Set this value to a UUID.
The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
str
- class google.cloud.dataproc_v1.types.CreateSessionTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create a session template.
- session_template¶
Required. The session template to create.
- class google.cloud.dataproc_v1.types.CreateWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to create a workflow template.
- parent¶
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
For projects.regions.workflowTemplates.create, the resource name of the region has the following format: projects/{project_id}/regions/{region}
For projects.locations.workflowTemplates.create, the resource name of the location has the following format: projects/{project_id}/locations/{location}
- Type
str
- template¶
Required. The Dataproc workflow template to create.
- class google.cloud.dataproc_v1.types.DataprocMetricConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataproc metric config.
- metrics¶
Required. Metrics sources to enable.
- Type
MutableSequence[google.cloud.dataproc_v1.types.DataprocMetricConfig.Metric]
- class Metric(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc custom metric.
- metric_source¶
Required. A standard set of metrics is collected unless metricOverrides are specified for the metric source (see [Custom metrics] (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metric_overrides¶
Optional. Specify one or more [Custom metrics] (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any [Spark metric] (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified).
Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC
Use camelcase as appropriate.
Examples:
yarn:ResourceManager:QueueMetrics:AppsCompleted
spark:driver:DAGScheduler:job.allJobs
sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed
hiveserver2:JVM:Memory:NonHeapMemoryUsage.used
Notes:
Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- Type
MutableSequence[str]
- class MetricSource(value)[source]¶
Bases:
proto.enums.Enum
A source for the collection of Dataproc custom metrics (see [Custom metrics] (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics)).
- Values:
- METRIC_SOURCE_UNSPECIFIED (0):
Required unspecified metric source.
- MONITORING_AGENT_DEFAULTS (1):
Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
- HDFS (2):
HDFS metric source.
- SPARK (3):
Spark metric source.
- YARN (4):
YARN metric source.
- SPARK_HISTORY_SERVER (5):
Spark History Server metric source.
- HIVESERVER2 (6):
Hiveserver2 metric source.
- HIVEMETASTORE (7):
Hive Metastore metric source.
- FLINK (8):
Flink metric source.
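A sketch combining a default source with an overridden one; the override list reuses the documented example metric:

    from google.cloud import dataproc_v1

    MetricSource = dataproc_v1.DataprocMetricConfig.MetricSource

    metric_config = dataproc_v1.DataprocMetricConfig(
        metrics=[
            # Collect the standard YARN metric set.
            dataproc_v1.DataprocMetricConfig.Metric(
                metric_source=MetricSource.YARN
            ),
            # Overrides suppress the rest of the SPARK default set.
            dataproc_v1.DataprocMetricConfig.Metric(
                metric_source=MetricSource.SPARK,
                metric_overrides=["spark:driver:DAGScheduler:job.allJobs"],
            ),
        ]
    )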
- class google.cloud.dataproc_v1.types.DeleteAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to delete an autoscaling policy.
Autoscaling policies in use by one or more clusters will not be deleted.
- name¶
Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
For projects.regions.autoscalingPolicies.delete, the resource name of the policy has the following format: projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}
For projects.locations.autoscalingPolicies.delete, the resource name of the policy has the following format: projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}
- Type
str
- class google.cloud.dataproc_v1.types.DeleteBatchRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to delete a batch workload.
- class google.cloud.dataproc_v1.types.DeleteClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to delete a cluster.
- project_id¶
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
- Type
str
- cluster_uuid¶
Optional. Specifying the cluster_uuid means the RPC should fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist.
- Type
str
- request_id¶
Optional. A unique ID used to identify the request. If the server receives two DeleteClusterRequests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.
It is recommended to always set this value to a UUID.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
str
- class google.cloud.dataproc_v1.types.DeleteJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to delete a job.
- class google.cloud.dataproc_v1.types.DeleteSessionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to delete a session.
- request_id¶
Optional. A unique ID used to identify the request. If the service receives two DeleteSessionRequests with the same ID, the second request is ignored.
Recommendation: Set this value to a UUID.
The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
str
- class google.cloud.dataproc_v1.types.DeleteSessionTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to delete a session template.
- class google.cloud.dataproc_v1.types.DeleteWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to delete a workflow template.
Currently started workflows will remain running.
- name¶
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
For projects.regions.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
For projects.locations.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- Type
str
- class google.cloud.dataproc_v1.types.DiagnoseClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to collect cluster diagnostic information.
- project_id¶
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
- Type
str
- tarball_gcs_dir¶
Optional. The output Cloud Storage directory for the diagnostic tarball. If not specified, a task-specific directory in the cluster’s staging bucket will be used.
- Type
str
- tarball_access¶
Optional. The access type for the diagnostic tarball. If not specified, falls back to the default access of the bucket.
- diagnosis_interval¶
Optional. Time interval in which diagnosis should be carried out on the cluster.
- Type
google.type.interval_pb2.Interval
- jobs¶
Optional. Specifies a list of jobs on which diagnosis is to be performed. Format: projects/{project}/regions/{region}/jobs/{job}
- Type
MutableSequence[str]
- yarn_application_ids¶
Optional. Specifies a list of yarn applications on which diagnosis is to be performed.
- Type
MutableSequence[str]
- class TarballAccess(value)[source]¶
Bases:
proto.enums.Enum
Defines who has access to the diagnostic tarball
- Values:
- TARBALL_ACCESS_UNSPECIFIED (0):
Tarball Access unspecified. Falls back to default access of the bucket
- GOOGLE_CLOUD_SUPPORT (1):
Google Cloud Support group has read access to the diagnostic tarball
- GOOGLE_DATAPROC_DIAGNOSE (2):
Google Cloud Dataproc Diagnose service account has read access to the diagnostic tarball
- class google.cloud.dataproc_v1.types.DiagnoseClusterResults(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The location of diagnostic output.
- class google.cloud.dataproc_v1.types.DiskConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies the config of disk options for a group of VM instances.
- boot_disk_type¶
Optional. Type of the boot disk (default is “pd-standard”). Valid values: “pd-balanced” (Persistent Disk Balanced Solid State Drive), “pd-ssd” (Persistent Disk Solid State Drive), or “pd-standard” (Persistent Disk Hard Disk Drive). See Disk types.
- Type
str
- num_local_ssds¶
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
Note: Local SSD options may vary by machine type and number of vCPUs selected.
- Type
int
- local_ssd_interface¶
Optional. Interface type of local SSDs (default is “scsi”). Valid values: “scsi” (Small Computer System Interface), “nvme” (Non-Volatile Memory Express). See local SSD performance.
- Type
str
- boot_disk_provisioned_iops¶
Optional. Indicates how many IOPS to provision for the disk. This sets the number of I/O operations per second that the disk can handle. Note: This field is only supported if boot_disk_type is hyperdisk-balanced.
This field is a member of oneof _boot_disk_provisioned_iops.
- Type
int
- boot_disk_provisioned_throughput¶
Optional. Indicates how much throughput to provision for the disk. This sets the throughput in MB per second that the disk can handle. Values must be greater than or equal to 1. Note: This field is only supported if boot_disk_type is hyperdisk-balanced.
This field is a member of oneof _boot_disk_provisioned_throughput.
- Type
int
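A sketch of a disk shape for a worker group; the sizes and counts are placeholders, and boot_disk_size_gb is a companion field of this message:

    from google.cloud import dataproc_v1

    disk_config = dataproc_v1.DiskConfig(
        boot_disk_type="pd-ssd",
        boot_disk_size_gb=500,   # illustrative size
        num_local_ssds=2,        # runtime bulk data spreads across local SSDs
        local_ssd_interface="nvme",
    )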
- class google.cloud.dataproc_v1.types.DriverSchedulingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Driver scheduling configuration.
- class google.cloud.dataproc_v1.types.EncryptionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Encryption settings for the cluster.
- gce_pd_kms_key_name¶
Optional. The Cloud KMS key resource name to use for persistent disk encryption for all instances in the cluster. See [Use CMEK with cluster data] (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption#use_cmek_with_cluster_data) for more information.
- Type
str
- kms_key¶
Optional. The Cloud KMS key resource name to use for cluster persistent disk and job argument encryption. See [Use CMEK with cluster data] (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption#use_cmek_with_cluster_data) for more information.
When this key resource name is provided, the following job arguments of the following job types submitted to the cluster are encrypted using CMEK:
SparkSqlJob scriptVariables and queryList.queries
HiveJob scriptVariables and queryList.queries
PigJob scriptVariables and queryList.queries
PrestoJob scriptVariables and queryList.queries
- Type
str
- class google.cloud.dataproc_v1.types.EndpointConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Endpoint config for this cluster
- http_ports¶
Output only. The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_http_port_access¶
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- Type
bool
- class HttpPortsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.EnvironmentConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Environment configuration for a workload.
- execution_config¶
Optional. Execution configuration for a workload.
- peripherals_config¶
Optional. Peripherals configuration that the workload has access to.
- class google.cloud.dataproc_v1.types.ExecutionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Execution configuration for a workload.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- network_uri¶
Optional. Network URI to connect workload to.
This field is a member of oneof network.
- Type
str
- subnetwork_uri¶
Optional. Subnetwork URI to connect workload to.
This field is a member of oneof network.
- Type
str
- idle_ttl¶
Optional. Applies to sessions only. The duration to keep the session alive while it’s idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration). Defaults to 1 hour if not set. If both
ttl
andidle_ttl
are specified for an interactive session, the conditions are treated asOR
conditions: the workload will be terminated when it has been idle foridle_ttl
or whenttl
has been exceeded, whichever occurs first.
- ttl¶
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If
ttl
is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). Ifttl
is not specified for an interactive session, it defaults to 24 hours. Ifttl
is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If bothttl
andidle_ttl
are specified (for an interactive session), the conditions are treated asOR
conditions: the workload will be terminated when it has been idle foridle_ttl
or whenttl
has been exceeded, whichever occurs first.
- staging_bucket¶
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://… URI to a Cloud Storage bucket.
- Type
str
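A sketch for a batch workload; the subnetwork, bucket, and TTL are placeholders, and network_uri/subnetwork_uri share the network oneof, so at most one may be set:

    from google.cloud import dataproc_v1
    from google.protobuf import duration_pb2

    execution_config = dataproc_v1.ExecutionConfig(
        subnetwork_uri="projects/my-project/regions/us-central1/subnetworks/sub0",
        ttl=duration_pb2.Duration(seconds=4 * 3600),  # hard termination deadline
        staging_bucket="my-staging-bucket",  # bucket name, not a gs:// URI
    )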
- class google.cloud.dataproc_v1.types.FailureAction(value)[source]¶
Bases:
proto.enums.Enum
Actions in response to failure of a resource associated with a cluster.
- Values:
- FAILURE_ACTION_UNSPECIFIED (0):
When FailureAction is unspecified, failure action defaults to NO_ACTION.
- NO_ACTION (1):
Take no action on failure to create a cluster resource. NO_ACTION is the default.
- DELETE (2):
Delete the failed cluster resource.
- class google.cloud.dataproc_v1.types.FlinkJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache Flink applications on YARN.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- main_jar_file_uri¶
The HCFS URI of the jar file that contains the main class.
This field is a member of oneof driver.
- Type
str
- main_class¶
The name of the driver’s main class. The jar file that contains the class must be in the default CLASSPATH or specified in [jarFileUris][google.cloud.dataproc.v1.FlinkJob.jar_file_uris].
This field is a member of oneof driver.
- Type
str
- args¶
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- Type
MutableSequence[str]
- jar_file_uris¶
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
- Type
MutableSequence[str]
- savepoint_uri¶
Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
- Type
str
- properties¶
Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in
/etc/flink/conf/flink-defaults.conf
and classes in user code.
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
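A sketch of a Flink job definition; the class, jar, and cluster names are placeholders, and attaching it through a Job's flink_job field assumes a recent library version:

    from google.cloud import dataproc_v1

    flink_job = dataproc_v1.FlinkJob(
        # main_class and main_jar_file_uri share the "driver" oneof.
        main_class="com.example.WordCount",
        jar_file_uris=["gs://my-bucket/wordcount.jar"],
        args=["--input", "gs://my-bucket/input.txt"],
        properties={"parallelism.default": "4"},
    )
    job = dataproc_v1.Job(
        placement=dataproc_v1.JobPlacement(cluster_name="example-cluster"),
        flink_job=flink_job,
    )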
- class google.cloud.dataproc_v1.types.GceClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.
- zone_uri¶
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster’s Compute Engine region. On a get request, zone will always be present.
A full URL, partial URI, or short name are valid. Examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
projects/[project_id]/zones/[zone]
[zone]
- Type
str
- network_uri¶
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the “default” network of the project is used, if it exists. Cannot be a “Custom Subnet Network” (see Using Subnetworks for more information).
A full URL, partial URI, or short name are valid. Examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default
projects/[project_id]/global/networks/default
default
- Type
str
- subnetwork_uri¶
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.
A full URL, partial URI, or short name are valid. Examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0
projects/[project_id]/regions/[region]/subnetworks/sub0
sub0
- Type
str
- internal_ip_only¶
Optional. This setting applies to subnetwork-enabled networks. It is set to true by default in clusters created with image versions 2.2.x.
When set to true:
All cluster VMs have internal IP addresses.
[Google Private Access] (https://cloud.google.com/vpc/docs/private-google-access) must be enabled to access Dataproc and other Google Cloud APIs.
Off-cluster dependencies must be configured to be accessible without external IP addresses.
When set to false:
Cluster VMs are not restricted to internal IP addresses.
Ephemeral external IP addresses are assigned to each cluster VM.
This field is a member of oneof _internal_ip_only.
- Type
bool
- private_ipv6_google_access¶
Optional. The type of IPv6 access for a cluster.
- service_account¶
Optional. The Dataproc service account (also see VM Data Plane identity) used by Dataproc cluster VM instances to access Google Cloud Platform services.
If not specified, the Compute Engine default service account is used.
- Type
str
- service_account_scopes¶
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included:
If no scopes are specified, the following defaults are also provided:
- Type
MutableSequence[str]
- tags¶
The Compute Engine network tags to add to all instances (see Tagging instances).
- Type
MutableSequence[str]
- metadata¶
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata).
- reservation_affinity¶
Optional. Reservation Affinity for consuming Zonal reservation.
- node_group_affinity¶
Optional. Node Group Affinity for sole-tenant clusters.
- shielded_instance_config¶
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs.
- confidential_instance_config¶
Optional. Confidential Instance Config for clusters using Confidential VMs.
- class MetadataEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class PrivateIpv6GoogleAccess(value)[source]¶
Bases:
proto.enums.Enum
PrivateIpv6GoogleAccess controls whether and how Dataproc cluster nodes can communicate with Google Services through gRPC over IPv6. These values are directly mapped to corresponding values in the Compute Engine Instance fields.
- Values:
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED (0):
If unspecified, Compute Engine default behavior will apply, which is the same as [INHERIT_FROM_SUBNETWORK][google.cloud.dataproc.v1.GceClusterConfig.PrivateIpv6GoogleAccess.INHERIT_FROM_SUBNETWORK].
- INHERIT_FROM_SUBNETWORK (1):
Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- OUTBOUND (2):
Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- BIDIRECTIONAL (3):
Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
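A sketch of shared VM settings; every name here is a placeholder, and internal_ip_only=True assumes Private Google Access is enabled on the subnetwork:

    from google.cloud import dataproc_v1

    gce_config = dataproc_v1.GceClusterConfig(
        zone_uri="us-central1-a",
        subnetwork_uri="projects/my-project/regions/us-central1/subnetworks/sub0",
        internal_ip_only=True,  # no ephemeral external IPs on cluster VMs
        service_account="dataproc-vm@my-project.iam.gserviceaccount.com",
        tags=["dataproc-cluster"],
        metadata={"custom-key": "custom-value"},
    )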
- class google.cloud.dataproc_v1.types.GetAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to fetch an autoscaling policy.
- name¶
Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
For projects.regions.autoscalingPolicies.get, the resource name of the policy has the following format: projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}
For projects.locations.autoscalingPolicies.get, the resource name of the policy has the following format: projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}
- Type
str
- class google.cloud.dataproc_v1.types.GetBatchRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to get the resource representation for a batch workload.
- class google.cloud.dataproc_v1.types.GetClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Request to get the resource representation for a cluster in a project.
- project_id¶
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
- Type
str
- class google.cloud.dataproc_v1.types.GetJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to get the resource representation for a job in a project.
- class google.cloud.dataproc_v1.types.GetNodeGroupRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to get a node group.
- class google.cloud.dataproc_v1.types.GetSessionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to get the resource representation for a session.
- class google.cloud.dataproc_v1.types.GetSessionTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to get the resource representation for a session template.
- class google.cloud.dataproc_v1.types.GetWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to fetch a workflow template.
- name¶
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
For projects.regions.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
For projects.locations.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- Type
str
- class google.cloud.dataproc_v1.types.GkeClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The cluster’s GKE config.
- gke_cluster_target¶
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: ‘projects/{project}/locations/{location}/clusters/{cluster_id}’
- Type
str
- node_pool_target¶
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT [GkeNodePoolTarget.Role][google.cloud.dataproc.v1.GkeNodePoolTarget.Role]. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- Type
MutableSequence[google.cloud.dataproc_v1.types.GkeNodePoolTarget]
- class google.cloud.dataproc_v1.types.GkeNodePoolConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The configuration of a GKE node pool used by a Dataproc-on-GKE cluster.
- config¶
Optional. The node pool configuration.
- locations¶
Optional. The list of Compute Engine zones where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.
Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region.
If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- Type
MutableSequence[str]
- autoscaling¶
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- class GkeNodeConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Parameters that describe cluster nodes.
- machine_type¶
Optional. The name of a Compute Engine machine type.
- Type
str
- local_ssd_count¶
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs).
- Type
int
- preemptible¶
Optional. Whether the nodes are created as legacy [preemptible VM instances] (https://cloud.google.com/compute/docs/instances/preemptible). Also see [Spot][google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodeConfig.spot] VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER [role] (/dataproc/docs/reference/rest/v1/projects.regions.clusters#role) or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Type
bool
- accelerators¶
Optional. A list of hardware accelerators to attach to each node.
- Type
MutableSequence[google.cloud.dataproc_v1.types.GkeNodePoolConfig.GkeNodePoolAcceleratorConfig]
- min_cpu_platform¶
Optional. Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as “Intel Haswell” or “Intel Sandy Bridge”.
- Type
str
- boot_disk_kms_key¶
Optional. The [Customer Managed Encryption Key (CMEK)] (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME.
- Type
str
- spot¶
Optional. Whether the nodes are created as [Spot VM instances] (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy [preemptible VMs][google.cloud.dataproc.v1.GkeNodePoolConfig.GkeNodeConfig.preemptible]. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Type
bool
- class GkeNodePoolAcceleratorConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A GkeNodePoolAcceleratorConfig represents a Hardware Accelerator request for a node pool.
- gpu_partition_size¶
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide.
- Type
str
- class GkeNodePoolAutoscalingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
GkeNodePoolAutoscaling contains information the cluster autoscaler needs to adjust the size of the node pool to the current cluster usage.
- min_node_count¶
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- Type
int
- class google.cloud.dataproc_v1.types.GkeNodePoolTarget(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
GKE node pools that Dataproc workloads run on.
- node_pool¶
Required. The target GKE node pool. Format: ‘projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}’
- Type
str
- roles¶
Required. The roles associated with the GKE node pool.
- Type
MutableSequence[google.cloud.dataproc_v1.types.GkeNodePoolTarget.Role]
- node_pool_config¶
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail.
If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values.
This is an input only field. It will not be returned by the API.
- class Role(value)[source]¶
Bases:
proto.enums.Enum
Role specifies the tasks that will run on the node pool. Roles can be specific to workloads. Exactly one [GkeNodePoolTarget][google.cloud.dataproc.v1.GkeNodePoolTarget] within the virtual cluster must have the DEFAULT role, which is used to run all workloads that are not associated with a node pool.
- Values:
- ROLE_UNSPECIFIED (0):
Role is unspecified.
- DEFAULT (1):
At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- CONTROLLER (2):
Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SPARK_DRIVER (3):
Run work associated with a Spark driver of a job.
- SPARK_EXECUTOR (4):
Run work associated with a Spark executor of a job.
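A minimal sketch of declaring a node pool target in Python, assuming hypothetical project, location, cluster, and node pool names; exactly one target in the virtual cluster must carry the DEFAULT role:

from google.cloud import dataproc_v1

# Hypothetical node pool resource name; replace with your own values.
node_pool = (
    "projects/my-project/locations/us-central1/"
    "clusters/my-gke-cluster/nodePools/default-pool"
)

# A DEFAULT-role target; workloads not tied to another role run here.
target = dataproc_v1.GkeNodePoolTarget(
    node_pool=node_pool,
    roles=[dataproc_v1.GkeNodePoolTarget.Role.DEFAULT],
)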
- class google.cloud.dataproc_v1.types.HadoopJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache Hadoop MapReduce jobs on Apache Hadoop YARN.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- main_jar_file_uri¶
The HCFS URI of the jar file containing the main class. Examples:
‘gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar’
‘hdfs:/tmp/test-samples/custom-wordcount.jar’
‘file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar’
This field is a member of oneof
driver
.
- Type
- main_class¶
The name of the driver’s main class. The jar file containing the class must be in the default CLASSPATH or specified in
jar_file_uris
.
This field is a member of oneof
driver
.
- Type
- args¶
Optional. The arguments to pass to the driver. Do not include arguments, such as
-libjars
or -Dfoo=bar
, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- Type
MutableSequence[str]
- jar_file_uris¶
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- Type
MutableSequence[str]
- file_uris¶
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- Type
MutableSequence[str]
- archive_uris¶
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types:
.jar, .tar, .tar.gz, .tgz, or .zip.
- Type
MutableSequence[str]
- properties¶
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in
/etc/hadoop/conf/*-site
and classes in user code.
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
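As an illustration, a HadoopJob can be built with either main_jar_file_uri or main_class; they are members of the same oneof, so setting one clears the other. A minimal sketch with hypothetical URIs and property values:

from google.cloud import dataproc_v1

# Setting main_jar_file_uri here; assigning main_class instead would
# clear it, since both belong to the `driver` oneof.
hadoop_job = dataproc_v1.HadoopJob(
    main_jar_file_uri="gs://my-bucket/wordcount.jar",  # hypothetical jar
    args=["gs://my-bucket/input/", "gs://my-bucket/output/"],
    properties={"mapreduce.job.reduces": "2"},
)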
- class google.cloud.dataproc_v1.types.HiveJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache Hive queries on YARN.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- query_file_uri¶
The HCFS URI of the script that contains Hive queries.
This field is a member of oneof
queries
.
- Type
- continue_on_failure¶
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Type
- script_variables¶
Optional. Mapping of query variable names to values (equivalent to the Hive command:
SET name="value";
).
- properties¶
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in
/etc/hadoop/conf/*-site.xml
, /etc/hive/conf/hive-site.xml, and classes in user code.
- jar_file_uris¶
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Type
MutableSequence[str]
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class ScriptVariablesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
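A sketch of a HiveJob that reads queries from a script and substitutes variables, assuming a hypothetical bucket path:

from google.cloud import dataproc_v1

hive_job = dataproc_v1.HiveJob(
    query_file_uri="gs://my-bucket/queries/report.hql",  # hypothetical script
    # Equivalent to running `SET run_date="2023-01-01";` before the script.
    script_variables={"run_date": "2023-01-01"},
    continue_on_failure=False,
)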
- class google.cloud.dataproc_v1.types.IdentityConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Identity related configuration, including service account based secure multi-tenancy user mappings.
- user_service_account_mapping¶
Required. Map of user to service account.
- class UserServiceAccountMappingEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.InstanceFlexibilityPolicy(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
- provisioning_model_mix¶
Optional. Defines how the Group selects the provisioning model to ensure required reliability.
- instance_selection_list¶
Optional. List of instance selection options that the group will use when creating new VMs.
- Type
MutableSequence[google.cloud.dataproc_v1.types.InstanceFlexibilityPolicy.InstanceSelection]
- instance_selection_results¶
Output only. A list of instance selection results in the group.
- Type
MutableSequence[google.cloud.dataproc_v1.types.InstanceFlexibilityPolicy.InstanceSelectionResult]
- class InstanceSelection(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Defines machines types and a rank to which the machines types belong.
- rank¶
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- Type
- class InstanceSelectionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Defines a mapping from machine types to the number of VMs that are created with each machine type.
- class ProvisioningModelMix(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Defines how Dataproc should create VMs with a mixture of provisioning models.
- standard_capacity_base¶
Optional. The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity you need. Dataproc will create only standard VMs until it reaches standard_capacity_base, then it will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. For example, if 15 instances are requested and standard_capacity_base is 5, Dataproc will create 5 standard VMs and then start mixing spot and standard VMs for the remaining 10 instances.
This field is a member of oneof
_standard_capacity_base
.
- Type
- standard_capacity_percent_above_base¶
Optional. The percentage of target capacity that should use Standard VMs. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. For example, if 15 instances are requested, standard_capacity_base is 5, and standard_capacity_percent_above_base is 30, Dataproc will create 5 standard VMs and then start mixing spot and standard VMs for the remaining 10 instances. The mix will be 30% standard and 70% spot.
This field is a member of oneof
_standard_capacity_percent_above_base
.
- Type
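To make the standard/Spot arithmetic concrete, the following sketch requests the mix described above (a base of 5, then 30% standard above the base); with 15 requested instances this yields 5 + 3 = 8 standard VMs and 7 Spot VMs:

from google.cloud import dataproc_v1

flexibility_policy = dataproc_v1.InstanceFlexibilityPolicy(
    provisioning_model_mix=dataproc_v1.InstanceFlexibilityPolicy.ProvisioningModelMix(
        standard_capacity_base=5,                 # first 5 VMs are standard
        standard_capacity_percent_above_base=30,  # 30% standard above the base
    ),
)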
- class google.cloud.dataproc_v1.types.InstanceGroupAutoscalingPolicyConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration for the size bounds of an instance group, including its proportional size to other groups.
- min_instances¶
Optional. Minimum number of instances for this group.
Primary workers - Bounds: [2, max_instances]. Default: 2. Secondary workers - Bounds: [0, max_instances]. Default: 0.
- Type
- max_instances¶
Required. Maximum number of instances for this group. Required for primary workers. Note that by default, clusters will not use secondary workers. Required for secondary workers if the minimum secondary instances is set.
Primary workers - Bounds: [min_instances, ). Secondary workers - Bounds: [min_instances, ). Default: 0.
- Type
- weight¶
Optional. Weight for the instance group, which is used to determine the fraction of total workers in the cluster from this instance group. For example, if primary workers have weight 2, and secondary workers have weight 1, the cluster will have approximately 2 primary workers for each secondary worker.
The cluster may not reach the specified balance if constrained by min/max bounds or other autoscaling settings. For example, if
max_instances
for secondary workers is 0, then only primary workers will be added. The cluster can also be out of balance when created.
If weight is not set on any instance group, the cluster will default to equal weight for all groups: the cluster will attempt to maintain an equal number of workers in each group within the configured size bounds for each group. If weight is set for one group only, the cluster will default to zero weight on the unset group. For example if weight is set only on primary workers, the cluster will use primary workers only and no secondary workers.
- Type
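A sketch of the 2:1 weighting described above, assuming an autoscaling policy in which primary workers should be roughly twice as numerous as secondary workers (bounds are hypothetical):

from google.cloud import dataproc_v1

primary = dataproc_v1.InstanceGroupAutoscalingPolicyConfig(
    min_instances=2,
    max_instances=20,
    weight=2,  # aim for ~2 primary workers per secondary worker
)
secondary = dataproc_v1.InstanceGroupAutoscalingPolicyConfig(
    min_instances=0,
    max_instances=50,
    weight=1,
)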
- class google.cloud.dataproc_v1.types.InstanceGroupConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The config settings for Compute Engine resources in an instance group, such as a master or worker group.
- num_instances¶
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Type
- instance_names¶
Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- Type
MutableSequence[str]
- instance_references¶
Output only. List of references to Compute Engine instances.
- Type
MutableSequence[google.cloud.dataproc_v1.types.InstanceReference]
- image_uri¶
Optional. The Compute Engine image resource used for cluster instances.
The URI can represent an image or image family.
Image examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id]
projects/[project_id]/global/images/[image-id]
image-id
Image family examples. Dataproc will use the most recent image from the family:
https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name]
projects/[project_id]/global/images/family/[custom-image-family-name]
If the URI is unspecified, it will be inferred from
SoftwareConfig.image_version
or the system default.
- Type
- machine_type_uri¶
Optional. The Compute Engine machine type used for cluster instances.
A full URL, partial URI, or short name are valid. Examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2
projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2
n1-standard-2
Auto Zone Exception: If you are using the Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example,
n1-standard-2
.
- Type
- disk_config¶
Optional. Disk option config settings.
- is_preemptible¶
Output only. Specifies that this instance group contains preemptible instances.
- Type
- preemptibility¶
Optional. Specifies the preemptibility of the instance group.
The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.
The default value for secondary instances is PREEMPTIBLE.
- managed_group_config¶
Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- accelerators¶
Optional. The Compute Engine accelerator configuration for these instances.
- Type
MutableSequence[google.cloud.dataproc_v1.types.AcceleratorConfig]
- min_cpu_platform¶
Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform.
- Type
- min_num_instances¶
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number.
Example: Cluster creation request with num_instances = 5 and min_num_instances = 3:
If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state.
If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- Type
- instance_flexibility_policy¶
Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
- startup_config¶
Optional. Configuration to handle the startup of instances during cluster create and update process.
- class Preemptibility(value)[source]¶
Bases:
proto.enums.Enum
Controls the use of preemptible instances within the group.
- Values:
- PREEMPTIBILITY_UNSPECIFIED (0):
Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NON_PREEMPTIBLE (1):
Instances are non-preemptible.
This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- PREEMPTIBLE (2):
Instances are [preemptible] (https://cloud.google.com/compute/docs/instances/preemptible).
This option is allowed only for [secondary worker] (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- SPOT (3):
Instances are [Spot VMs] (https://cloud.google.com/compute/docs/instances/spot).
This option is allowed only for [secondary worker] (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of [preemptible VMs] (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
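Putting a few of these fields together, a sketch of a secondary worker group that uses Spot VMs; the sizes are hypothetical, and the short machine type name is the form required with Auto Zone Placement:

from google.cloud import dataproc_v1

secondary_workers = dataproc_v1.InstanceGroupConfig(
    num_instances=4,
    machine_type_uri="n1-standard-2",  # short name, valid with Auto Zone
    preemptibility=dataproc_v1.InstanceGroupConfig.Preemptibility.SPOT,
)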
- class google.cloud.dataproc_v1.types.InstanceReference(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A reference to a Compute Engine instance.
- class google.cloud.dataproc_v1.types.InstantiateInlineWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to instantiate an inline workflow template.
- parent¶
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
For
projects.regions.workflowTemplates.instantiateInline
, the resource name of the region has the following format:
projects/{project_id}/regions/{region}
For
projects.locations.workflowTemplates.instantiateInline
, the resource name of the location has the following format:
projects/{project_id}/locations/{location}
- Type
- template¶
Required. The workflow template to instantiate.
- request_id¶
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.
It is recommended to always set this value to a UUID.
The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
- class google.cloud.dataproc_v1.types.InstantiateWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to instantiate a workflow template.
- name¶
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
For
projects.regions.workflowTemplates.instantiate
, the resource name of the template has the following format:
projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
For
projects.locations.workflowTemplates.instantiate
, the resource name of the template has the following format:
projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- Type
- version¶
Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version.
This option cannot be used to instantiate a previous version of workflow template.
- Type
- request_id¶
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.
It is recommended to always set this value to a UUID.
The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
- parameters¶
Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 1000 characters.
- class ParametersEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
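A sketch of instantiating a template by resource name with parameter substitutions, assuming hypothetical project, region, template id, and parameter names:

import uuid

from google.cloud import dataproc_v1

region = "us-central1"  # hypothetical region
client = dataproc_v1.WorkflowTemplateServiceClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

operation = client.instantiate_workflow_template(
    request={
        "name": f"projects/my-project/regions/{region}/workflowTemplates/my-template",
        "parameters": {"INPUT_BUCKET": "gs://my-bucket/input/"},
        "request_id": str(uuid.uuid4()),  # guards against duplicate runs on retry
    }
)
operation.result()  # blocks until the workflow finishes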
- class google.cloud.dataproc_v1.types.Job(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job resource.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- reference¶
Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- placement¶
Required. Job information, including how, when, and where to run the job.
- status¶
Output only. The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
- status_history¶
Output only. The previous job status.
- Type
MutableSequence[google.cloud.dataproc_v1.types.JobStatus]
- yarn_applications¶
Output only. The collection of YARN applications spun up by this job.
Beta Feature: This report is available for testing purposes only. It might be changed before final release.
- Type
MutableSequence[google.cloud.dataproc_v1.types.YarnApplication]
- driver_output_resource_uri¶
Output only. A URI pointing to the location of the stdout of the job’s driver program.
- Type
- driver_control_files_uri¶
Output only. If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as
driver_output_uri
.- Type
- labels¶
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.
- scheduling¶
Optional. Job scheduling configuration.
- job_uuid¶
Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
- Type
- done¶
Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field will indicate if it was successful, failed, or cancelled.
- Type
- driver_scheduling_config¶
Optional. Driver scheduling configuration.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
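For context, a Job message is usually constructed and handed to the job controller. A minimal sketch that submits a hypothetical Hadoop job to an existing cluster:

from google.cloud import dataproc_v1

region = "us-central1"  # hypothetical region
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = dataproc_v1.Job(
    placement=dataproc_v1.JobPlacement(cluster_name="my-cluster"),
    hadoop_job=dataproc_v1.HadoopJob(
        main_jar_file_uri="gs://my-bucket/wordcount.jar"  # hypothetical jar
    ),
    labels={"env": "staging"},
)

submitted = client.submit_job(
    request={"project_id": "my-project", "region": region, "job": job}
)
print(submitted.reference.job_id)  # server-generated if not supplied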
- class google.cloud.dataproc_v1.types.JobMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Job Operation metadata.
- status¶
Output only. Most recent job status.
- start_time¶
Output only. Job submission time.
- class google.cloud.dataproc_v1.types.JobPlacement(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataproc job config.
- cluster_uuid¶
Output only. A cluster UUID generated by the Dataproc service when the job is submitted.
- Type
- cluster_labels¶
Optional. Cluster labels to identify a cluster where the job will be submitted.
- class ClusterLabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.JobReference(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Encapsulates the full scoping used to reference a job.
- project_id¶
Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- Type
- class google.cloud.dataproc_v1.types.JobScheduling(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Job scheduling options.
- max_failures_per_hour¶
Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed.
A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window.
Maximum value is 10.
Note: This restartable job option is not supported in Dataproc [workflow templates] (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
- Type
- max_failures_total¶
Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed.
Maximum value is 240.
Note: Currently, this restartable job option is not supported in Dataproc workflow templates.
- Type
- class google.cloud.dataproc_v1.types.JobStatus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataproc job status.
- state¶
Output only. A state message specifying the overall job state.
- details¶
Optional. Output only. Job state details, such as an error description if the state is
ERROR
.
- Type
- state_start_time¶
Output only. The time when this state was entered.
- substate¶
Output only. Additional state information, which includes status reported by the agent.
- class State(value)[source]¶
Bases:
proto.enums.Enum
The job state.
- Values:
- STATE_UNSPECIFIED (0):
The job state is unknown.
- PENDING (1):
The job is pending; it has been submitted, but is not yet running.
- SETUP_DONE (8):
Job has been received by the service and completed initial setup; it will soon be submitted to the cluster.
- RUNNING (2):
The job is running on the cluster.
- CANCEL_PENDING (3):
A CancelJob request has been received, but is pending.
- CANCEL_STARTED (7):
Transient in-flight resources have been canceled, and the request to cancel the running job has been issued to the cluster.
- CANCELLED (4):
The job cancellation was successful.
- DONE (5):
The job has completed successfully.
- ERROR (6):
The job has completed, but encountered an error.
- ATTEMPT_FAILURE (9):
Job attempt has failed. The detail field contains failure details for this attempt.
Applies to restartable jobs only.
- class Substate(value)[source]¶
Bases:
proto.enums.Enum
The job substate.
- Values:
- UNSPECIFIED (0):
The job substate is unknown.
- SUBMITTED (1):
The Job is submitted to the agent.
Applies to RUNNING state.
- QUEUED (2):
The Job has been received and is awaiting execution (it might be waiting for a condition to be met). See the “details” field for the reason for the delay.
Applies to RUNNING state.
- STALE_STATUS (3):
The agent-reported status is out of date, which can be caused by a loss of communication between the agent and Dataproc. If the agent does not send a timely update, the job will fail.
Applies to RUNNING state.
- class google.cloud.dataproc_v1.types.JupyterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Jupyter configuration for an interactive session.
- kernel¶
Optional. Kernel
- class Kernel(value)[source]¶
Bases:
proto.enums.Enum
Jupyter kernel types.
- Values:
- KERNEL_UNSPECIFIED (0):
The kernel is unknown.
- PYTHON (1):
Python kernel.
- SCALA (2):
Scala kernel.
- class google.cloud.dataproc_v1.types.KerberosConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies Kerberos related configuration.
- enable_kerberos¶
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Type
- root_principal_password_uri¶
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Type
- keystore_uri¶
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Type
- truststore_uri¶
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Type
- keystore_password_uri¶
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Type
- key_password_uri¶
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Type
- truststore_password_uri¶
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Type
- cross_realm_trust_realm¶
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- Type
- cross_realm_trust_kdc¶
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Type
- cross_realm_trust_admin_server¶
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Type
- cross_realm_trust_shared_password_uri¶
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Type
- kdc_db_key_uri¶
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Type
- tgt_lifetime_hours¶
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
- Type
- class google.cloud.dataproc_v1.types.KubernetesClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The configuration for running the Dataproc cluster on Kubernetes.
- kubernetes_namespace¶
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- Type
- gke_cluster_config¶
Required. The configuration for running the Dataproc cluster on GKE.
This field is a member of oneof
config
.
- kubernetes_software_config¶
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- class google.cloud.dataproc_v1.types.KubernetesSoftwareConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The software configuration for this Dataproc cluster running on Kubernetes.
- component_version¶
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties¶
The properties to set on daemon config files.
Property keys are specified in
prefix:property
format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings:
spark: spark-defaults.conf
For more information, see Cluster properties.
- class ComponentVersionEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.LifecycleConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies the cluster auto-delete schedule configuration.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- idle_delete_ttl¶
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration).
- auto_delete_time¶
Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp).
This field is a member of oneof
ttl
.
- auto_delete_ttl¶
Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration).
This field is a member of oneof
ttl
.
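A sketch of an auto-delete schedule, assuming a cluster that should be deleted after an hour of idleness or after an eight-hour lifetime; note that auto_delete_time and auto_delete_ttl are members of the same oneof, so only one of those two may be set:

from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

lifecycle = dataproc_v1.LifecycleConfig(
    idle_delete_ttl=duration_pb2.Duration(seconds=3600),      # 1 hour idle
    auto_delete_ttl=duration_pb2.Duration(seconds=8 * 3600),  # 8 hour lifetime
)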
- class google.cloud.dataproc_v1.types.ListAutoscalingPoliciesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to list autoscaling policies in a project.
- parent¶
Required. The “resource name” of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
For
projects.regions.autoscalingPolicies.list
, the resource name of the region has the following format:
projects/{project_id}/regions/{region}
For
projects.locations.autoscalingPolicies.list
, the resource name of the location has the following format:
projects/{project_id}/locations/{location}
- Type
- page_size¶
Optional. The maximum number of results to return in each response. Must be less than or equal to 1000. Defaults to 100.
- Type
- class google.cloud.dataproc_v1.types.ListAutoscalingPoliciesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A response to a request to list autoscaling policies in a project.
- policies¶
Output only. Autoscaling policies list.
- Type
MutableSequence[google.cloud.dataproc_v1.types.AutoscalingPolicy]
- class google.cloud.dataproc_v1.types.ListBatchesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to list batch workloads in a project.
- page_size¶
Optional. The maximum number of batches to return in each response. The service may return fewer than this value. The default page size is 20; the maximum page size is 1000.
- Type
- page_token¶
Optional. A page token received from a previous
ListBatches
call. Provide this token to retrieve the subsequent page.
- Type
- filter¶
Optional. A filter for the batches to return in the response.
A filter is a logical expression constraining the values of various fields in each batch resource. Filters are case sensitive, and may contain multiple clauses combined with logical operators (AND/OR). Supported fields are
batch_id, batch_uuid, state, and create_time.
For example, state = RUNNING and create_time < "2023-01-01T00:00:00Z" filters for batches in state RUNNING that were created before 2023-01-01.
See https://google.aip.dev/assets/misc/ebnf-filtering.txt for a detailed description of the filter syntax and a list of supported comparisons.
- Type
- order_by¶
Optional. Field(s) on which to sort the list of batches.
Currently the only supported sort orders are unspecified (empty) and
create_time desc
to sort by most recently created batches first.
See https://google.aip.dev/132#ordering for more details.
- Type
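A sketch of listing recent failed batches with the filter and sort options above, assuming hypothetical project and region values:

from google.cloud import dataproc_v1

region = "us-central1"  # hypothetical region
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

page_result = client.list_batches(
    request={
        "parent": f"projects/my-project/locations/{region}",
        "filter": 'state = FAILED AND create_time > "2023-01-01T00:00:00Z"',
        "order_by": "create_time desc",
    }
)
for batch in page_result:  # the client pages through results transparently
    print(batch.name, batch.state)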
- class google.cloud.dataproc_v1.types.ListBatchesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of batch workloads.
- batches¶
The batches from the specified collection.
- Type
MutableSequence[google.cloud.dataproc_v1.types.Batch]
- next_page_token¶
A token, which can be sent as
page_token
to retrieve the next page. If this field is omitted, there are no subsequent pages.
- Type
- class google.cloud.dataproc_v1.types.ListClustersRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to list the clusters in a project.
- project_id¶
Required. The ID of the Google Cloud Platform project that the cluster belongs to.
- Type
- filter¶
Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax:
field = value [AND [field = value]] …
where field is one of status.state, clusterName, or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be one of the following: ACTIVE, INACTIVE, CREATING, RUNNING, ERROR, DELETING, UPDATING, STOPPING, or STOPPED. ACTIVE contains the CREATING, UPDATING, and RUNNING states. INACTIVE contains the DELETING, ERROR, STOPPING, and STOPPED states. clusterName is the name of the cluster provided at creation time. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator.
Example filter:
status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = *
- Type
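A sketch of applying such a filter through the cluster controller, with hypothetical project and region values:

from google.cloud import dataproc_v1

region = "us-central1"  # hypothetical region
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

for cluster in client.list_clusters(
    request={
        "project_id": "my-project",
        "region": region,
        "filter": "status.state = ACTIVE AND labels.env = staging",
    }
):
    print(cluster.cluster_name, cluster.status.state)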
- class google.cloud.dataproc_v1.types.ListClustersResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The list of all clusters in a project.
- clusters¶
Output only. The clusters in the project.
- Type
MutableSequence[google.cloud.dataproc_v1.types.Cluster]
- class google.cloud.dataproc_v1.types.ListJobsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to list jobs in a project.
- page_token¶
Optional. The page token, returned by a previous call, to request the next page of results.
- Type
- cluster_name¶
Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster.
- Type
- job_state_matcher¶
Optional. Specifies enumerated categories of jobs to list (default = match ALL jobs).
If filter is provided, jobStateMatcher will be ignored.
- filter¶
Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax:
[field = value] AND [field [= value]] …
where field is status.state or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be either ACTIVE or NON_ACTIVE. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator.
Example filter:
status.state = ACTIVE AND labels.env = staging AND labels.starred = *
- Type
- class JobStateMatcher(value)[source]¶
Bases:
proto.enums.Enum
A matcher that specifies categories of job states.
- Values:
- ALL (0):
Match all jobs, regardless of state.
- ACTIVE (1):
Only match jobs in non-terminal states: PENDING, RUNNING, or CANCEL_PENDING.
- NON_ACTIVE (2):
Only match jobs in terminal states: CANCELLED, DONE, or ERROR.
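A sketch that lists only active jobs on a particular cluster using the matcher, assuming hypothetical identifiers:

from google.cloud import dataproc_v1

region = "us-central1"  # hypothetical region
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

for job in client.list_jobs(
    request={
        "project_id": "my-project",
        "region": region,
        "cluster_name": "my-cluster",
        "job_state_matcher": dataproc_v1.ListJobsRequest.JobStateMatcher.ACTIVE,
    }
):
    print(job.reference.job_id, job.status.state)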
- class google.cloud.dataproc_v1.types.ListJobsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of jobs in a project.
- jobs¶
Output only. Jobs list.
- Type
MutableSequence[google.cloud.dataproc_v1.types.Job]
- next_page_token¶
Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the
page_token
in a subsequent ListJobsRequest.
- Type
- class google.cloud.dataproc_v1.types.ListSessionTemplatesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to list session templates in a project.
- page_size¶
Optional. The maximum number of sessions to return in each response. The service may return fewer than this value.
- Type
- page_token¶
Optional. A page token received from a previous
ListSessions
call. Provide this token to retrieve the subsequent page.
- Type
- class google.cloud.dataproc_v1.types.ListSessionTemplatesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of session templates.
- session_templates¶
Output only. Session template list
- Type
MutableSequence[google.cloud.dataproc_v1.types.SessionTemplate]
- class google.cloud.dataproc_v1.types.ListSessionsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to list sessions in a project.
- page_size¶
Optional. The maximum number of sessions to return in each response. The service may return fewer than this value.
- Type
- page_token¶
Optional. A page token received from a previous
ListSessions
call. Provide this token to retrieve the subsequent page.
- Type
- filter¶
Optional. A filter for the sessions to return in the response.
A filter is a logical expression constraining the values of various fields in each session resource. Filters are case sensitive, and may contain multiple clauses combined with logical operators (AND, OR). Supported fields are
session_id, session_uuid, state, create_time, and labels.
Example:
state = ACTIVE and create_time < "2023-01-01T00:00:00Z" is a filter for sessions in an ACTIVE state that were created before 2023-01-01.
state = ACTIVE and labels.environment=production is a filter for sessions in an ACTIVE state that have a production environment label.
See https://google.aip.dev/assets/misc/ebnf-filtering.txt for a detailed description of the filter syntax and a list of supported comparators.
- Type
- class google.cloud.dataproc_v1.types.ListSessionsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of interactive sessions.
- sessions¶
Output only. The sessions from the specified collection.
- Type
MutableSequence[google.cloud.dataproc_v1.types.Session]
- class google.cloud.dataproc_v1.types.ListWorkflowTemplatesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to list workflow templates in a project.
- parent¶
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
For
projects.regions.workflowTemplates.list
, the resource name of the region has the following format:
projects/{project_id}/regions/{region}
For
projects.locations.workflowTemplates.list
, the resource name of the location has the following format:
projects/{project_id}/locations/{location}
- Type
- class google.cloud.dataproc_v1.types.ListWorkflowTemplatesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A response to a request to list workflow templates in a project.
- templates¶
Output only. WorkflowTemplates list.
- Type
MutableSequence[google.cloud.dataproc_v1.types.WorkflowTemplate]
- next_page_token¶
Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListWorkflowTemplatesRequest.
- Type
- class google.cloud.dataproc_v1.types.LoggingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The runtime logging config of the job.
- driver_log_levels¶
The per-package log levels for the driver. This can include “root” package name to configure rootLogger. Examples:
‘com.google = FATAL’
‘root = INFO’
‘org.apache = DEBUG’
- Type
MutableMapping[str, google.cloud.dataproc_v1.types.LoggingConfig.Level]
- class DriverLogLevelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class Level(value)[source]¶
Bases:
proto.enums.Enum
The Log4j level for job execution. When running an Apache Hive job, Cloud Dataproc configures the Hive client to an equivalent verbosity level.
- Values:
- LEVEL_UNSPECIFIED (0):
Level is unspecified. Use default level for log4j.
- ALL (1):
Use ALL level for log4j.
- TRACE (2):
Use TRACE level for log4j.
- DEBUG (3):
Use DEBUG level for log4j.
- INFO (4):
Use INFO level for log4j.
- WARN (5):
Use WARN level for log4j.
- ERROR (6):
Use ERROR level for log4j.
- FATAL (7):
Use FATAL level for log4j.
- OFF (8):
Turn off log4j.
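A sketch of a logging config that keeps the root logger at INFO but turns on debug output for one package, mirroring the examples above:

from google.cloud import dataproc_v1

logging_config = dataproc_v1.LoggingConfig(
    driver_log_levels={
        "root": dataproc_v1.LoggingConfig.Level.INFO,
        "org.apache": dataproc_v1.LoggingConfig.Level.DEBUG,
    }
)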
- class google.cloud.dataproc_v1.types.ManagedCluster(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Cluster that is managed by the workflow.
- cluster_name¶
Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.
The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- Type
- config¶
Required. The cluster configuration.
- labels¶
Optional. The labels to associate with this cluster.
Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}
Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
No more than 32 labels can be associated with a given cluster.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.ManagedGroupConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies the resources used to actively manage an instance group.
- instance_template_name¶
Output only. The name of the Instance Template used for the Managed Instance Group.
- Type
- instance_group_manager_name¶
Output only. The name of the Instance Group Manager for this group.
- Type
- class google.cloud.dataproc_v1.types.MetastoreConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies a Metastore configuration.
- class google.cloud.dataproc_v1.types.NodeGroup(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Dataproc Node Group. The Dataproc NodeGroup resource is not related to the Dataproc [NodeGroupAffinity][google.cloud.dataproc.v1.NodeGroupAffinity] resource.
- name¶
The Node group resource name.
- Type
- roles¶
Required. Node group roles.
- Type
MutableSequence[google.cloud.dataproc_v1.types.NodeGroup.Role]
- node_group_config¶
Optional. The node group instance group configuration.
- labels¶
Optional. Node group labels.
Label keys must consist of 1 to 63 characters and conform to RFC 1035.
Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to [RFC 1035] (https://www.ietf.org/rfc/rfc1035.txt).
The node group must have no more than 32 labels.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class Role(value)[source]¶
Bases:
proto.enums.Enum
Node pool roles.
- Values:
- ROLE_UNSPECIFIED (0):
Required unspecified role.
- DRIVER (1):
Job drivers run on the node pool.
- class google.cloud.dataproc_v1.types.NodeGroupAffinity(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Node Group Affinity for clusters using sole-tenant node groups. The Dataproc NodeGroupAffinity resource is not related to the Dataproc [NodeGroup][google.cloud.dataproc.v1.NodeGroup] resource.
- node_group_uri¶
Required. The URI of a sole-tenant node group resource that the cluster will be created on.
A full URL, partial URI, or node group name are valid. Examples:
https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1
projects/[project_id]/zones/[zone]/nodeGroups/node-group-1
node-group-1
- Type
- class google.cloud.dataproc_v1.types.NodeGroupOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata describing the node group operation.
- status¶
Output only. Current operation status.
- status_history¶
Output only. The previous operation status.
- Type
MutableSequence[google.cloud.dataproc_v1.types.ClusterOperationStatus]
- operation_type¶
The operation type.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class NodeGroupOperationType(value)[source]¶
Bases:
proto.enums.Enum
Operation type for node group resources.
- Values:
- NODE_GROUP_OPERATION_TYPE_UNSPECIFIED (0):
Node group operation type is unknown.
- CREATE (1):
Create node group operation type.
- UPDATE (2):
Update node group operation type.
- DELETE (3):
Delete node group operation type.
- RESIZE (4):
Resize node group operation type.
- class google.cloud.dataproc_v1.types.NodeInitializationAction(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies an executable to run on a fully configured node and a timeout period for executable completion.
- execution_timeout¶
Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration).
Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.
- class google.cloud.dataproc_v1.types.OrderedJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A job executed by the workflow.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- step_id¶
Required. The step id. The id must be unique among all jobs within the template.
The step id is used as prefix for job id, as job
goog-dataproc-workflow-step-id
label, and in [prerequisiteStepIds][google.cloud.dataproc.v1.OrderedJob.prerequisite_step_ids] field from other steps.
The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- Type
- labels¶
Optional. The labels to associate with this job.
Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}
Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
No more than 32 labels can be associated with a given job.
- scheduling¶
Optional. Job scheduling configuration.
- prerequisite_step_ids¶
Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- Type
MutableSequence[str]
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.ParameterValidation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration for parameter validation.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- class google.cloud.dataproc_v1.types.PeripheralsConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Auxiliary services configuration for a workload.
- metastore_service¶
Optional. Resource name of an existing Dataproc Metastore service.
Example:
projects/[project_id]/locations/[region]/services/[service_id]
- Type
- spark_history_server_config¶
Optional. The Spark History Server configuration for the workload.
- class google.cloud.dataproc_v1.types.PigJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache Pig queries on YARN.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- query_file_uri¶
The HCFS URI of the script that contains the Pig queries.
This field is a member of oneof
queries
.
- Type
- continue_on_failure¶
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Type
- script_variables¶
Optional. Mapping of query variable names to values (equivalent to the Pig command:
name=[value]
).
- properties¶
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in
/etc/hadoop/conf/*-site.xml
, /etc/pig/conf/pig.properties, and classes in user code.
- jar_file_uris¶
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- Type
MutableSequence[str]
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class ScriptVariablesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.PrestoJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Presto queries. IMPORTANT: The Dataproc Presto Optional Component must be enabled when the cluster is created to submit a Presto job to the cluster.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- query_file_uri¶
The HCFS URI of the script that contains SQL queries.
This field is a member of oneof
queries
.
- Type
- continue_on_failure¶
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Type
- output_format¶
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.
- Type
- properties¶
Optional. A mapping of property names to values. Used to set Presto session properties. Equivalent to using the --session flag in the Presto CLI.
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.PyPiRepositoryConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration for PyPi repository
- class google.cloud.dataproc_v1.types.PySparkBatch(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A configuration for running an Apache PySpark batch workload.
- main_python_file_uri¶
Required. The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- Type
- args¶
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as
--conf
, since a collision can occur that causes an incorrect batch submission.
- Type
MutableSequence[str]
- python_file_uris¶
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types:
.py
,.egg
, and.zip
.- Type
MutableSequence[str]
- jar_file_uris¶
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- Type
MutableSequence[str]
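A sketch of wrapping a PySparkBatch in a batch workload and submitting it, assuming hypothetical file URIs and identifiers:

from google.cloud import dataproc_v1

region = "us-central1"  # hypothetical region
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://my-bucket/jobs/etl.py",  # must be a .py file
        args=["--date=2023-01-01"],  # user args, not batch properties
    )
)

operation = client.create_batch(
    request={
        "parent": f"projects/my-project/locations/{region}",
        "batch": batch,
        "batch_id": "etl-2023-01-01",  # hypothetical id
    }
)
result = operation.result()  # waits for the workload to finish
print(result.state)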
- class google.cloud.dataproc_v1.types.PySparkJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache PySpark applications on YARN.
- main_python_file_uri¶
Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- Type
- args¶
Optional. The arguments to pass to the driver. Do not include arguments, such as
--conf
, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- Type
MutableSequence[str]
- python_file_uris¶
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- Type
MutableSequence[str]
- jar_file_uris¶
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- Type
MutableSequence[str]
- file_uris¶
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Type
MutableSequence[str]
- archive_uris¶
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types:
.jar, .tar, .tar.gz, .tgz, and .zip.
- Type
MutableSequence[str]
- properties¶
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.QueryList(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A list of queries to run on a cluster.
- queries¶
Required. The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob:
"hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- Type
MutableSequence[str]
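The same queries expressed with the client library types rather than raw JSON, as a brief sketch:

from google.cloud import dataproc_v1

hive_job = dataproc_v1.HiveJob(
    query_list=dataproc_v1.QueryList(
        queries=["query1", "query2", "query3;query4"]
    )
)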
- class google.cloud.dataproc_v1.types.RegexValidation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Validation based on regular expressions.
- class google.cloud.dataproc_v1.types.RepositoryConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration for dependency repositories
- pypi_repository_config¶
Optional. Configuration for PyPi repository.
- class google.cloud.dataproc_v1.types.ReservationAffinity(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Reservation Affinity for consuming Zonal reservation.
- consume_reservation_type¶
Optional. Type of reservation to consume
- values¶
Optional. Corresponds to the label values of reservation resource.
- Type
MutableSequence[str]
- class Type(value)[source]¶
Bases:
proto.enums.Enum
Indicates whether to consume capacity from a reservation or not.
- Values:
- TYPE_UNSPECIFIED (0):
No description available.
- NO_RESERVATION (1):
Do not consume from any allocated capacity.
- ANY_RESERVATION (2):
Consume any reservation available.
- SPECIFIC_RESERVATION (3):
Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- class google.cloud.dataproc_v1.types.ResizeNodeGroupRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to resize a node group.
- name¶
Required. The name of the node group to resize. Format:
projects/{project}/regions/{region}/clusters/{cluster}/nodeGroups/{nodeGroup}
- Type
- size¶
Required. The number of running instances for the node group to maintain. The group adds or removes instances to maintain the number of instances specified by this parameter.
- Type
- request_id¶
Optional. A unique ID used to identify the request. If the server receives two ResizeNodeGroupRequests with the same ID, the second request is ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.
Recommendation: Set this value to a UUID.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
- graceful_decommission_timeout¶
Optional. Timeout for graceful YARN decommissioning. [Graceful decommissioning] (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/scaling-clusters#graceful_decommissioning) allows the removal of nodes from the Compute Engine node group without interrupting jobs in progress. This timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). The default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day (see JSON representation of Duration).
Only supported on Dataproc image versions 1.2 and higher.
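A minimal Python sketch of this request, with placeholder resource names; the request would then be passed to NodeGroupControllerClient.resize_node_group:
import uuid

from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

request = dataproc_v1.ResizeNodeGroupRequest(
    name="projects/my-project/regions/us-central1/clusters/my-cluster/nodeGroups/my-node-group",
    size=5,
    # A UUID makes retries of this request idempotent.
    request_id=str(uuid.uuid4()),
    # Wait up to one hour for in-progress jobs before forcing node removal.
    graceful_decommission_timeout=duration_pb2.Duration(seconds=3600),
)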
- class google.cloud.dataproc_v1.types.RuntimeConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Runtime configuration for a workload.
- container_image¶
Optional. Custom container image for the job runtime environment. If not specified, a default container image is used.
- Type
- properties¶
Optional. A mapping of property names to values, which are used to configure workload execution.
- repository_config¶
Optional. Dependency repository configuration.
- autotuning_config¶
Optional. Autotuning configuration of the workload.
- cohort¶
Optional. Cohort identifier. Identifies families of workloads that have the same shape, for example, daily ETL jobs.
- Type
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
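A minimal RuntimeConfig sketch using the fields above; the image path, property, and cohort ID are placeholders:
from google.cloud import dataproc_v1

runtime_config = dataproc_v1.RuntimeConfig(
    # Custom container image for the runtime environment (placeholder path).
    container_image="us-docker.pkg.dev/my-project/my-repo/my-image:1.0",
    properties={"spark.executor.memory": "4g"},
    # Groups workloads of the same shape, e.g. a recurring ETL run.
    cohort="daily-etl",
)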
- class google.cloud.dataproc_v1.types.RuntimeInfo(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Runtime information about workload execution.
- endpoints¶
Output only. Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- output_uri¶
Output only. A URI pointing to the location of the stdout and stderr of the workload.
- Type
- diagnostic_output_uri¶
Output only. A URI pointing to the location of the diagnostics tarball.
- Type
- approximate_usage¶
Output only. Approximate workload resource usage, calculated when the workload completes (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing)).
Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the [Dataproc Serverless release notes] (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- current_usage¶
Output only. Snapshot of current workload resource usage.
- class EndpointsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.SecurityConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Security related configuration, including encryption, Kerberos, etc.
- kerberos_config¶
Optional. Kerberos related configuration.
- identity_config¶
Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- class google.cloud.dataproc_v1.types.Session(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A representation of a session.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- uuid¶
Output only. A session UUID (Universally Unique Identifier). The service generates this value when it creates the session.
- Type
- create_time¶
Output only. The time when the session was created.
- spark_connect_session¶
Optional. Spark Connect session config.
This field is a member of oneof
session_config
.
- runtime_info¶
Output only. Runtime information about session execution.
- state¶
Output only. The state of the session.
- state_message¶
Output only. Session state details, such as the failure description if the state is
FAILED
.- Type
- state_time¶
Output only. The time when the session entered the current state.
- labels¶
Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a session.
- runtime_config¶
Optional. Runtime configuration for the session execution.
- environment_config¶
Optional. Environment configuration for the session execution.
- state_history¶
Output only. Historical state information for the session.
- Type
MutableSequence[google.cloud.dataproc_v1.types.Session.SessionStateHistory]
- session_template¶
Optional. The session template used by the session.
Only resource names, including project ID and location, are valid.
Example:
https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]
projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]
The template must be in the same project and Dataproc region as the session.
- Type
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class SessionStateHistory(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Historical state information.
- state¶
Output only. The state of the session at this point in the session history.
- state_start_time¶
Output only. The time when the session entered the historical state.
- class State(value)[source]¶
Bases:
proto.enums.Enum
The session state.
- Values:
- STATE_UNSPECIFIED (0):
The session state is unknown.
- CREATING (1):
The session is created prior to running.
- ACTIVE (2):
The session is running.
- TERMINATING (3):
The session is terminating.
- TERMINATED (4):
The session is terminated successfully.
- FAILED (5):
The session is no longer running due to an error.
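A minimal sketch of a Spark Connect session built from the fields above, with placeholder labels and properties; it would typically be passed to SessionControllerClient.create_session:
from google.cloud import dataproc_v1

session = dataproc_v1.Session(
    # Setting spark_connect_session selects the session_config oneof.
    spark_connect_session=dataproc_v1.SparkConnectConfig(),
    runtime_config=dataproc_v1.RuntimeConfig(
        properties={"spark.dynamicAllocation.enabled": "true"}
    ),
    labels={"team": "analytics"},
)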
- class google.cloud.dataproc_v1.types.SessionOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata describing the Session operation.
- create_time¶
The time when the operation was created.
- done_time¶
The time when the operation was finished.
- operation_type¶
The operation type.
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class SessionOperationType(value)[source]¶
Bases:
proto.enums.Enum
Operation type for Session resources.
- Values:
- SESSION_OPERATION_TYPE_UNSPECIFIED (0):
Session operation type is unknown.
- CREATE (1):
Create Session operation type.
- TERMINATE (2):
Terminate Session operation type.
- DELETE (3):
Delete Session operation type.
- class google.cloud.dataproc_v1.types.SessionTemplate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A representation of a session template.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- create_time¶
Output only. The time when the template was created.
- spark_connect_session¶
Optional. Spark Connect session config.
This field is a member of oneof
session_config
.
- labels¶
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035. No more than 32 labels can be associated with a session.
- runtime_config¶
Optional. Runtime configuration for session execution.
- environment_config¶
Optional. Environment configuration for session execution.
- update_time¶
Output only. The time the template was last updated.
- uuid¶
Output only. A session template UUID (Universally Unique Identifier). The service generates this value when it creates the session template.
- Type
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.ShieldedInstanceConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Shielded Instance Config for clusters using Compute Engine Shielded VMs.
- enable_secure_boot¶
Optional. Defines whether instances have Secure Boot enabled.
This field is a member of oneof
_enable_secure_boot
.- Type
- class google.cloud.dataproc_v1.types.SoftwareConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies the selection and config of software inside the cluster.
- image_version¶
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions, such as “1.2” (including a subminor version, such as “1.2.29”), or the “preview” version. If unspecified, it defaults to the latest Debian version.
- Type
- properties¶
Optional. The properties to set on daemon config files.
Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are the supported prefixes and their mappings:
- capacity-scheduler: capacity-scheduler.xml
- core: core-site.xml
- distcp: distcp-default.xml
- hdfs: hdfs-site.xml
- hive: hive-site.xml
- mapred: mapred-site.xml
- pig: pig.properties
- spark: spark-defaults.conf
- yarn: yarn-site.xml
For more information, see Cluster properties.
- optional_components¶
Optional. The set of components to activate on the cluster.
- Type
MutableSequence[google.cloud.dataproc_v1.types.Component]
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
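A minimal SoftwareConfig sketch combining the prefixes above; the image version and property values are placeholders:
from google.cloud import dataproc_v1

software_config = dataproc_v1.SoftwareConfig(
    image_version="2.2-debian12",
    properties={
        "core:hadoop.tmp.dir": "/tmp/hadoop",  # written to core-site.xml
        "spark:spark.executor.memory": "4g",  # written to spark-defaults.conf
    },
    optional_components=[dataproc_v1.Component.JUPYTER],
)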
- class google.cloud.dataproc_v1.types.SparkBatch(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A configuration for running an Apache Spark batch workload.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- main_jar_file_uri¶
Optional. The HCFS URI of the jar file that contains the main class.
This field is a member of oneof
driver
.- Type
- main_class¶
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in
jar_file_uris
.This field is a member of oneof
driver
.- Type
- args¶
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as
--conf
, since a collision can occur that causes an incorrect batch submission.- Type
MutableSequence[str]
- jar_file_uris¶
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- Type
MutableSequence[str]
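A minimal SparkBatch sketch using the main_class member of the driver oneof; the class name and bucket are placeholders:
from google.cloud import dataproc_v1

spark_batch = dataproc_v1.SparkBatch(
    # Only one driver oneof member may be set; main_class requires the
    # containing jar to appear in jar_file_uris.
    main_class="com.example.SparkApp",
    jar_file_uris=["gs://my-bucket/spark-app.jar"],
    args=["--input", "gs://my-bucket/input/"],
)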
- class google.cloud.dataproc_v1.types.SparkConnectConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Spark Connect configuration for an interactive session.
- class google.cloud.dataproc_v1.types.SparkHistoryServerConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Spark History Server configuration for the workload.
- class google.cloud.dataproc_v1.types.SparkJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache Spark applications on YARN.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- main_jar_file_uri¶
The HCFS URI of the jar file that contains the main class.
This field is a member of oneof
driver
.- Type
- main_class¶
The name of the driver’s main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
This field is a member of oneof
driver
.- Type
- args¶
Optional. The arguments to pass to the driver. Do not include arguments, such as
--conf
, that can be set as job properties, since a collision may occur that causes an incorrect job submission.- Type
MutableSequence[str]
- jar_file_uris¶
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- Type
MutableSequence[str]
- file_uris¶
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Type
MutableSequence[str]
- archive_uris¶
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types:
.jar, .tar, .tar.gz, .tgz, and .zip.
- Type
MutableSequence[str]
- properties¶
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
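A minimal sketch of a Job carrying a SparkJob, with placeholder cluster and bucket names:
from google.cloud import dataproc_v1

job = dataproc_v1.Job(
    placement=dataproc_v1.JobPlacement(cluster_name="my-cluster"),
    spark_job=dataproc_v1.SparkJob(
        main_jar_file_uri="gs://my-bucket/spark-app.jar",
        properties={"spark.executor.cores": "2"},
    ),
)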
- class google.cloud.dataproc_v1.types.SparkRBatch(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A configuration for running an Apache SparkR batch workload.
- main_r_file_uri¶
Required. The HCFS URI of the main R file to use as the driver. Must be a
.R
or.r
file.- Type
- args¶
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as
--conf
, since a collision can occur that causes an incorrect batch submission.- Type
MutableSequence[str]
- class google.cloud.dataproc_v1.types.SparkRJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache SparkR applications on YARN.
- main_r_file_uri¶
Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
- Type
- args¶
Optional. The arguments to pass to the driver. Do not include arguments, such as
--conf
, that can be set as job properties, since a collision may occur that causes an incorrect job submission.- Type
MutableSequence[str]
- file_uris¶
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Type
MutableSequence[str]
- archive_uris¶
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types:
.jar, .tar, .tar.gz, .tgz, and .zip.
- Type
MutableSequence[str]
- properties¶
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.SparkSqlBatch(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A configuration for running Apache Spark SQL queries as a batch workload.
- query_file_uri¶
Required. The HCFS URI of the script that contains Spark SQL queries to execute.
- Type
- query_variables¶
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command:
SET name="value";
).
- jar_file_uris¶
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- Type
MutableSequence[str]
- class QueryVariablesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
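A minimal SparkSqlBatch sketch with a placeholder script URI and query variable:
from google.cloud import dataproc_v1

spark_sql_batch = dataproc_v1.SparkSqlBatch(
    query_file_uri="gs://my-bucket/queries.sql",
    # Equivalent to running: SET run_date="2024-01-01";
    query_variables={"run_date": "2024-01-01"},
)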
- class google.cloud.dataproc_v1.types.SparkSqlJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Apache Spark SQL queries.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- query_file_uri¶
The HCFS URI of the script that contains SQL queries.
This field is a member of oneof
queries
.- Type
- script_variables¶
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET
name="value";
).
- properties¶
Optional. A mapping of property names to values, used to configure Spark SQL’s SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
- jar_file_uris¶
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- Type
MutableSequence[str]
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class ScriptVariablesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.StartClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to start a cluster.
- cluster_uuid¶
Optional. Specifying the
cluster_uuid
means the RPC will fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist.- Type
- request_id¶
Optional. A unique ID used to identify the request. If the server receives two StartClusterRequests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.
Recommendation: Set this value to a UUID.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
- class google.cloud.dataproc_v1.types.StartupConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Configuration to handle the startup of instances during the cluster create and update process.
- required_registration_fraction¶
Optional. The config setting that makes cluster creation or update succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
This field is a member of oneof
_required_registration_fraction
.- Type
- class google.cloud.dataproc_v1.types.StopClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to stop a cluster.
- cluster_uuid¶
Optional. Specifying the
cluster_uuid
means the RPC will fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist.- Type
- request_id¶
Optional. A unique ID used to identify the request. If the server receives two StopClusterRequests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.
Recommendation: Set this value to a UUID.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
- class google.cloud.dataproc_v1.types.SubmitJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to submit a job.
- job¶
Required. The job resource.
- request_id¶
Optional. A unique id used to identify the request. If the server receives two SubmitJobRequests with the same id, then the second request will be ignored and the first [Job][google.cloud.dataproc.v1.Job] created and stored in the backend is returned.
It is recommended to always set this value to a UUID.
The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
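A minimal submission sketch with placeholder project, region, and endpoint; project_id and region are SubmitJobRequest fields not listed above:
import uuid

from google.cloud import dataproc_v1

client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)
response = client.submit_job(
    request=dataproc_v1.SubmitJobRequest(
        project_id="my-project",
        region="us-central1",
        job=job,  # e.g. a Job carrying a SparkJob, as sketched earlier
        # Reusing the same UUID on retry makes the submission idempotent.
        request_id=str(uuid.uuid4()),
    )
)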
- class google.cloud.dataproc_v1.types.TemplateParameter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A configurable parameter that replaces one or more fields in the template. Parameterizable fields:
Labels
File uris
Job properties
Job arguments
Script variables
Main class (in HadoopJob and SparkJob)
Zone (in ClusterSelector)
- name¶
Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- Type
- fields¶
Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter’s list of field paths.
A field path is similar in syntax to a [google.protobuf.FieldMask][google.protobuf.FieldMask]. For example, a field path that references the zone field of a workflow template’s cluster selector would be specified as
placement.clusterSelector.zone
.Also, field paths can reference fields using the following syntax:
Values in maps can be referenced by key:
labels[‘key’]
placement.clusterSelector.clusterLabels[‘key’]
placement.managedCluster.labels[‘key’]
jobs[‘step-id’].labels[‘key’]
Jobs in the jobs list can be referenced by step-id:
jobs[‘step-id’].hadoopJob.mainJarFileUri
jobs[‘step-id’].hiveJob.queryFileUri
jobs[‘step-id’].pySparkJob.mainPythonFileUri
jobs[‘step-id’].hadoopJob.jarFileUris[0]
jobs[‘step-id’].hadoopJob.archiveUris[0]
jobs[‘step-id’].hadoopJob.fileUris[0]
jobs[‘step-id’].pySparkJob.pythonFileUris[0]
Items in repeated fields can be referenced by a zero-based index:
jobs[‘step-id’].sparkJob.args[0]
Other examples:
jobs[‘step-id’].hadoopJob.properties[‘key’]
jobs[‘step-id’].hadoopJob.args[0]
jobs[‘step-id’].hiveJob.scriptVariables[‘key’]
jobs[‘step-id’].hadoopJob.mainJarFileUri
placement.clusterSelector.zone
It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:
placement.clusterSelector.clusterLabels
jobs[‘step-id’].sparkJob.args
- Type
MutableSequence[str]
- description¶
Optional. Brief description of the parameter. Must not exceed 1024 characters.
- Type
- validation¶
Optional. Validation rules to be applied to this parameter’s value.
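A minimal sketch that parameterizes the cluster selector zone, following the field-path rules above; the regex constrains the substituted value (RegexValidation carries RE2 patterns in its regexes field):
from google.cloud import dataproc_v1

param = dataproc_v1.TemplateParameter(
    name="ZONE",
    fields=["placement.clusterSelector.zone"],
    description="Compute Engine zone for the cluster selector.",
    validation=dataproc_v1.ParameterValidation(
        regex=dataproc_v1.RegexValidation(regexes=["us-central1-[a-f]"])
    ),
)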
- class google.cloud.dataproc_v1.types.TerminateSessionRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to terminate an interactive session.
- request_id¶
Optional. A unique ID used to identify the request. If the service receives two TerminateSessionRequests with the same ID, the second request is ignored.
Recommendation: Set this value to a UUID.
The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
- class google.cloud.dataproc_v1.types.TrinoJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc job for running Trino queries. IMPORTANT: The Dataproc Trino Optional Component must be enabled when the cluster is created to submit a Trino job to the cluster.
This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- query_file_uri¶
The HCFS URI of the script that contains SQL queries.
This field is a member of oneof
queries
.- Type
- continue_on_failure¶
Optional. Whether to continue executing queries if a query fails. The default value is
false
. Setting totrue
can be useful when executing independent parallel queries.- Type
- output_format¶
Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats.
- Type
- properties¶
Optional. A mapping of property names to values. Used to set Trino session properties, equivalent to using the --session flag in the Trino CLI.
- logging_config¶
Optional. The runtime log config for job execution.
- class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class google.cloud.dataproc_v1.types.UpdateAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to update an autoscaling policy.
- policy¶
Required. The updated autoscaling policy.
- class google.cloud.dataproc_v1.types.UpdateClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to update a cluster.
- cluster¶
Required. The changes to the cluster.
- graceful_decommission_timeout¶
Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day. (see JSON representation of Duration).
Only supported on Dataproc image versions 1.2 and higher.
- update_mask¶
Required. Specifies the path, relative to
Cluster
, of the field to update. For example, to change the number of workers in a cluster to 5, theupdate_mask
parameter would be specified asconfig.worker_config.num_instances
, and thePATCH
request body would specify the new value, as follows:{ "config":{ "workerConfig":{ "numInstances":"5" } } }
Similarly, to change the number of preemptible workers in a cluster to 5, the
update_mask
parameter would beconfig.secondary_worker_config.num_instances
, and thePATCH
request body would be set as follows:{ "config":{ "secondaryWorkerConfig":{ "numInstances":"5" } } }
Note: Currently, only the following fields can be updated:
- labels: Update labels
- config.worker_config.num_instances: Resize primary worker group
- config.secondary_worker_config.num_instances: Resize secondary worker group
- config.autoscaling_config.policy_uri: Use, stop using, or change autoscaling policies
- request_id¶
Optional. A unique ID used to identify the request. If the server receives two UpdateClusterRequests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.
It is recommended to always set this value to a UUID.
The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Type
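A minimal sketch that resizes the primary worker group to 5, matching the first update_mask example above; project_id, region, and cluster_name are request fields not listed above, and all names are placeholders:
from google.cloud import dataproc_v1
from google.protobuf import field_mask_pb2

request = dataproc_v1.UpdateClusterRequest(
    project_id="my-project",
    region="us-central1",
    cluster_name="my-cluster",
    cluster=dataproc_v1.Cluster(
        config=dataproc_v1.ClusterConfig(
            worker_config=dataproc_v1.InstanceGroupConfig(num_instances=5)
        )
    ),
    update_mask=field_mask_pb2.FieldMask(
        paths=["config.worker_config.num_instances"]
    ),
)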
- class google.cloud.dataproc_v1.types.UpdateJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to update a job.
- job¶
Required. The changes to the job.
- update_mask¶
Required. Specifies the path, relative to Job, of the field to update. For example, to update the labels of a Job the update_mask parameter would be specified as labels, and the
PATCH
request body would specify the new value. Note: Currently, labels is the only field that can be updated.
- class google.cloud.dataproc_v1.types.UpdateSessionTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to update a session template.
- session_template¶
Required. The updated session template.
- class google.cloud.dataproc_v1.types.UpdateWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A request to update a workflow template.
- template¶
Required. The updated workflow template.
The
template.version
field must match the current version.
- class google.cloud.dataproc_v1.types.UsageMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Usage metrics represent approximate total resources consumed by a workload.
- milli_dcu_seconds¶
Optional. DCU (Dataproc Compute Units) usage in (
milliDCU
xseconds
) (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing)).- Type
- shuffle_storage_gb_seconds¶
Optional. Shuffle storage usage in (
GB
xseconds
) (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing)).- Type
- milli_accelerator_seconds¶
Optional. Accelerator usage in (
milliAccelerator
xseconds
) (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing)).- Type
- class google.cloud.dataproc_v1.types.UsageSnapshot(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The usage snapshot represents the resources consumed by a workload at a specified time.
- milli_dcu¶
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing)).
- Type
- shuffle_storage_gb¶
Optional. Shuffle Storage in gigabytes (GB). (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing))
- Type
- milli_dcu_premium¶
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing)).
- Type
- shuffle_storage_gb_premium¶
Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing))
- Type
- milli_accelerator¶
Optional. Milli (one-thousandth) accelerator. (see [Dataproc Serverless pricing] (https://cloud.google.com/dataproc-serverless/pricing))
- Type
- snapshot_time¶
Optional. The timestamp of the usage snapshot.
- class google.cloud.dataproc_v1.types.ValueValidation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Validation based on a list of allowed values.
- class google.cloud.dataproc_v1.types.VirtualClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The Dataproc cluster config for a cluster that does not directly control the underlying compute resources, such as a Dataproc-on-GKE cluster.
- staging_bucket¶
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://… URI to a Cloud Storage bucket.
- Type
- kubernetes_cluster_config¶
Required. The configuration for running the Dataproc cluster on Kubernetes.
This field is a member of oneof
infrastructure_config
.
- auxiliary_services_config¶
Optional. Configuration of auxiliary services used by this cluster.
- class google.cloud.dataproc_v1.types.WorkflowGraph(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The workflow graph.
- nodes¶
Output only. The workflow nodes.
- Type
MutableSequence[google.cloud.dataproc_v1.types.WorkflowNode]
- class google.cloud.dataproc_v1.types.WorkflowMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Metadata describing a Dataproc workflow.
- template¶
Output only. The resource name of the workflow template as described in https://cloud.google.com/apis/design/resource_names.
For
projects.regions.workflowTemplates
, the resource name of the template has the following format:projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
For
projects.locations.workflowTemplates
, the resource name of the template has the following format:projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- Type
- create_cluster¶
Output only. The create cluster operation metadata.
- graph¶
Output only. The workflow graph.
- delete_cluster¶
Output only. The delete cluster operation metadata.
- state¶
Output only. The workflow state.
- parameters¶
Map from parameter names to values that were used for those parameters.
- start_time¶
Output only. Workflow start time.
- end_time¶
Output only. Workflow end time.
- dag_timeout¶
Output only. The timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration).
- dag_start_time¶
Output only. DAG start time, only set for workflows with [dag_timeout][google.cloud.dataproc.v1.WorkflowMetadata.dag_timeout] when DAG begins.
- dag_end_time¶
Output only. DAG end time, only set for workflows with [dag_timeout][google.cloud.dataproc.v1.WorkflowMetadata.dag_timeout] when DAG ends.
- class ParametersEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
- class State(value)[source]¶
Bases:
proto.enums.Enum
The operation state.
- Values:
- UNKNOWN (0):
Unused.
- PENDING (1):
The operation has been created.
- RUNNING (2):
The operation is running.
- DONE (3):
The operation is done; either cancelled or completed.
- class google.cloud.dataproc_v1.types.WorkflowNode(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
The workflow node.
- state¶
Output only. The node state.
- class NodeState(value)[source]¶
Bases:
proto.enums.Enum
The workflow node state.
- Values:
- NODE_STATE_UNSPECIFIED (0):
State is unspecified.
- BLOCKED (1):
The node is awaiting prerequisite node to finish.
- RUNNABLE (2):
The node is runnable but not running.
- RUNNING (3):
The node is running.
- COMPLETED (4):
The node completed successfully.
- FAILED (5):
The node failed. A node can be marked FAILED because its ancestor or peer failed.
- class google.cloud.dataproc_v1.types.WorkflowTemplate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A Dataproc workflow template resource.
- name¶
Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
For
projects.regions.workflowTemplates
, the resource name of the template has the following format:projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
For
projects.locations.workflowTemplates
, the resource name of the template has the following format:projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- Type
- version¶
Optional. Used to perform a consistent read-modify-write.
This field should be left blank for a
CreateWorkflowTemplate
request. It is required for anUpdateWorkflowTemplate
request, and must match the current server version. A typical update template flow would fetch the current template with aGetWorkflowTemplate
request, which will return the current template with theversion
field filled in with the current server version. The user updates other fields in the template, then returns it as part of theUpdateWorkflowTemplate
request.- Type
- create_time¶
Output only. The time template was created.
- update_time¶
Output only. The time template was last updated.
- labels¶
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.
Label keys must contain 1 to 63 characters, and must conform to RFC 1035.
Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035.
No more than 32 labels can be associated with a template.
- placement¶
Required. WorkflowTemplate scheduling information.
- jobs¶
Required. The Directed Acyclic Graph of Jobs to submit.
- Type
MutableSequence[google.cloud.dataproc_v1.types.OrderedJob]
- parameters¶
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- Type
MutableSequence[google.cloud.dataproc_v1.types.TemplateParameter]
- dag_timeout¶
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration). The timeout duration must be from 10 minutes (“600s”) to 24 hours (“86400s”). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- encryption_config¶
Optional. Encryption settings for encrypting workflow template job arguments.
- class EncryptionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Encryption settings for encrypting workflow template job arguments.
- kms_key¶
Optional. The Cloud KMS key name to use for encrypting workflow template job arguments.
When this key is provided, the following workflow template [job arguments] (https://cloud.google.com/dataproc/docs/concepts/workflows/use-workflows#adding_jobs_to_a_template), if present, are CMEK encrypted:
SparkSqlJob scriptVariables and queryList.queries
HiveJob scriptVariables and queryList.queries
PigJob scriptVariables and queryList.queries
PrestoJob scriptVariables and queryList.queries
- Type
- class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)¶
Bases:
proto.message.Message
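A minimal sketch of the read-modify-write flow described under version: fetch the current template, mutate it, and send it back with the server-filled version intact. The template name and endpoint are placeholders:
from google.cloud import dataproc_v1

client = dataproc_v1.WorkflowTemplateServiceClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)
template = client.get_workflow_template(
    name="projects/my-project/regions/us-central1/workflowTemplates/my-template"
)
template.labels["env"] = "prod"
# The version returned by get_workflow_template must still match the
# server's current version when the update lands.
client.update_workflow_template(template=template)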
- class google.cloud.dataproc_v1.types.WorkflowTemplatePlacement(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
Specifies workflow execution target.
Either
managed_cluster
orcluster_selector
is required.This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
- managed_cluster¶
A cluster that is managed by the workflow.
This field is a member of oneof
placement
.
- class google.cloud.dataproc_v1.types.YarnApplication(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]¶
Bases:
proto.message.Message
A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- state¶
Required. The application state.
- tracking_url¶
Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
- Type
- class State(value)[source]¶
Bases:
proto.enums.Enum
The application state, corresponding to YarnProtos.YarnApplicationStateProto.
- Values:
- STATE_UNSPECIFIED (0):
Status is unspecified.
- NEW (1):
Status is NEW.
- NEW_SAVING (2):
Status is NEW_SAVING.
- SUBMITTED (3):
Status is SUBMITTED.
- ACCEPTED (4):
Status is ACCEPTED.
- RUNNING (5):
Status is RUNNING.
- FINISHED (6):
Status is FINISHED.
- FAILED (7):
Status is FAILED.
- KILLED (8):
Status is KILLED.