As of January 1, 2020 this library no longer supports Python 2 on the latest released version. Library versions released prior to that date will continue to be available. For more information please visit Python 2 support on Google Cloud.

Types for Google Cloud Dataproc v1 API

class google.cloud.dataproc_v1.types.AutoscalingPolicy(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Describes an autoscaling policy for Dataproc cluster autoscaler.

id

Required. The policy id.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

Type

str

name

Output only. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.autoscalingPolicies, the resource name of the policy has the following format: projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}

  • For projects.locations.autoscalingPolicies, the resource name of the policy has the following format: projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}

Type

str

basic_algorithm
Type

BasicAutoscalingAlgorithm

worker_config

Required. Describes how the autoscaler will operate for primary workers.

Type

InstanceGroupAutoscalingPolicyConfig

secondary_worker_config

Optional. Describes how the autoscaler will operate for secondary workers.

Type

InstanceGroupAutoscalingPolicyConfig

class google.cloud.dataproc_v1.types.BasicAutoscalingAlgorithm(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Basic algorithm for autoscaling.

yarn_config

Required. YARN autoscaling configuration.

Type

BasicYarnAutoscalingConfig

cooldown_period

Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed.

Bounds: [2m, 1d]. Default: 2m.

Type

Duration

class google.cloud.dataproc_v1.types.BasicYarnAutoscalingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Basic autoscaling configurations for YARN.

graceful_decommission_timeout

Required. Timeout for YARN graceful decommissioning of Node Managers. Specifies the duration to wait for jobs to complete before forcefully removing workers (and potentially interrupting jobs). Only applicable to downscaling operations.

Bounds: [0s, 1d].

Type

Duration

scale_up_factor

Required. Fraction of average YARN pending memory in the last cooldown period for which to add workers. A scale-up factor of 1.0 will result in scaling up so that there is no pending memory remaining after the update (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). See How autoscaling works for more information.

Bounds: [0.0, 1.0].

Type

float

scale_down_factor

Required. Fraction of average YARN pending memory in the last cooldown period for which to remove workers. A scale-down factor of 1 will result in scaling down so that there is no available memory remaining after the update (more aggressive scaling). A scale-down factor of 0 disables removing workers, which can be beneficial for autoscaling a single job. See How autoscaling works for more information.

Bounds: [0.0, 1.0].

Type

float

scale_up_min_worker_fraction

Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change.

Bounds: [0.0, 1.0]. Default: 0.0.

Type

float

scale_down_min_worker_fraction

Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2 worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change.

Bounds: [0.0, 1.0]. Default: 0.0.

Type

float

class google.cloud.dataproc_v1.types.InstanceGroupAutoscalingPolicyConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Configuration for the size bounds of an instance group, including its proportional size to other groups.

min_instances

Optional. Minimum number of instances for this group.

Primary workers - Bounds: [2, max_instances]. Default: 2. Secondary workers - Bounds: [0, max_instances]. Default: 0.

Type

int

max_instances

Required. Maximum number of instances for this group. Required for primary workers. Note that by default, clusters will not use secondary workers. Required for secondary workers if the minimum secondary instances is set.

Primary workers - Bounds: [min_instances, ). Secondary workers - Bounds: [min_instances, ). Default: 0.

Type

int

weight

Optional. Weight for the instance group, which is used to determine the fraction of total workers in the cluster from this instance group. For example, if primary workers have weight 2, and secondary workers have weight 1, the cluster will have approximately 2 primary workers for each secondary worker.

The cluster may not reach the specified balance if constrained by min/max bounds or other autoscaling settings. For example, if max_instances for secondary workers is 0, then only primary workers will be added. The cluster can also be out of balance when created.

If weight is not set on any instance group, the cluster will default to equal weight for all groups: the cluster will attempt to maintain an equal number of workers in each group within the configured size bounds for each group. If weight is set for one group only, the cluster will default to zero weight on the unset group. For example if weight is set only on primary workers, the cluster will use primary workers only and no secondary workers.

Type

int
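
Taken together, the types above form a complete autoscaling policy. Below is a minimal sketch of constructing one with this library; the policy id, instance bounds, and timing values are hypothetical placeholders, and unset fields keep their defaults.

from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

# Hypothetical policy: scale primary workers between 2 and 10 instances.
policy = dataproc_v1.types.AutoscalingPolicy(
    id="example-policy",
    worker_config=dataproc_v1.types.InstanceGroupAutoscalingPolicyConfig(
        min_instances=2,
        max_instances=10,
        weight=1,
    ),
    basic_algorithm=dataproc_v1.types.BasicAutoscalingAlgorithm(
        cooldown_period=duration_pb2.Duration(seconds=240),  # 4 minutes
        yarn_config=dataproc_v1.types.BasicYarnAutoscalingConfig(
            graceful_decommission_timeout=duration_pb2.Duration(seconds=3600),
            scale_up_factor=0.5,
            scale_down_factor=0.5,
        ),
    ),
)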

class google.cloud.dataproc_v1.types.CreateAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to create an autoscaling policy.

parent

Required. The “resource name” of the region or location, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.autoscalingPolicies.create, the resource name of the region has the following format: projects/{project_id}/regions/{region}

  • For projects.locations.autoscalingPolicies.create, the resource name of the location has the following format: projects/{project_id}/locations/{location}

Type

str

policy

Required. The autoscaling policy to create.

Type

AutoscalingPolicy
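
A brief sketch of sending this request with the AutoscalingPolicyServiceClient, which accepts the flattened parent and policy arguments shown in this message; the project, region, and endpoint values are placeholders.

from google.cloud import dataproc_v1

client = dataproc_v1.AutoscalingPolicyServiceClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

created = client.create_autoscaling_policy(
    parent="projects/example-project/regions/us-central1",
    policy=policy,  # an AutoscalingPolicy built as sketched above
)
print(created.name)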

class google.cloud.dataproc_v1.types.GetAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to fetch an autoscaling policy.

name

Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.autoscalingPolicies.get, the resource name of the policy has the following format: projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}

  • For projects.locations.autoscalingPolicies.get, the resource name of the policy has the following format: projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}

Type

str

class google.cloud.dataproc_v1.types.UpdateAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to update an autoscaling policy.

policy

Required. The updated autoscaling policy.

Type

AutoscalingPolicy

class google.cloud.dataproc_v1.types.DeleteAutoscalingPolicyRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to delete an autoscaling policy. Autoscaling policies in use by one or more clusters will not be deleted.

name

Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.autoscalingPolicies.delete, the resource name of the policy has the following format: projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}

  • For projects.locations.autoscalingPolicies.delete, the resource name of the policy has the following format: projects/{project_id}/locations/{location}/autoscalingPolicies/{policy_id}

Type

str

class google.cloud.dataproc_v1.types.ListAutoscalingPoliciesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to list autoscaling policies in a project.

parent

Required. The “resource name” of the region or location, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.autoscalingPolicies.list, the resource name of the region has the following format: projects/{project_id}/regions/{region}

  • For projects.locations.autoscalingPolicies.list, the resource name of the location has the following format: projects/{project_id}/locations/{location}

Type

str

page_size

Optional. The maximum number of results to return in each response. Must be less than or equal to 1000. Defaults to 100.

Type

int

page_token

Optional. The page token, returned by a previous call, to request the next page of results.

Type

str

class google.cloud.dataproc_v1.types.ListAutoscalingPoliciesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response to a request to list autoscaling policies in a project.

policies

Output only. Autoscaling policies list.

Type

Sequence[AutoscalingPolicy]

next_page_token

Output only. This token is included in the response if there are more results to fetch.

Type

str
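
The list call returns these messages through a pager that follows next_page_token automatically. A short sketch, with placeholder project and region values:

from google.cloud import dataproc_v1

client = dataproc_v1.AutoscalingPolicyServiceClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

# Iterating the pager issues follow-up requests using next_page_token as needed.
for policy in client.list_autoscaling_policies(
    parent="projects/example-project/regions/us-central1"
):
    print(policy.id, policy.name)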

class google.cloud.dataproc_v1.types.Cluster(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Describes the identifying information, config, and status of a cluster of Compute Engine instances.

project_id

Required. The Google Cloud Platform project ID that the cluster belongs to.

Type

str

cluster_name

Required. The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

Type

str

config

Required. The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.

Type

ClusterConfig

labels

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a cluster.

Type

Sequence[LabelsEntry]

status

Output only. Cluster status.

Type

ClusterStatus

status_history

Output only. The previous cluster status.

Type

Sequence[ClusterStatus]

cluster_uuid

Output only. A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.

Type

str

metrics

Output only. Contains cluster daemon metrics such as HDFS and YARN stats.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Type

ClusterMetrics

class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.ClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The cluster config.

config_bucket

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket).

Type

str

temp_bucket

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket.

Type

str

gce_cluster_config

Optional. The shared Compute Engine config settings for all instances in a cluster.

Type

GceClusterConfig

master_config

Optional. The Compute Engine config settings for the master instance in a cluster.

Type

InstanceGroupConfig

worker_config

Optional. The Compute Engine config settings for worker instances in a cluster.

Type

InstanceGroupConfig

secondary_worker_config

Optional. The Compute Engine config settings for additional worker instances in a cluster.

Type

InstanceGroupConfig

software_config

Optional. The config settings for software inside the cluster.

Type

SoftwareConfig

initialization_actions

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node’s role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi
Type

Sequence[NodeInitializationAction]

encryption_config

Optional. Encryption settings for the cluster.

Type

EncryptionConfig

autoscaling_config

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

Type

AutoscalingConfig

security_config

Optional. Security settings for the cluster.

Type

SecurityConfig

lifecycle_config

Optional. Lifecycle setting for the cluster.

Type

LifecycleConfig

endpoint_config

Optional. Port/endpoint configuration for this cluster

Type

EndpointConfig
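
A minimal sketch of assembling a Cluster with a ClusterConfig from these types; the machine types, image version, and names are hypothetical, and any field left unset keeps the Dataproc default.

from google.cloud import dataproc_v1

cluster = dataproc_v1.types.Cluster(
    project_id="example-project",
    cluster_name="example-cluster",
    config=dataproc_v1.types.ClusterConfig(
        master_config=dataproc_v1.types.InstanceGroupConfig(
            num_instances=1,
            machine_type_uri="n1-standard-2",
        ),
        worker_config=dataproc_v1.types.InstanceGroupConfig(
            num_instances=2,
            machine_type_uri="n1-standard-2",
        ),
        software_config=dataproc_v1.types.SoftwareConfig(image_version="2.0"),
    ),
)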

class google.cloud.dataproc_v1.types.EndpointConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Endpoint config for this cluster

http_ports

Output only. The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

Type

Sequence[HttpPortsEntry]

enable_http_port_access

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

Type

bool

class HttpPortsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.AutoscalingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Autoscaling Policy config associated with the cluster.

policy_uri

Optional. The autoscaling policy used by the cluster.

Only resource names that include the project ID and location (region) are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]

  • projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]

Note that the policy must be in the same project and Dataproc region.

Type

str

class google.cloud.dataproc_v1.types.EncryptionConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Encryption settings for the cluster.

gce_pd_kms_key_name

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

Type

str

class google.cloud.dataproc_v1.types.GceClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.

zone_uri

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the “global” region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]

  • projects/[project_id]/zones/[zone]

  • us-central1-f

Type

str

network_uri

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the “default” network of the project is used, if it exists. Cannot be a “Custom Subnet Network” (see Using Subnetworks for more information).

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default

  • projects/[project_id]/regions/global/default

  • default

Type

str

subnetwork_uri

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0

  • projects/[project_id]/regions/us-east1/subnetworks/sub0

  • sub0

Type

str

internal_ip_only

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

Type

bool

service_account

Optional. The Dataproc service account (also see VM Data Plane identity) used by Dataproc cluster VM instances to access Google Cloud Platform services.

If not specified, the Compute Engine default service account is used.

Type

str

service_account_scopes

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included:

  • https://www.googleapis.com/auth/cloud.useraccounts.readonly

  • https://www.googleapis.com/auth/devstorage.read_write

  • https://www.googleapis.com/auth/logging.write

If no scopes are specified, the following defaults are also provided:

  • https://www.googleapis.com/auth/bigquery

  • https://www.googleapis.com/auth/bigtable.admin.table

  • https://www.googleapis.com/auth/bigtable.data

  • https://www.googleapis.com/auth/devstorage.full_control

Type

Sequence[str]

tags

The Compute Engine tags to add to all instances (see Tagging instances).

Type

Sequence[str]

metadata

The Compute Engine metadata entries to add to all instances (see Project and instance metadata).

Type

Sequence[MetadataEntry]

reservation_affinity

Optional. Reservation Affinity for consuming Zonal reservation.

Type

ReservationAffinity

class MetadataEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.InstanceGroupConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The config settings for Compute Engine resources in an instance group, such as a master or worker group.

num_instances

Optional. The number of VM instances in the instance group. For master instance groups, must be set to 1.

Type

int

instance_names

Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.

Type

Sequence[str]

image_uri

Optional. The Compute Engine image resource used for cluster instances.

The URI can represent an image or image family.

Image examples:

  • https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]

  • projects/[project_id]/global/images/[image-id]

  • image-id

Image family examples. Dataproc will use the most recent image from the family:

  • https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]

  • projects/[project_id]/global/images/family/[custom-image-family-name]

If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.

Type

str

machine_type_uri

Optional. The Compute Engine machine type used for cluster instances.

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2

  • projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2

  • n1-standard-2

Auto Zone Exception: If you are using the Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.

Type

str

disk_config

Optional. Disk option config settings.

Type

DiskConfig

is_preemptible

Output only. Specifies that this instance group contains preemptible instances.

Type

bool

preemptibility

Optional. Specifies the preemptibility of the instance group.

The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.

The default value for secondary instances is PREEMPTIBLE.

Type

Preemptibility

managed_group_config

Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

Type

ManagedGroupConfig

accelerators

Optional. The Compute Engine accelerator configuration for these instances.

Type

Sequence[AcceleratorConfig]

min_cpu_platform

Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform.

Type

str

class Preemptibility[source]

Bases: proto.enums.Enum

Controls the use of preemptible instances (https://cloud.google.com/compute/docs/instances/preemptible) within the group.

class google.cloud.dataproc_v1.types.ManagedGroupConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies the resources used to actively manage an instance group.

instance_template_name

Output only. The name of the Instance Template used for the Managed Instance Group.

Type

str

instance_group_manager_name

Output only. The name of the Instance Group Manager for this group.

Type

str

class google.cloud.dataproc_v1.types.AcceleratorConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine.

accelerator_type_uri

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.

Examples:

  • https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80

  • projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80

  • nvidia-tesla-k80

Auto Zone Exception: If you are using the Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

Type

str

accelerator_count

The number of the accelerator cards of this type exposed to this instance.

Type

int

class google.cloud.dataproc_v1.types.DiskConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies the config of disk options for a group of VM instances.

boot_disk_type

Optional. Type of the boot disk (default is “pd-standard”). Valid values: “pd-ssd” (Persistent Disk Solid State Drive) or “pd-standard” (Persistent Disk Hard Disk Drive).

Type

str

boot_disk_size_gb

Optional. Size in GB of the boot disk (default is 500GB).

Type

int

num_local_ssds

Optional. Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

Type

int

class google.cloud.dataproc_v1.types.NodeInitializationAction(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies an executable to run on a fully configured node and a timeout period for executable completion.

executable_file

Required. Cloud Storage URI of executable file.

Type

str

execution_timeout

Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration).

Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.

Type

Duration
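
A short sketch of one initialization action with an explicit timeout; the Cloud Storage URI is a placeholder.

from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

init_action = dataproc_v1.types.NodeInitializationAction(
    executable_file="gs://example-bucket/scripts/install-deps.sh",
    execution_timeout=duration_pb2.Duration(seconds=600),  # 10 minutes
)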

class google.cloud.dataproc_v1.types.ClusterStatus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The status of a cluster and its instances.

state

Output only. The cluster’s state.

Type

State

detail

Optional. Output only. Details of cluster’s state.

Type

str

state_start_time

Output only. Time when this state was entered (see JSON representation of Timestamp).

Type

Timestamp

substate

Output only. Additional state information that includes status reported by the agent.

Type

Substate

class State[source]

Bases: proto.enums.Enum

The cluster state.

class Substate[source]

Bases: proto.enums.Enum

The cluster substate.

class google.cloud.dataproc_v1.types.SecurityConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Security related configuration, including Kerberos.

kerberos_config

Kerberos related configuration.

Type

KerberosConfig

class google.cloud.dataproc_v1.types.KerberosConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies Kerberos related configuration.

enable_kerberos

Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.

Type

bool

root_principal_password_uri

Required. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

Type

str

kms_key_uri

Required. The uri of the KMS key used to encrypt various sensitive files.

Type

str

keystore_uri

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

Type

str

truststore_uri

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

Type

str

keystore_password_uri

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

Type

str

key_password_uri

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

Type

str

truststore_password_uri

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

Type

str

cross_realm_trust_realm

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

Type

str

cross_realm_trust_kdc

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

Type

str

cross_realm_trust_admin_server

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

Type

str

cross_realm_trust_shared_password_uri

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

Type

str

kdc_db_key_uri

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

Type

str

tgt_lifetime_hours

Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.

Type

int

realm

Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.

Type

str

class google.cloud.dataproc_v1.types.SoftwareConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies the selection and config of software inside the cluster.

image_version

Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions, such as “1.2” (including a subminor version, such as “1.2.29”), or the “preview” version. If unspecified, it defaults to the latest Debian version.

Type

str

properties

Optional. The properties to set on daemon config files.

Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:

  • capacity-scheduler: capacity-scheduler.xml

  • core: core-site.xml

  • distcp: distcp-default.xml

  • hdfs: hdfs-site.xml

  • hive: hive-site.xml

  • mapred: mapred-site.xml

  • pig: pig.properties

  • spark: spark-defaults.conf

  • yarn: yarn-site.xml

For more information, see Cluster properties.

Type

Sequence[PropertiesEntry]

optional_components

Optional. The set of components to activate on the cluster.

Type

Sequence[Component]

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.LifecycleConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies the cluster auto-delete schedule configuration.

idle_delete_ttl

Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration).

Type

Duration

auto_delete_time

Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp).

Type

Timestamp

auto_delete_ttl

Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration).

Type

Duration

idle_start_time

Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp).

Type

Timestamp
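
A brief sketch of a LifecycleConfig using Duration values; the TTLs below are hypothetical and must stay within the documented 10-minute to 14-day bounds.

from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

lifecycle = dataproc_v1.types.LifecycleConfig(
    idle_delete_ttl=duration_pb2.Duration(seconds=3600),      # delete after 1 hour idle
    auto_delete_ttl=duration_pb2.Duration(seconds=8 * 3600),  # hard lifetime of 8 hours
)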

class google.cloud.dataproc_v1.types.ClusterMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Contains cluster daemon metrics, such as HDFS and YARN stats.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

hdfs_metrics

The HDFS metrics.

Type

Sequence[HdfsMetricsEntry]

yarn_metrics

The YARN metrics.

Type

Sequence[YarnMetricsEntry]

class HdfsMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class YarnMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.CreateClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to create a cluster.

project_id

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

cluster

Required. The cluster to create.

Type

Cluster

request_id

Optional. A unique id used to identify the request. If the server receives two [CreateClusterRequest][google.cloud.dataproc.v1.CreateClusterRequest] requests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Type

str
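
A sketch of issuing this request with the ClusterControllerClient; create_cluster returns a long-running operation whose result is the created Cluster. Project, region, and the cluster message are placeholders.

from google.cloud import dataproc_v1

cluster_client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

operation = cluster_client.create_cluster(
    project_id="example-project",
    region="us-central1",
    cluster=cluster,  # a Cluster message built as sketched earlier
)
result = operation.result()  # blocks until the cluster is created
print(result.cluster_name, result.status.state)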

class google.cloud.dataproc_v1.types.UpdateClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to update a cluster.

project_id

Required. The ID of the Google Cloud Platform project the cluster belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

cluster_name

Required. The cluster name.

Type

str

cluster

Required. The changes to the cluster.

Type

Cluster

graceful_decommission_timeout

Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day (see JSON representation of Duration).

Only supported on Dataproc image versions 1.2 and higher.

Type

Duration

update_mask

Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value, as follows:

{
  "config":{
    "workerConfig":{
      "numInstances":"5"
    }
  }
}

Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would be set as follows:

{
  "config":{
    "secondaryWorkerConfig":{
      "numInstances":"5"
    }
  }
}

Note: Currently, only the following fields can be updated:

Mask                                           Purpose
labels                                         Update labels
config.worker_config.num_instances             Resize primary worker group
config.secondary_worker_config.num_instances   Resize secondary worker group
config.autoscaling_config.policy_uri           Use, stop using, or change autoscaling policies
Type

FieldMask

request_id

Optional. A unique id used to identify the request. If the server receives two [UpdateClusterRequest][google.cloud.dataproc.v1.UpdateClusterRequest] requests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Type

str
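
A sketch of resizing the primary worker group using the update_mask described above; the project, region, cluster name, and target size are hypothetical.

from google.cloud import dataproc_v1
from google.protobuf import field_mask_pb2

cluster_client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

operation = cluster_client.update_cluster(
    project_id="example-project",
    region="us-central1",
    cluster_name="example-cluster",
    cluster=dataproc_v1.types.Cluster(
        config=dataproc_v1.types.ClusterConfig(
            worker_config=dataproc_v1.types.InstanceGroupConfig(num_instances=5)
        )
    ),
    update_mask=field_mask_pb2.FieldMask(
        paths=["config.worker_config.num_instances"]
    ),
)
operation.result()  # wait for the resize to complete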

class google.cloud.dataproc_v1.types.DeleteClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to delete a cluster.

project_id

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

cluster_name

Required. The cluster name.

Type

str

cluster_uuid

Optional. Specifying the cluster_uuid means the RPC should fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist.

Type

str

request_id

Optional. A unique id used to identify the request. If the server receives two [DeleteClusterRequest][google.cloud.dataproc.v1.DeleteClusterRequest] requests with the same id, then the second request will be ignored and the first [google.longrunning.Operation][google.longrunning.Operation] created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Type

str

class google.cloud.dataproc_v1.types.GetClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Request to get the resource representation for a cluster in a project.

project_id

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

cluster_name

Required. The cluster name.

Type

str

class google.cloud.dataproc_v1.types.ListClustersRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to list the clusters in a project.

project_id

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

filter

Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax:

field = value [AND [field = value]] …

where field is one of status.state, clusterName, or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be one of the following: ACTIVE, INACTIVE, CREATING, RUNNING, ERROR, DELETING, or UPDATING. ACTIVE contains the CREATING, UPDATING, and RUNNING states. INACTIVE contains the DELETING and ERROR states. clusterName is the name of the cluster provided at creation time. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator.

Example filter:

status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = *

Type

str

page_size

Optional. The standard List page size.

Type

int

page_token

Optional. The standard List page token.

Type

str
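
A brief sketch of listing clusters with the filter syntax shown above; the client returns a pager over Cluster messages. The project, region, and label values are placeholders.

from google.cloud import dataproc_v1

cluster_client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

for cluster in cluster_client.list_clusters(
    project_id="example-project",
    region="us-central1",
    filter="status.state = ACTIVE AND labels.env = staging",
):
    print(cluster.cluster_name, cluster.status.state)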

class google.cloud.dataproc_v1.types.ListClustersResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The list of all clusters in a project.

clusters

Output only. The clusters in the project.

Type

Sequence[Cluster]

next_page_token

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListClustersRequest.

Type

str

class google.cloud.dataproc_v1.types.DiagnoseClusterRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to collect cluster diagnostic information.

project_id

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

cluster_name

Required. The cluster name.

Type

str

class google.cloud.dataproc_v1.types.DiagnoseClusterResults(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The location of diagnostic output.

output_uri

Output only. The Cloud Storage URI of the diagnostic output. The output report is a plain text file with a summary of collected diagnostics.

Type

str

class google.cloud.dataproc_v1.types.ReservationAffinity(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Reservation Affinity for consuming Zonal reservation.

consume_reservation_type

Optional. Type of reservation to consume.

Type

Type

key

Optional. Corresponds to the label key of reservation resource.

Type

str

values

Optional. Corresponds to the label values of reservation resource.

Type

Sequence[str]

class Type[source]

Bases: proto.enums.Enum

Indicates whether to consume capacity from a reservation or not.

class google.cloud.dataproc_v1.types.LoggingConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The runtime logging config of the job.

driver_log_levels

The per-package log levels for the driver. This may include “root” package name to configure rootLogger. Examples:

  • ‘com.google = FATAL’

  • ‘root = INFO’

  • ‘org.apache = DEBUG’

Type

Sequence[DriverLogLevelsEntry]

class DriverLogLevelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class Level[source]

Bases: proto.enums.Enum

The Log4j level for job execution. When running an Apache Hive job, Cloud Dataproc configures the Hive client to an equivalent verbosity level.
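
A minimal sketch of a LoggingConfig with per-package driver log levels; the package names are illustrative.

from google.cloud import dataproc_v1

logging_config = dataproc_v1.types.LoggingConfig(
    driver_log_levels={
        "root": dataproc_v1.types.LoggingConfig.Level.INFO,
        "org.apache": dataproc_v1.types.LoggingConfig.Level.DEBUG,
    }
)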

class google.cloud.dataproc_v1.types.HadoopJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Apache Hadoop MapReduce jobs on Apache Hadoop YARN.

main_jar_file_uri

The HCFS URI of the jar file containing the main class. Examples:

  • ‘gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar’

  • ‘hdfs:/tmp/test-samples/custom-wordcount.jar’

  • ‘file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar’

Type

str

main_class

The name of the driver’s main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.

Type

str

args

Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

Type

Sequence[str]

jar_file_uris

Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.

Type

Sequence[str]

file_uris

Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

Type

Sequence[str]

archive_uris

Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.

Type

Sequence[str]

properties

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.

Type

Sequence[PropertiesEntry]

logging_config

Optional. The runtime log config for job execution.

Type

LoggingConfig

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.SparkJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Apache Spark applications on YARN.

main_jar_file_uri

The HCFS URI of the jar file that contains the main class.

Type

str

main_class

The name of the driver’s main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.

Type

str

args

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

Type

Sequence[str]

jar_file_uris

Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

Type

Sequence[str]

file_uris

Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.

Type

Sequence[str]

archive_uris

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Type

Sequence[str]

properties

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

Type

Sequence[PropertiesEntry]

logging_config

Optional. The runtime log config for job execution.

Type

LoggingConfig

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.PySparkJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Apache PySpark applications on YARN.

main_python_file_uri

Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.

Type

str

args

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

Type

Sequence[str]

python_file_uris

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

Type

Sequence[str]

jar_file_uris

Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.

Type

Sequence[str]

file_uris

Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.

Type

Sequence[str]

archive_uris

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Type

Sequence[str]

properties

Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

Type

Sequence[PropertiesEntry]

logging_config

Optional. The runtime log config for job execution.

Type

LoggingConfig

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.QueryList(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A list of queries to run on a cluster.

queries

Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:

"hiveJob": {
  "queryList": {
    "queries": [
      "query1",
      "query2",
      "query3;query4",
    ]
  }
}
Type

Sequence[str]
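
The same shape expressed with this library's types rather than the raw JSON snippet above; a minimal sketch with placeholder query strings.

from google.cloud import dataproc_v1

hive_job = dataproc_v1.types.HiveJob(
    query_list=dataproc_v1.types.QueryList(
        queries=[
            "query1",
            "query2",
            "query3;query4",
        ]
    )
)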

class google.cloud.dataproc_v1.types.HiveJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Apache Hive queries on YARN.

query_file_uri

The HCFS URI of the script that contains Hive queries.

Type

str

query_list

A list of queries.

Type

QueryList

continue_on_failure

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

Type

bool

script_variables

Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).

Type

Sequence[ScriptVariablesEntry]

properties

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.

Type

Sequence[PropertiesEntry]

jar_file_uris

Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

Type

Sequence[str]

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class ScriptVariablesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.SparkSqlJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Apache Spark SQL queries.

query_file_uri

The HCFS URI of the script that contains SQL queries.

Type

str

query_list

A list of queries.

Type

QueryList

script_variables

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

Type

Sequence[ScriptVariablesEntry]

properties

Optional. A mapping of property names to values, used to configure Spark SQL’s SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.

Type

Sequence[PropertiesEntry]

jar_file_uris

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

Type

Sequence[str]

logging_config

Optional. The runtime log config for job execution.

Type

LoggingConfig

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class ScriptVariablesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.PigJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Apache Pig queries on YARN.

query_file_uri

The HCFS URI of the script that contains the Pig queries.

Type

str

query_list

A list of queries.

Type

QueryList

continue_on_failure

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

Type

bool

script_variables

Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).

Type

Sequence[ScriptVariablesEntry]

properties

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.

Type

Sequence[PropertiesEntry]

jar_file_uris

Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.

Type

Sequence[str]

logging_config

Optional. The runtime log config for job execution.

Type

LoggingConfig

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class ScriptVariablesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.SparkRJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Apache SparkR applications on YARN.

main_r_file_uri

Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.

Type

str

args

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

Type

Sequence[str]

file_uris

Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.

Type

Sequence[str]

archive_uris

Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

Type

Sequence[str]

properties

Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

Type

Sequence[PropertiesEntry]

logging_config

Optional. The runtime log config for job execution.

Type

LoggingConfig

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.PrestoJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job for running Presto queries. IMPORTANT: The Dataproc Presto Optional Component must be enabled when the cluster is created to submit a Presto job to the cluster.

query_file_uri

The HCFS URI of the script that contains SQL queries.

Type

str

query_list

A list of queries.

Type

QueryList

continue_on_failure

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

Type

bool

output_format

Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats.

Type

str

client_tags

Optional. Presto client tags to attach to this query.

Type

Sequence[str]

properties

Optional. A mapping of property names to values. Used to set Presto session properties. Equivalent to using the --session flag in the Presto CLI.

Type

Sequence[PropertiesEntry]

logging_config

Optional. The runtime log config for job execution.

Type

LoggingConfig

class PropertiesEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.JobPlacement(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Dataproc job config.

cluster_name

Required. The name of the cluster where the job will be submitted.

Type

str

cluster_uuid

Output only. A cluster UUID generated by the Dataproc service when the job is submitted.

Type

str

class google.cloud.dataproc_v1.types.JobStatus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Dataproc job status.

state

Output only. A state message specifying the overall job state.

Type

State

details

Optional. Output only. Job state details, such as an error description if the state is ERROR.

Type

str

state_start_time

Output only. The time when this state was entered.

Type

Timestamp

substate

Output only. Additional state information, which includes status reported by the agent.

Type

Substate

class State[source]

Bases: proto.enums.Enum

The job state.

class Substate[source]

Bases: proto.enums.Enum

The job substate.

class google.cloud.dataproc_v1.types.JobReference(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Encapsulates the full scoping used to reference a job.

project_id

Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.

Type

str

job_id

Optional. The job ID, which must be unique within the project.

The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters.

If not specified by the caller, the job ID will be provided by the server.

Type

str

class google.cloud.dataproc_v1.types.YarnApplication(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

name

Required. The application name.

Type

str

state

Required. The application state.

Type

State

progress

Required. The numerical progress of the application, from 1 to 100.

Type

float

tracking_url

Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.

Type

str

class State[source]

Bases: proto.enums.Enum

The application state, corresponding to YarnProtos.YarnApplicationStateProto.

class google.cloud.dataproc_v1.types.Job(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc job resource.

reference

Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.

Type

JobReference

placement

Required. Job information, including how, when, and where to run the job.

Type

JobPlacement

hadoop_job

Optional. Job is a Hadoop job.

Type

HadoopJob

spark_job

Optional. Job is a Spark job.

Type

SparkJob

pyspark_job

Optional. Job is a PySpark job.

Type

PySparkJob

hive_job

Optional. Job is a Hive job.

Type

HiveJob

pig_job

Optional. Job is a Pig job.

Type

PigJob

spark_r_job

Optional. Job is a SparkR job.

Type

SparkRJob

spark_sql_job

Optional. Job is a SparkSql job.

Type

SparkSqlJob

presto_job

Optional. Job is a Presto job.

Type

PrestoJob

status

Output only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.

Type

JobStatus

status_history

Output only. The previous job status.

Type

Sequence[JobStatus]

yarn_applications

Output only. The collection of YARN applications spun up by this job.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Type

Sequence[YarnApplication]

driver_output_resource_uri

Output only. A URI pointing to the location of the stdout of the job’s driver program.

Type

str

driver_control_files_uri

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

Type

str

labels

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.

Type

Sequence[LabelsEntry]

scheduling

Optional. Job scheduling configuration.

Type

JobScheduling

job_uuid

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.

Type

str

done

Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field will indicate whether it was successful, failed, or cancelled.

Type

bool

class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message
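
A minimal sketch of constructing a Job message with one of the job-type fields set, plus a placement, labels, and scheduling. The cluster name, bucket URIs, and label values are placeholders, not values taken from this documentation; the job-type fields (hadoop_job, spark_job, pyspark_job, and so on) are alternatives, so a job carries exactly one of them.

    from google.cloud import dataproc_v1

    job = dataproc_v1.Job(
        placement=dataproc_v1.JobPlacement(cluster_name="example-cluster"),  # placeholder
        pyspark_job=dataproc_v1.PySparkJob(
            main_python_file_uri="gs://example-bucket/word_count.py",  # placeholder URI
            args=["gs://example-bucket/input.txt"],
        ),
        labels={"env": "staging"},
        scheduling=dataproc_v1.JobScheduling(max_failures_per_hour=2),
    )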

class google.cloud.dataproc_v1.types.JobScheduling(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Job scheduling options.

max_failures_per_hour

Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed.

A job may be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window.

Maximum value is 10.

Type

int

class google.cloud.dataproc_v1.types.SubmitJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to submit a job.

project_id

Required. The ID of the Google Cloud Platform project that the job belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

job

Required. The job resource.

Type

Job

request_id

Optional. A unique id used to identify the request. If the server receives two [SubmitJobRequest][google.cloud.dataproc.v1.SubmitJobRequest] requests with the same id, then the second request will be ignored and the first [Job][google.cloud.dataproc.v1.Job] created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Type

str
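
A minimal sketch of submitting a job with this request type, assuming the library's JobControllerClient and the regional service endpoint. The project ID and region are placeholders, and job stands for a Job message such as the one sketched above.

    import uuid

    from google.cloud import dataproc_v1

    region = "us-central1"  # placeholder region
    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    request = dataproc_v1.SubmitJobRequest(
        project_id="example-project",  # placeholder project ID
        region=region,
        job=job,                       # a previously constructed Job message
        request_id=str(uuid.uuid4()),  # lets the server de-duplicate retried requests
    )
    submitted = client.submit_job(request=request)
    print(submitted.reference.job_id)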

class google.cloud.dataproc_v1.types.JobMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Job Operation metadata.

job_id

Output only. The job id.

Type

str

status

Output only. Most recent job status.

Type

JobStatus

operation_type

Output only. Operation type.

Type

str

start_time

Output only. Job submission time.

Type

Timestamp

class google.cloud.dataproc_v1.types.GetJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to get the resource representation for a job in a project.

project_id

Required. The ID of the Google Cloud Platform project that the job belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

job_id

Required. The job ID.

Type

str

class google.cloud.dataproc_v1.types.ListJobsRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to list jobs in a project.

project_id

Required. The ID of the Google Cloud Platform project that the job belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

page_size

Optional. The number of results to return in each response.

Type

int

page_token

Optional. The page token, returned by a previous call, to request the next page of results.

Type

str

cluster_name

Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster.

Type

str

job_state_matcher

Optional. Specifies enumerated categories of jobs to list. (default = match ALL jobs).

If filter is provided, jobStateMatcher will be ignored.

Type

JobStateMatcher

filter

Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax:

[field = value] AND [field [= value]] …

where field is status.state or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be either ACTIVE or NON_ACTIVE. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator.

Example filter:

status.state = ACTIVE AND labels.env = staging AND labels.starred = *

Type

str

class JobStateMatcher[source]

Bases: proto.enums.Enum

A matcher that specifies categories of job states.
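
A minimal sketch of listing jobs with a filter, assuming the library's JobControllerClient; the project ID, region, and label values are placeholders. The returned pager follows page_token automatically.

    from google.cloud import dataproc_v1

    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )
    # Filter syntax as described above: only active jobs labelled env=staging.
    for job in client.list_jobs(
        request={
            "project_id": "example-project",  # placeholder
            "region": "us-central1",
            "filter": "status.state = ACTIVE AND labels.env = staging",
        }
    ):
        print(job.reference.job_id, job.status.state.name)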

class google.cloud.dataproc_v1.types.UpdateJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to update a job.

project_id

Required. The ID of the Google Cloud Platform project that the job belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

job_id

Required. The job ID.

Type

str

job

Required. The changes to the job.

Type

Job

update_mask

Required. Specifies the path, relative to Job, of the field to update. For example, to update the labels of a Job the update_mask parameter would be specified as labels, and the PATCH request body would specify the new value. Note: Currently, labels is the only field that can be updated.

Type

FieldMask
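
A minimal sketch of an update request, assuming the library's JobControllerClient; the identifiers are placeholders. Since labels is currently the only updatable field, the update_mask names that single path.

    from google.cloud import dataproc_v1
    from google.protobuf import field_mask_pb2

    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )
    updated = client.update_job(
        request=dataproc_v1.UpdateJobRequest(
            project_id="example-project",  # placeholder
            region="us-central1",
            job_id="example-job-id",       # placeholder
            job=dataproc_v1.Job(labels={"env": "production"}),
            update_mask=field_mask_pb2.FieldMask(paths=["labels"]),
        )
    )
    print(updated.labels)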

class google.cloud.dataproc_v1.types.ListJobsResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A list of jobs in a project.

jobs

Output only. Jobs list.

Type

Sequence[Job]

next_page_token

Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListJobsRequest.

Type

str

class google.cloud.dataproc_v1.types.CancelJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to cancel a job.

project_id

Required. The ID of the Google Cloud Platform project that the job belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

job_id

Required. The job ID.

Type

str

class google.cloud.dataproc_v1.types.DeleteJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to delete a job.

project_id

Required. The ID of the Google Cloud Platform project that the job belongs to.

Type

str

region

Required. The Dataproc region in which to handle the request.

Type

str

job_id

Required. The job ID.

Type

str

class google.cloud.dataproc_v1.types.ClusterOperationStatus(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The status of the operation.

state

Output only. A message containing the operation state.

Type

State

inner_state

Output only. A message containing the detailed operation state.

Type

str

details

Output only. A message containing any operation metadata details.

Type

str

state_start_time

Output only. The time this state was entered.

Type

Timestamp

class State[source]

Bases: proto.enums.Enum

The operation state.

class google.cloud.dataproc_v1.types.ClusterOperationMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata describing the operation.

cluster_name

Output only. Name of the cluster for the operation.

Type

str

cluster_uuid

Output only. Cluster UUID for the operation.

Type

str

status

Output only. Current operation status.

Type

ClusterOperationStatus

status_history

Output only. The previous operation status.

Type

Sequence[ClusterOperationStatus]

operation_type

Output only. The operation type.

Type

str

description

Output only. Short description of operation.

Type

str

labels

Output only. Labels associated with the operation.

Type

Sequence[LabelsEntry]

warnings

Output only. Errors encountered during operation execution.

Type

Sequence[str]

class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.WorkflowTemplate(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A Dataproc workflow template resource.

id
Type

str

name

Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

  • For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}

Type

str

version

Optional. Used to perform a consistent read-modify-write.

This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.

Type

int

create_time

Output only. The time template was created.

Type

Timestamp

update_time

Output only. The time template was last updated.

Type

Timestamp

labels

Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.

Label keys must contain 1 to 63 characters, and must conform to RFC 1035.

Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035.

No more than 32 labels can be associated with a template.

Type

Sequence[LabelsEntry]

placement

Required. WorkflowTemplate scheduling information.

Type

WorkflowTemplatePlacement

jobs

Required. The Directed Acyclic Graph of Jobs to submit.

Type

Sequence[OrderedJob]

parameters

Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.

Type

Sequence[TemplateParameter]

class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message
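
A minimal sketch of building a workflow template with a managed cluster and a single job, assuming the message types documented in this module; the template id, cluster name, zone, and jar path are placeholders.

    from google.cloud import dataproc_v1

    template = dataproc_v1.WorkflowTemplate(
        id="example-template",  # placeholder template id
        placement=dataproc_v1.WorkflowTemplatePlacement(
            managed_cluster=dataproc_v1.ManagedCluster(
                cluster_name="example-wf",  # cluster name prefix; see ManagedCluster below
                config=dataproc_v1.ClusterConfig(
                    gce_cluster_config=dataproc_v1.GceClusterConfig(zone_uri="us-central1-a"),
                ),
            )
        ),
        jobs=[
            dataproc_v1.OrderedJob(
                step_id="teragen",
                hadoop_job=dataproc_v1.HadoopJob(
                    main_jar_file_uri="file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
                    args=["teragen", "1000", "hdfs:///gen/"],
                ),
            )
        ],
        labels={"env": "staging"},
    )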

class google.cloud.dataproc_v1.types.WorkflowTemplatePlacement(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Specifies workflow execution target.

Either managed_cluster or cluster_selector is required.

managed_cluster

A cluster that is managed by the workflow.

Type

ManagedCluster

cluster_selector

Optional. A selector that chooses target cluster for jobs based on metadata.

The selector is evaluated at the time each job is submitted.

Type

ClusterSelector

class google.cloud.dataproc_v1.types.ManagedCluster(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Cluster that is managed by the workflow.

cluster_name

Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.

Type

str

config

Required. The cluster configuration.

Type

ClusterConfig

labels

Optional. The labels to associate with this cluster.

Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given cluster.

Type

Sequence[LabelsEntry]

class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class google.cloud.dataproc_v1.types.ClusterSelector(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A selector that chooses target cluster for jobs based on metadata.

zone

Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.

Type

str

cluster_labels

Required. The cluster labels. Cluster must have all labels to match.

Type

Sequence[ClusterLabelsEntry]

class ClusterLabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message
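
A minimal sketch of the second placement option: instead of a managed cluster created and deleted by the workflow, a cluster selector routes the workflow's jobs to an existing cluster chosen by label. The labels and zone are placeholders.

    from google.cloud import dataproc_v1

    # Route the workflow's jobs to an existing cluster carrying all of these labels.
    placement = dataproc_v1.WorkflowTemplatePlacement(
        cluster_selector=dataproc_v1.ClusterSelector(
            zone="us-central1-a",               # optional; does not affect selection
            cluster_labels={"env": "staging"},  # placeholder labels
        )
    )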

class google.cloud.dataproc_v1.types.OrderedJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A job executed by the workflow.

step_id

Required. The step id. The id must be unique among all jobs within the template.

The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in the [prerequisiteStepIds][google.cloud.dataproc.v1.OrderedJob.prerequisite_step_ids] field from other steps.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

Type

str

hadoop_job

Optional. Job is a Hadoop job.

Type

HadoopJob

spark_job

Optional. Job is a Spark job.

Type

SparkJob

pyspark_job

Optional. Job is a PySpark job.

Type

PySparkJob

hive_job

Optional. Job is a Hive job.

Type

HiveJob

pig_job

Optional. Job is a Pig job.

Type

PigJob

spark_r_job

Optional. Job is a SparkR job.

Type

SparkRJob

spark_sql_job

Optional. Job is a SparkSql job.

Type

SparkSqlJob

presto_job

Optional. Job is a Presto job.

Type

PrestoJob

labels

Optional. The labels to associate with this job.

Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given job.

Type

Sequence[LabelsEntry]

scheduling

Optional. Job scheduling configuration.

Type

JobScheduling

prerequisite_step_ids

Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.

Type

Sequence[str]

class LabelsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message
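
A minimal sketch of a two-step DAG built from ordered jobs, where the second step declares the first as a prerequisite; the step ids and URIs are placeholders.

    from google.cloud import dataproc_v1

    prepare = dataproc_v1.OrderedJob(
        step_id="prepare",
        pyspark_job=dataproc_v1.PySparkJob(
            main_python_file_uri="gs://example-bucket/prepare.py",  # placeholder
        ),
    )
    report = dataproc_v1.OrderedJob(
        step_id="report",
        pyspark_job=dataproc_v1.PySparkJob(
            main_python_file_uri="gs://example-bucket/report.py",  # placeholder
        ),
        prerequisite_step_ids=["prepare"],  # runs only after the "prepare" step
    )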

class google.cloud.dataproc_v1.types.TemplateParameter(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A configurable parameter that replaces one or more fields in the template. Parameterizable fields:

  • Labels

  • File uris

  • Job properties

  • Job arguments

  • Script variables

  • Main class (in HadoopJob and SparkJob)

  • Zone (in ClusterSelector)

name

Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.

Type

str

fields

Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter’s list of field paths.

A field path is similar in syntax to a [google.protobuf.FieldMask][google.protobuf.FieldMask]. For example, a field path that references the zone field of a workflow template’s cluster selector would be specified as placement.clusterSelector.zone.

Also, field paths can reference fields using the following syntax:

  • Values in maps can be referenced by key:

    • labels['key']

    • placement.clusterSelector.clusterLabels['key']

    • placement.managedCluster.labels['key']

    • jobs['step-id'].labels['key']

  • Jobs in the jobs list can be referenced by step-id:

    • jobs['step-id'].hadoopJob.mainJarFileUri

    • jobs['step-id'].hiveJob.queryFileUri

    • jobs['step-id'].pySparkJob.mainPythonFileUri

    • jobs['step-id'].hadoopJob.jarFileUris[0]

    • jobs['step-id'].hadoopJob.archiveUris[0]

    • jobs['step-id'].hadoopJob.fileUris[0]

    • jobs['step-id'].pySparkJob.pythonFileUris[0]

  • Items in repeated fields can be referenced by a zero-based index:

    • jobs['step-id'].sparkJob.args[0]

  • Other examples:

    • jobs['step-id'].hadoopJob.properties['key']

    • jobs['step-id'].hadoopJob.args[0]

    • jobs['step-id'].hiveJob.scriptVariables['key']

    • jobs['step-id'].hadoopJob.mainJarFileUri

    • placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

  • placement.clusterSelector.clusterLabels

  • jobs['step-id'].sparkJob.args

Type

Sequence[str]

description

Optional. Brief description of the parameter. Must not exceed 1024 characters.

Type

str

validation

Optional. Validation rules to be applied to this parameter’s value.

Type

ParameterValidation

class google.cloud.dataproc_v1.types.ParameterValidation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Configuration for parameter validation.

regex

Validation based on regular expressions.

Type

RegexValidation

values

Validation based on a list of allowed values.

Type

ValueValidation

class google.cloud.dataproc_v1.types.RegexValidation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Validation based on regular expressions.

regexes

Required. RE2 regular expressions used to validate the parameter’s value. The value must match the regex in its entirety (substring matches are not sufficient).

Type

Sequence[str]

class google.cloud.dataproc_v1.types.ValueValidation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Validation based on a list of allowed values.

values

Required. List of allowed values for the parameter.

Type

Sequence[str]
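
A minimal sketch of a template parameter with regex validation, using one of the field paths listed above; the parameter name, description, and regex are placeholders.

    from google.cloud import dataproc_v1

    zone_param = dataproc_v1.TemplateParameter(
        name="ZONE",
        fields=["placement.clusterSelector.zone"],
        description="Zone in which to run the workflow.",
        validation=dataproc_v1.ParameterValidation(
            regex=dataproc_v1.RegexValidation(regexes=[r"us-central1-[abcf]"]),
        ),
    )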

class google.cloud.dataproc_v1.types.WorkflowMetadata(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

Metadata describing a Dataproc workflow instantiated from a workflow template.

template

Output only. The resource name of the workflow template as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

  • For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}

Type

str

version

Output only. The version of template at the time of workflow instantiation.

Type

int

create_cluster

Output only. The create cluster operation metadata.

Type

ClusterOperation

graph

Output only. The workflow graph.

Type

WorkflowGraph

delete_cluster

Output only. The delete cluster operation metadata.

Type

ClusterOperation

state

Output only. The workflow state.

Type

State

cluster_name

Output only. The name of the target cluster.

Type

str

parameters

Map from parameter names to values that were used for those parameters.

Type

Sequence[ParametersEntry]

start_time

Output only. Workflow start time.

Type

Timestamp

end_time

Output only. Workflow end time.

Type

Timestamp

cluster_uuid

Output only. The UUID of target cluster.

Type

str

class ParametersEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message

class State[source]

Bases: proto.enums.Enum

The operation state.

class google.cloud.dataproc_v1.types.ClusterOperation(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The cluster operation triggered by a workflow.

operation_id

Output only. The id of the cluster operation.

Type

str

error

Output only. Error, if operation failed.

Type

str

done

Output only. Indicates the operation is done.

Type

bool

class google.cloud.dataproc_v1.types.WorkflowGraph(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The workflow graph.

nodes

Output only. The workflow nodes.

Type

Sequence[WorkflowNode]

class google.cloud.dataproc_v1.types.WorkflowNode(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

The workflow node.

step_id

Output only. The name of the node.

Type

str

prerequisite_step_ids

Output only. Node’s prerequisite nodes.

Type

Sequence[str]

job_id

Output only. The job id; populated after the node enters RUNNING state.

Type

str

state

Output only. The node state.

Type

NodeState

error

Output only. The error detail.

Type

str

class NodeState[source]

Bases: proto.enums.Enum

The workflow node state.

class google.cloud.dataproc_v1.types.CreateWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to create a workflow template.

parent

Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates.create, the resource name of the region has the following format: projects/{project_id}/regions/{region}

  • For projects.locations.workflowTemplates.create, the resource name of the location has the following format: projects/{project_id}/locations/{location}

Type

str

template

Required. The Dataproc workflow template to create.

Type

WorkflowTemplate
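
A minimal sketch of creating a template, assuming the library's WorkflowTemplateServiceClient and the regional endpoint; the project ID and region are placeholders, and template stands for a WorkflowTemplate message such as the one sketched earlier.

    from google.cloud import dataproc_v1

    region = "us-central1"  # placeholder region
    client = dataproc_v1.WorkflowTemplateServiceClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    created = client.create_workflow_template(
        request=dataproc_v1.CreateWorkflowTemplateRequest(
            parent=f"projects/example-project/regions/{region}",  # placeholder project
            template=template,  # a previously built WorkflowTemplate message
        )
    )
    print(created.name, created.version)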

class google.cloud.dataproc_v1.types.GetWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to fetch a workflow template.

name

Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

  • For projects.locations.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}

Type

str

version

Optional. The version of workflow template to retrieve. Only previously instantiated versions can be retrieved. If unspecified, retrieves the current version.

Type

int

class google.cloud.dataproc_v1.types.InstantiateWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to instantiate a workflow template.

name

Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates.instantiate, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

  • For projects.locations.workflowTemplates.instantiate, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}

Type

str

version

Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version. This option cannot be used to instantiate a previous version of workflow template.

Type

int

request_id

Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.

It is recommended to always set this value to a UUID.

The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Type

str

parameters

Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 100 characters.

Type

Sequence[ParametersEntry]

class ParametersEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Bases: proto.message.Message
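
A minimal sketch of instantiating a template and waiting on the returned long-running operation, whose metadata is the WorkflowMetadata message documented above; the resource name and parameter values are placeholders.

    import uuid

    from google.cloud import dataproc_v1

    client = dataproc_v1.WorkflowTemplateServiceClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )
    operation = client.instantiate_workflow_template(
        request=dataproc_v1.InstantiateWorkflowTemplateRequest(
            name="projects/example-project/regions/us-central1/workflowTemplates/example-template",
            request_id=str(uuid.uuid4()),          # guards against duplicate instantiations
            parameters={"ZONE": "us-central1-a"},  # values for declared template parameters
        )
    )
    operation.result()             # block until the workflow finishes
    metadata = operation.metadata  # WorkflowMetadata for this run
    print(metadata.state, metadata.cluster_name)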

class google.cloud.dataproc_v1.types.InstantiateInlineWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to instantiate an inline workflow template.

parent

Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates.instantiateinline, the resource name of the region has the following format: projects/{project_id}/regions/{region}

  • For projects.locations.workflowTemplates.instantiateinline, the resource name of the location has the following format: projects/{project_id}/locations/{location}

Type

str

template

Required. The workflow template to instantiate.

Type

WorkflowTemplate

request_id

Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.

It is recommended to always set this value to a UUID.

The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Type

str

class google.cloud.dataproc_v1.types.UpdateWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to update a workflow template.

template

Required. The updated workflow template.

The template.version field must match the current version.

Type

WorkflowTemplate
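
A minimal sketch of the read-modify-write flow described under WorkflowTemplate.version: fetch the current template (which carries the server version), change a field locally, and send the whole template back; the resource name is a placeholder.

    from google.cloud import dataproc_v1

    client = dataproc_v1.WorkflowTemplateServiceClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )
    name = "projects/example-project/regions/us-central1/workflowTemplates/example-template"
    current = client.get_workflow_template(request={"name": name})  # version is populated
    current.labels["env"] = "production"                            # local modification
    updated = client.update_workflow_template(request={"template": current})
    print(updated.version)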

class google.cloud.dataproc_v1.types.ListWorkflowTemplatesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to list workflow templates in a project.

parent

Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates.list, the resource name of the region has the following format: projects/{project_id}/regions/{region}

  • For projects.locations.workflowTemplates.list, the resource name of the location has the following format: projects/{project_id}/locations/{location}

Type

str

page_size

Optional. The maximum number of results to return in each response.

Type

int

page_token

Optional. The page token, returned by a previous call, to request the next page of results.

Type

str

class google.cloud.dataproc_v1.types.ListWorkflowTemplatesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A response to a request to list workflow templates in a project.

templates

Output only. WorkflowTemplates list.

Type

Sequence[WorkflowTemplate]

next_page_token

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListWorkflowTemplatesRequest.

Type

str
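
A minimal sketch of listing templates, assuming the library's WorkflowTemplateServiceClient; the parent resource name is a placeholder. The returned pager follows next_page_token automatically.

    from google.cloud import dataproc_v1

    client = dataproc_v1.WorkflowTemplateServiceClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )
    for tmpl in client.list_workflow_templates(
        request={"parent": "projects/example-project/regions/us-central1", "page_size": 50}
    ):
        print(tmpl.name, tmpl.version)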

class google.cloud.dataproc_v1.types.DeleteWorkflowTemplateRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)[source]

Bases: proto.message.Message

A request to delete a workflow template. Currently started workflows will remain running.

name

Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

  • For projects.locations.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}

Type

str

version

Optional. The version of workflow template to delete. If specified, will only delete the template if the current server version matches specified version.

Type

int