Members
(static) JobStateMatcher :number
A matcher that specifies categories of job states.
Properties:
| Name | Type | Description |
|---|---|---|
| ALL | number | Match all jobs, regardless of state. |
| ACTIVE | number | Only match jobs in non-terminal states: PENDING, RUNNING, or CANCEL_PENDING. |
| NON_ACTIVE | number | Only match jobs in terminal states: CANCELLED, DONE, or ERROR. |
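For example, a ListJobsRequest can pass a JobStateMatcher by its enum name to restrict results to active jobs. A minimal sketch, assuming the @google-cloud/dataproc Node.js client; the project ID and region are placeholders:

```js
// Minimal sketch: list only jobs in non-terminal states via JobStateMatcher.
const dataproc = require('@google-cloud/dataproc');

const jobClient = new dataproc.v1.JobControllerClient();

async function listActiveJobs() {
  const [jobs] = await jobClient.listJobs({
    projectId: 'my-project',
    region: 'us-central1',
    jobStateMatcher: 'ACTIVE', // enum values may be passed by name
  });
  for (const job of jobs) {
    console.log(job.reference.jobId, job.status.state);
  }
}
```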
(static) Level :number
The Log4j level for job execution. When running an Apache Hive job, Cloud Dataproc configures the Hive client to an equivalent verbosity level.
Properties:
| Name | Type | Description |
|---|---|---|
| LEVEL_UNSPECIFIED | number | Level is unspecified. Use default level for log4j. |
| ALL | number | Use ALL level for log4j. |
| TRACE | number | Use TRACE level for log4j. |
| DEBUG | number | Use DEBUG level for log4j. |
| INFO | number | Use INFO level for log4j. |
| WARN | number | Use WARN level for log4j. |
| ERROR | number | Use ERROR level for log4j. |
| FATAL | number | Use FATAL level for log4j. |
| OFF | number | Turn off log4j. |
(static) NodeState :number
The workflow node state.
Properties:
| Name | Type | Description |
|---|---|---|
| NODE_STATUS_UNSPECIFIED | number | State is unspecified. |
| BLOCKED | number | The node is awaiting a prerequisite node to finish. |
| RUNNABLE | number | The node is runnable but not running. |
| RUNNING | number | The node is running. |
| COMPLETED | number | The node completed successfully. |
| FAILED | number | The node failed. A node can be marked FAILED because its ancestor or peer failed. |
(static) State :number
The job state.
Properties:
| Name | Type | Description |
|---|---|---|
| STATE_UNSPECIFIED | number | The job state is unknown. |
| PENDING | number | The job is pending; it has been submitted, but is not yet running. |
| SETUP_DONE | number | Job has been received by the service and completed initial setup; it will soon be submitted to the cluster. |
| RUNNING | number | The job is running on the cluster. |
| CANCEL_PENDING | number | A CancelJob request has been received, but is pending. |
| CANCEL_STARTED | number | Transient in-flight resources have been canceled, and the request to cancel the running job has been issued to the cluster. |
| CANCELLED | number | The job cancellation was successful. |
| DONE | number | The job has completed successfully. |
| ERROR | number | The job has completed, but encountered an error. |
| ATTEMPT_FAILURE | number | Job attempt has failed. The detail field contains failure details for this attempt. Applies to restartable jobs only. |
(static) State :number
The operation state.
Properties:
| Name | Type | Description |
|---|---|---|
| UNKNOWN | number | Unused. |
| PENDING | number | The operation has been created. |
| RUNNING | number | The operation is running. |
| DONE | number | The operation is done; either cancelled or completed. |
(static) State :number
The application state, corresponding to
YarnProtos.YarnApplicationStateProto.
Properties:
| Name | Type | Description |
|---|---|---|
| STATE_UNSPECIFIED | number | Status is unspecified. |
| NEW | number | Status is NEW. |
| NEW_SAVING | number | Status is NEW_SAVING. |
| SUBMITTED | number | Status is SUBMITTED. |
| ACCEPTED | number | Status is ACCEPTED. |
| RUNNING | number | Status is RUNNING. |
| FINISHED | number | Status is FINISHED. |
| FAILED | number | Status is FAILED. |
| KILLED | number | Status is KILLED. |
(static) State :number
The cluster state.
Properties:
| Name | Type | Description |
|---|---|---|
| UNKNOWN | number | The cluster state is unknown. |
| CREATING | number | The cluster is being created and set up. It is not ready for use. |
| RUNNING | number | The cluster is currently running and healthy. It is ready for use. |
| ERROR | number | The cluster encountered an error. It is not ready for use. |
| DELETING | number | The cluster is being deleted. It cannot be used. |
| UPDATING | number | The cluster is being updated. It continues to accept and process jobs. |
(static) Substate :number
The cluster substate.
Properties:
| Name | Type | Description |
|---|---|---|
| UNSPECIFIED | number | The cluster substate is unknown. |
| UNHEALTHY | number | The cluster is known to be in an unhealthy state (for example, critical daemons are not running or HDFS capacity is exhausted). Applies to RUNNING state. |
| STALE_STATUS | number | The agent-reported status is out of date (may occur if Cloud Dataproc loses communication with the agent). Applies to RUNNING state. |
(static) Substate :number
The job substate.
Properties:
| Name | Type | Description |
|---|---|---|
| UNSPECIFIED | number | The job substate is unknown. |
| SUBMITTED | number | The job is submitted to the agent. Applies to RUNNING state. |
| QUEUED | number | The job has been received and is awaiting execution (it may be waiting for a condition to be met). See the "details" field for the reason for the delay. Applies to RUNNING state. |
| STALE_STATUS | number | The agent-reported status is out of date, which may be caused by a loss of communication between the agent and Cloud Dataproc. If the agent does not send a timely update, the job will fail. Applies to RUNNING state. |
(static) Type :number
Indicates whether to consume capacity from a reservation or not.
Properties:
| Name | Type | Description |
|---|---|---|
| TYPE_UNSPECIFIED | number | |
| NO_RESERVATION | number | Do not consume from any allocated capacity. |
| ANY_RESERVATION | number | Consume any reservation available. |
| SPECIFIC_RESERVATION | number | Must consume from a specific reservation; the key and values fields must identify that reservation. |
Type Definitions
AcceleratorConfig
Specifies the type and number of accelerator cards attached to the instances of an instance group (see GPUs on Compute Engine).
Properties:
| Name | Type | Description |
|---|---|---|
| acceleratorTypeUri | string | Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes. Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80. |
| acceleratorCount | number | The number of the accelerator cards of this type exposed to this instance. |
AutoscalingConfig
Autoscaling Policy config associated with the cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| policyUri | string | Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Note that the policy must be in the same project and Cloud Dataproc region. |
AutoscalingPolicy
Describes an autoscaling policy for Dataproc cluster autoscaler.
Properties:
| Name | Type | Description |
|---|---|---|
| id | string | Required. The policy id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters. |
| name | string | Output only. The "resource name" of the policy, as described in https://cloud.google.com/apis/design/resource_names. |
| basicAlgorithm | Object | This object should have the same structure as BasicAutoscalingAlgorithm |
| workerConfig | Object | Required. Describes how the autoscaler will operate for primary workers. This object should have the same structure as InstanceGroupAutoscalingPolicyConfig |
| secondaryWorkerConfig | Object | Optional. Describes how the autoscaler will operate for secondary workers. This object should have the same structure as InstanceGroupAutoscalingPolicyConfig |
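As a sketch of how these pieces fit together, the following builds a policy with a BasicAutoscalingAlgorithm and a primary-worker InstanceGroupAutoscalingPolicyConfig, then creates it with the AutoscalingPolicyServiceClient. This assumes the v1beta2 surface of @google-cloud/dataproc; all names and sizes are illustrative:

```js
// Sketch: create an autoscaling policy. Field names mirror the structures above;
// the parent string and policy id are placeholders.
const dataproc = require('@google-cloud/dataproc');

const policyClient = new dataproc.v1beta2.AutoscalingPolicyServiceClient();

async function createPolicy() {
  const [policy] = await policyClient.createAutoscalingPolicy({
    parent: 'projects/my-project/regions/us-central1',
    policy: {
      id: 'my-policy',
      basicAlgorithm: {
        yarnConfig: {
          gracefulDecommissionTimeout: {seconds: 3600}, // Duration object
          scaleUpFactor: 0.5,
          scaleDownFactor: 1.0,
        },
        cooldownPeriod: {seconds: 120}, // the 2m default
      },
      workerConfig: {minInstances: 2, maxInstances: 10, weight: 1},
    },
  });
  console.log(policy.name);
}
```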
BasicAutoscalingAlgorithm
Basic algorithm for autoscaling.
Properties:
| Name | Type | Description |
|---|---|---|
| yarnConfig | Object | Required. YARN autoscaling configuration. This object should have the same structure as BasicYarnAutoscalingConfig |
| cooldownPeriod | Object | Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: [2m, 1d]. Default: 2m. This object should have the same structure as Duration |
BasicYarnAutoscalingConfig
Basic autoscaling configurations for YARN.
Properties:
| Name | Type | Description |
|---|---|---|
| gracefulDecommissionTimeout | Object | Required. Timeout for YARN graceful decommissioning of Node Managers. Specifies the duration to wait for jobs to complete before forcefully removing workers (and potentially interrupting jobs). Only applicable to downscaling operations. Bounds: [0s, 1d]. This object should have the same structure as Duration |
| scaleUpFactor | number | Required. Fraction of average pending memory in the last cooldown period for which to add workers. A scale-up factor of 1.0 will result in scaling up so that there is no pending memory remaining after the update (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: [0.0, 1.0]. |
| scaleDownFactor | number | Required. Fraction of average pending memory in the last cooldown period for which to remove workers. A scale-down factor of 1 will result in scaling down so that there is no available memory remaining after the update (more aggressive scaling). A scale-down factor of 0 disables removing workers, which can be beneficial for autoscaling a single job. Bounds: [0.0, 1.0]. |
| scaleUpMinWorkerFraction | number | Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0. |
| scaleDownMinWorkerFraction | number | Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0. |
CancelJobRequest
A request to cancel a job.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the job belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| jobId | string | Required. The job ID. |
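A minimal sketch of issuing this request with the JobControllerClient (all IDs are placeholders):

```js
// Sketch: cancel a running job by ID.
const dataproc = require('@google-cloud/dataproc');

const jobClient = new dataproc.v1.JobControllerClient();

async function cancelJob() {
  const [job] = await jobClient.cancelJob({
    projectId: 'my-project',
    region: 'us-central1',
    jobId: 'job-1234',
  });
  console.log(job.status.state); // e.g. CANCEL_PENDING
}
```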
Cluster
Describes the identifying information, config, and status of a cluster of Compute Engine instances.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The Google Cloud Platform project ID that the cluster belongs to. |
| clusterName | string | Required. The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused. |
| config | Object | Required. The cluster config. Note that Cloud Dataproc may set default values, and values may change when clusters are updated. This object should have the same structure as ClusterConfig |
| labels | Object.<string, string> | Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a cluster. |
| status | Object | Output only. Cluster status. This object should have the same structure as ClusterStatus |
| statusHistory | Array.<Object> | Output only. The previous cluster status. This object should have the same structure as ClusterStatus |
| clusterUuid | string | Output only. A cluster UUID (Unique Universal Identifier). Cloud Dataproc generates this value when it creates the cluster. |
| metrics | Object | Output only. Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release. This object should have the same structure as ClusterMetrics |
ClusterConfig
The cluster config.
Properties:
| Name | Type | Description |
|---|---|---|
| configBucket | string | Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket). |
| gceClusterConfig | Object | Optional. The shared Compute Engine config settings for all instances in a cluster. This object should have the same structure as GceClusterConfig |
| masterConfig | Object | Optional. The Compute Engine config settings for the master instance in a cluster. This object should have the same structure as InstanceGroupConfig |
| workerConfig | Object | Optional. The Compute Engine config settings for worker instances in a cluster. This object should have the same structure as InstanceGroupConfig |
| secondaryWorkerConfig | Object | Optional. The Compute Engine config settings for additional worker instances in a cluster. This object should have the same structure as InstanceGroupConfig |
| softwareConfig | Object | Optional. The config settings for software inside the cluster. This object should have the same structure as SoftwareConfig |
| lifecycleConfig | Object | Optional. The config setting for the auto-delete cluster schedule. This object should have the same structure as LifecycleConfig |
| initializationActions | Array.<Object> | Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node. This object should have the same structure as NodeInitializationAction |
| encryptionConfig | Object | Optional. Encryption settings for the cluster. This object should have the same structure as EncryptionConfig |
| autoscalingConfig | Object | Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset. This object should have the same structure as AutoscalingConfig |
| endpointConfig | Object | Optional. Port/endpoint configuration for this cluster. This object should have the same structure as EndpointConfig |
| securityConfig | Object | Optional. Security related configuration. This object should have the same structure as SecurityConfig |
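A sketch of a CreateClusterRequest carrying a minimal ClusterConfig follows; the zone, machine types, and image version are illustrative, and createCluster returns a long-running operation:

```js
// Sketch: create a cluster with a minimal ClusterConfig.
const dataproc = require('@google-cloud/dataproc');

const clusterClient = new dataproc.v1.ClusterControllerClient();

async function createCluster() {
  const [operation] = await clusterClient.createCluster({
    projectId: 'my-project',
    region: 'us-central1',
    cluster: {
      clusterName: 'my-cluster',
      config: {
        gceClusterConfig: {zoneUri: 'us-central1-f'},
        masterConfig: {numInstances: 1, machineTypeUri: 'n1-standard-2'},
        workerConfig: {numInstances: 2, machineTypeUri: 'n1-standard-2'},
        softwareConfig: {imageVersion: '1.4'},
      },
    },
  });
  const [cluster] = await operation.promise(); // resolves once the cluster is RUNNING
  console.log(cluster.clusterUuid);
}
```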
ClusterMetrics
Contains cluster daemon metrics, such as HDFS and YARN stats.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Properties:
| Name | Type | Description |
|---|---|---|
| hdfsMetrics | Object.<string, number> | The HDFS metrics. |
| yarnMetrics | Object.<string, number> | The YARN metrics. |
ClusterOperation
The cluster operation triggered by a workflow.
Properties:
| Name | Type | Description |
|---|---|---|
| operationId | string | Output only. The id of the cluster operation. |
| error | string | Output only. Error, if operation failed. |
| done | boolean | Output only. Indicates the operation is done. |
ClusterSelector
A selector that chooses target cluster for jobs based on metadata.
Properties:
| Name | Type | Description |
|---|---|---|
| zone | string | Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used. |
| clusterLabels | Object.<string, string> | Required. The cluster labels. The cluster must have all labels to match. |
ClusterStatus
The status of a cluster and its instances.
Properties:
| Name | Type | Description |
|---|---|---|
| state | number | Output only. The cluster's state. The number should be among the values of State |
| detail | string | Output only. Optional details of cluster's state. |
| stateStartTime | Object | Output only. Time when this state was entered. This object should have the same structure as Timestamp |
| substate | number | Output only. Additional state information that includes status reported by the agent. The number should be among the values of Substate |
CreateAutoscalingPolicyRequest
A request to create an autoscaling policy.
Properties:
| Name | Type | Description |
|---|---|---|
| parent | string | Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names. |
| policy | Object | The autoscaling policy to create. This object should have the same structure as AutoscalingPolicy |
CreateClusterRequest
A request to create a cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| cluster | Object | Required. The cluster to create. This object should have the same structure as Cluster |
| requestId | string | Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
CreateWorkflowTemplateRequest
A request to create a workflow template.
Properties:
| Name | Type | Description |
|---|---|---|
| parent | string | Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names. |
| template | Object | Required. The Dataproc workflow template to create. This object should have the same structure as WorkflowTemplate |
DeleteAutoscalingPolicyRequest
A request to delete an autoscaling policy.
Autoscaling policies in use by one or more clusters will not be deleted.
Properties:
| Name | Type | Description |
|---|---|---|
| name | string | Required. The "resource name" of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names. |
DeleteClusterRequest
A request to delete a cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| clusterName | string | Required. The cluster name. |
| clusterUuid | string | Optional. Specifying the clusterUuid means the RPC should fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist. |
| requestId | string | Optional. A unique id used to identify the request. If the server receives two DeleteClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
DeleteJobRequest
A request to delete a job.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the job belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| jobId | string | Required. The job ID. |
DeleteWorkflowTemplateRequest
A request to delete a workflow template.
Currently started workflows will remain running.
Properties:
| Name | Type | Description |
|---|---|---|
| name | string | Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. |
| version | number | Optional. The version of the workflow template to delete. If specified, will only delete the template if the current server version matches the specified version. |
DiagnoseClusterRequest
A request to collect cluster diagnostic information.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| clusterName | string | Required. The cluster name. |
DiagnoseClusterResults
The location of diagnostic output.
Properties:
| Name | Type | Description |
|---|---|---|
| outputUri | string | Output only. The Cloud Storage URI of the diagnostic output. The output report is a plain text file with a summary of collected diagnostics. |
DiskConfig
Specifies the config of disk options for a group of VM instances.
Properties:
| Name | Type | Description |
|---|---|---|
| bootDiskType | string | Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive). |
| bootDiskSizeGb | number | Optional. Size in GB of the boot disk (default is 500GB). |
| numLocalSsds | number | Optional. Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. |
EncryptionConfig
Encryption settings for the cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| gcePdKmsKeyName | string | Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster. |
EndpointConfig
Endpoint config for this cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| httpPorts | Object.<string, string> | Output only. The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true. |
| enableHttpPortAccess | boolean | Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false. |
GceClusterConfig
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| zoneUri | string | Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. |
| networkUri | string | Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. A full URL, partial URI, or short name are valid. |
| subnetworkUri | string | Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. |
| internalIpOnly | boolean | Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses. |
| serviceAccount | string | Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the roles/logging.logWriter and roles/storage.objectAdmin IAM roles (see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com |
| serviceAccountScopes | Array.<string> | Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, and https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, and https://www.googleapis.com/auth/devstorage.full_control. |
| tags | Array.<string> | The Compute Engine tags to add to all instances (see Tagging instances). |
| metadata | Object.<string, string> | The Compute Engine metadata entries to add to all instances (see Project and instance metadata). |
| reservationAffinity | Object | Optional. Reservation Affinity for consuming Zonal reservation. This object should have the same structure as ReservationAffinity |
GetAutoscalingPolicyRequest
A request to fetch an autoscaling policy.
Properties:
| Name | Type | Description |
|---|---|---|
| name | string | Required. The "resource name" of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names. |
GetClusterRequest
Request to get the resource representation for a cluster in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| clusterName | string | Required. The cluster name. |
GetJobRequest
A request to get the resource representation for a job in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the job belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| jobId | string | Required. The job ID. |
GetWorkflowTemplateRequest
A request to fetch a workflow template.
Properties:
| Name | Type | Description |
|---|---|---|
| name | string | Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. |
| version | number | Optional. The version of the workflow template to retrieve. Only previously instantiated versions can be retrieved. If unspecified, retrieves the current version. |
HadoopJob
A Cloud Dataproc job for running Apache Hadoop MapReduce jobs on Apache Hadoop YARN.
Properties:
| Name | Type | Description |
|---|---|---|
| mainJarFileUri | string | The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' |
| mainClass | string | The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jarFileUris. |
| args | Array.<string> | Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| jarFileUris | Array.<string> | Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. |
| fileUris | Array.<string> | Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. |
| archiveUris | Array.<string> | Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. |
| properties | Object.<string, string> | Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. |
| loggingConfig | Object | Optional. The runtime log config for job execution. This object should have the same structure as LoggingConfig |
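For example, a HadoopJob is submitted as one branch of the Job payload in a SubmitJobRequest. A sketch, with placeholder URIs:

```js
// Sketch: submit a HadoopJob to an existing cluster.
const dataproc = require('@google-cloud/dataproc');

const jobClient = new dataproc.v1.JobControllerClient();

async function submitHadoopJob() {
  const [job] = await jobClient.submitJob({
    projectId: 'my-project',
    region: 'us-central1',
    job: {
      placement: {clusterName: 'my-cluster'},
      hadoopJob: {
        mainJarFileUri: 'gs://my-bucket/wordcount.jar', // hypothetical jar
        args: ['gs://my-bucket/input/', 'gs://my-bucket/output/'],
      },
    },
  });
  console.log(job.reference.jobId);
}
```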
HiveJob
A Cloud Dataproc job for running Apache Hive queries on YARN.
Properties:
| Name | Type | Description |
|---|---|---|
| queryFileUri | string | The HCFS URI of the script that contains Hive queries. |
| queryList | Object | A list of queries. This object should have the same structure as QueryList |
| continueOnFailure | boolean | Optional. Whether to continue executing queries if a query fails. The default value is false. |
| scriptVariables | Object.<string, string> | Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";). |
| properties | Object.<string, string> | Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code. |
| jarFileUris | Array.<string> | Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. |
InstanceGroupAutoscalingPolicyConfig
Configuration for the size bounds of an instance group, including its proportional size to other groups.
Properties:
| Name | Type | Description |
|---|---|---|
| minInstances | number | Optional. Minimum number of instances for this group. Primary workers - Bounds: [2, max_instances]. Default: 2. Secondary workers - Bounds: [0, max_instances]. Default: 0. |
| maxInstances | number | Optional. Maximum number of instances for this group. Required for primary workers. Note that by default, clusters will not use secondary workers. Required for secondary workers if the minimum secondary instances is set. Primary workers - Bounds: [min_instances, ). Required. Secondary workers - Bounds: [min_instances, ). Default: 0. |
| weight | number | Optional. Weight for the instance group, which is used to determine the fraction of total workers in the cluster from this instance group. For example, if primary workers have weight 2, and secondary workers have weight 1, the cluster will have approximately 2 primary workers for each secondary worker. The cluster may not reach the specified balance if constrained by min/max bounds or other autoscaling settings; for example, if max_instances for secondary workers is 0, then only primary workers will be added, and the cluster can also be out of balance when created. If weight is not set on any instance group, the cluster will default to equal weight for all groups: the cluster will attempt to maintain an equal number of workers in each group within the configured size bounds for each group. If weight is set for one group only, the cluster will default to zero weight on the unset group. For example, if weight is set only on primary workers, the cluster will use primary workers only and no secondary workers. |
InstanceGroupConfig
Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group.
Properties:
| Name | Type | Description |
|---|---|---|
| numInstances | number | Optional. The number of VM instances in the instance group. For master instance groups, must be set to 1. |
| instanceNames | Array.<string> | Output only. The list of instance names. Cloud Dataproc derives the names from clusterName, numInstances, and the instance group. |
| imageUri | string | Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.imageVersion. |
| machineTypeUri | string | Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2. |
| diskConfig | Object | Optional. Disk option config settings. This object should have the same structure as DiskConfig |
| isPreemptible | boolean | Optional. Specifies that this instance group contains preemptible instances. |
| managedGroupConfig | Object | Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups. This object should have the same structure as ManagedGroupConfig |
| accelerators | Array.<Object> | Optional. The Compute Engine accelerator configuration for these instances. Beta Feature: This feature is still under development. It may be changed before final release. This object should have the same structure as AcceleratorConfig |
| minCpuPlatform | string | Optional. Specifies the minimum CPU platform for the instance group. See [Cloud Dataproc Minimum CPU Platform](/dataproc/docs/concepts/compute/dataproc-min-cpu). |
InstantiateInlineWorkflowTemplateRequest
A request to instantiate an inline workflow template.
Properties:
| Name | Type | Description |
|---|---|---|
| parent | string | Required. The "resource name" of the workflow template region, as described in https://cloud.google.com/apis/design/resource_names. |
| template | Object | Required. The workflow template to instantiate. This object should have the same structure as WorkflowTemplate |
| instanceId | string | Deprecated. Please use requestId instead. |
| requestId | string | Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
InstantiateWorkflowTemplateRequest
A request to instantiate a workflow template.
Properties:
| Name | Type | Description |
|---|---|---|
| name | string | Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. |
| version | number | Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version. This option cannot be used to instantiate a previous version of workflow template. |
| instanceId | string | Deprecated. Please use requestId instead. |
| requestId | string | Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
| parameters | Object.<string, string> | Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 100 characters. |
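A sketch of instantiating a template by resource name; the requestId guards against duplicate workflows on retry, as described above. This assumes crypto.randomUUID, available in Node.js 14.17+:

```js
// Sketch: instantiate a workflow template and wait for the workflow to finish.
const dataproc = require('@google-cloud/dataproc');
const crypto = require('crypto');

const wfClient = new dataproc.v1.WorkflowTemplateServiceClient();

async function runWorkflow() {
  const [operation] = await wfClient.instantiateWorkflowTemplate({
    name: 'projects/my-project/regions/us-central1/workflowTemplates/my-template',
    requestId: crypto.randomUUID(), // recommended: always a UUID
  });
  await operation.promise(); // resolves when the workflow completes
}
```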
Job
A Cloud Dataproc job resource.
Properties:
| Name | Type | Description |
|---|---|---|
| reference | Object | Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a jobId. This object should have the same structure as JobReference |
| placement | Object | Required. Job information, including how, when, and where to run the job. This object should have the same structure as JobPlacement |
| hadoopJob | Object | Job is a Hadoop job. This object should have the same structure as HadoopJob |
| sparkJob | Object | Job is a Spark job. This object should have the same structure as SparkJob |
| pysparkJob | Object | Job is a Pyspark job. This object should have the same structure as PySparkJob |
| hiveJob | Object | Job is a Hive job. This object should have the same structure as HiveJob |
| pigJob | Object | Job is a Pig job. This object should have the same structure as PigJob |
| sparkRJob | Object | Job is a SparkR job. This object should have the same structure as SparkRJob |
| sparkSqlJob | Object | Job is a SparkSql job. This object should have the same structure as SparkSqlJob |
| status | Object | Output only. The job status. Additional application-specific status information may be contained in the yarnApplications field. This object should have the same structure as JobStatus |
| statusHistory | Array.<Object> | Output only. The previous job status. This object should have the same structure as JobStatus |
| yarnApplications | Array.<Object> | Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. This object should have the same structure as YarnApplication |
| submittedBy | string | Output only. The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname. |
| driverOutputResourceUri | string | Output only. A URI pointing to the location of the stdout of the job's driver program. |
| driverControlFilesUri | string | Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driverOutputResourceUri. |
| labels | Object.<string, string> | Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job. |
| scheduling | Object | Optional. Job scheduling configuration. This object should have the same structure as JobScheduling |
| jobUuid | string | Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.jobId that may be reused over time. |
JobPlacement
Cloud Dataproc job config.
Properties:
| Name | Type | Description |
|---|---|---|
| clusterName | string | Required. The name of the cluster where the job will be submitted. |
| clusterUuid | string | Output only. A cluster UUID generated by the Cloud Dataproc service when the job is submitted. |
JobReference
Encapsulates the full scoping used to reference a job.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the job belongs to. |
| jobId | string | Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server. |
JobScheduling
Job scheduling options.
Properties:
| Name | Type | Description |
|---|---|---|
| maxFailuresPerHour | number | Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. |
JobStatus
Cloud Dataproc job status.
Properties:
| Name | Type | Description |
|---|---|---|
| state | number | Output only. A state message specifying the overall job state. The number should be among the values of State |
| details | string | Output only. Optional job state details, such as an error description if the state is ERROR. |
| stateStartTime | Object | Output only. The time when this state was entered. This object should have the same structure as Timestamp |
| substate | number | Output only. Additional state information, which includes status reported by the agent. The number should be among the values of Substate |
KerberosConfig
Specifies Kerberos related configuration.
Properties:
| Name | Type | Description |
|---|---|---|
| enableKerberos | boolean | Optional. Flag to indicate whether to Kerberize the cluster. |
| rootPrincipalPasswordUri | string | Required. The Cloud Storage URI of a KMS encrypted file containing the root principal password. |
| kmsKeyUri | string | Required. The URI of the KMS key used to encrypt various sensitive files. |
| keystoreUri | string | Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate. |
| truststoreUri | string | Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate. |
| keystorePasswordUri | string | Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc. |
| keyPasswordUri | string | Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc. |
| truststorePasswordUri | string | Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc. |
| crossRealmTrustRealm | string | Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust. |
| crossRealmTrustKdc | string | Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship. |
| crossRealmTrustAdminServer | string | Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship. |
| crossRealmTrustSharedPasswordUri | string | Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship. |
| kdcDbKeyUri | string | Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database. |
| tgtLifetimeHours | number | Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, then the default value 10 will be used. |
LifecycleConfig
Specifies the cluster auto-delete schedule configuration.
Properties:
| Name | Type | Description |
|---|---|---|
| idleDeleteTtl | Object | Optional. The duration to keep the cluster alive while idling. Passing this threshold will cause the cluster to be deleted. Valid range: [10m, 14d]. Example: "10m", the minimum value, to delete the cluster when it has had no jobs running for 10 minutes. This object should have the same structure as Duration |
| autoDeleteTime | Object | Optional. The time when the cluster will be auto-deleted. This object should have the same structure as Timestamp |
| autoDeleteTtl | Object | Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Valid range: [10m, 14d]. Example: "1d", to delete the cluster 1 day after its creation. This object should have the same structure as Duration |
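The Duration and Timestamp objects above are plain {seconds, nanos} structures. A sketch of a LifecycleConfig (values illustrative; autoDeleteTime and autoDeleteTtl are alternative ways to bound cluster lifetime):

```js
// Sketch: auto-delete scheduling for a cluster.
const lifecycleConfig = {
  idleDeleteTtl: {seconds: 600},          // "10m": delete after 10 idle minutes
  autoDeleteTtl: {seconds: 24 * 60 * 60}, // "1d": delete one day after creation
  // Or pin an absolute deletion time instead of autoDeleteTtl:
  // autoDeleteTime: {seconds: Math.floor(Date.now() / 1000) + 86400},
};
```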
ListAutoscalingPoliciesRequest
A request to list autoscaling policies in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| parent | string | Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names. |
| pageSize | number | Optional. The maximum number of results to return in each response. |
| pageToken | string | Optional. The page token, returned by a previous call, to request the next page of results. |
ListAutoscalingPoliciesResponse
A response to a request to list autoscaling policies in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| policies | Array.<Object> | Output only. Autoscaling policies list. This object should have the same structure as AutoscalingPolicy |
| nextPageToken | string | Output only. This token is included in the response if there are more results to fetch. |
ListClustersRequest
A request to list the clusters in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| filter | string | Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax: field = value [AND [field = value]] ... where field is one of status.state, clusterName, or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be one of the following: ACTIVE, INACTIVE, CREATING, RUNNING, ERROR, DELETING, or UPDATING. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator. Example filter: status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = * |
| pageSize | number | Optional. The standard List page size. |
| pageToken | string | Optional. The standard List page token. |
ListClustersResponse
The list of all clusters in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| clusters | Array.<Object> | Output only. The clusters in the project. This object should have the same structure as Cluster |
| nextPageToken | string | Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the pageToken in a subsequent ListClustersRequest. |
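A sketch of listing clusters with a label filter; the client library consumes nextPageToken internally when auto-pagination is left enabled:

```js
// Sketch: list ACTIVE clusters carrying a particular label.
const dataproc = require('@google-cloud/dataproc');

const clusterClient = new dataproc.v1.ClusterControllerClient();

async function listStagingClusters() {
  const [clusters] = await clusterClient.listClusters({
    projectId: 'my-project',
    region: 'us-central1',
    filter: 'status.state = ACTIVE AND labels.env = staging',
  });
  clusters.forEach(c => console.log(c.clusterName, c.status.state));
}
```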
ListJobsRequest
A request to list jobs in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| projectId | string | Required. The ID of the Google Cloud Platform project that the job belongs to. |
| region | string | Required. The Cloud Dataproc region in which to handle the request. |
| pageSize | number | Optional. The number of results to return in each response. |
| pageToken | string | Optional. The page token, returned by a previous call, to request the next page of results. |
| clusterName | string | Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster. |
| jobStateMatcher | number | Optional. Specifies enumerated categories of jobs to list (default = match ALL jobs). If filter is provided, jobStateMatcher will be ignored. The number should be among the values of JobStateMatcher |
| filter | string | Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax: [field = value] AND [field [= value]] ... where field is status.state or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be either ACTIVE or NON_ACTIVE. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator. Example filter: status.state = ACTIVE AND labels.env = staging AND labels.starred = * |
ListJobsResponse
A list of jobs in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| jobs | Array.<Object> | Output only. Jobs list. This object should have the same structure as Job |
| nextPageToken | string | Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the pageToken in a subsequent ListJobsRequest. |
ListWorkflowTemplatesRequest
A request to list workflow templates in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| parent | string | Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names. |
| pageSize | number | Optional. The maximum number of results to return in each response. |
| pageToken | string | Optional. The page token, returned by a previous call, to request the next page of results. |
ListWorkflowTemplatesResponse
A response to a request to list workflow templates in a project.
Properties:
| Name | Type | Description |
|---|---|---|
| templates | Array.<Object> | Output only. WorkflowTemplates list. This object should have the same structure as WorkflowTemplate |
| nextPageToken | string | Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the pageToken in a subsequent ListWorkflowTemplatesRequest. |
LoggingConfig
The runtime logging config of the job.
Properties:
| Name | Type | Description |
|---|---|---|
| driverLogLevels | Object.<string, number> | The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
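A sketch of a LoggingConfig object; keys are package names (with "root" for the root logger) and values are Level enum names:

```js
// Sketch: per-package driver log levels.
const loggingConfig = {
  driverLogLevels: {
    root: 'INFO',
    'org.apache': 'DEBUG',
    'com.example': 'FATAL', // hypothetical user package
  },
};
```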
ManagedCluster
Cluster that is managed by the workflow.
Properties:
| Name | Type | Description |
|---|---|---|
| clusterName | string | Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters. |
| config | Object | Required. The cluster configuration. This object should have the same structure as ClusterConfig |
| labels | Object.<string, string> | Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62} Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63} No more than 32 labels can be associated with a given cluster. |
ManagedGroupConfig
Specifies the resources used to actively manage an instance group.
Properties:
| Name | Type | Description |
|---|---|---|
| instanceTemplateName | string | Output only. The name of the Instance Template used for the Managed Instance Group. |
| instanceGroupManagerName | string | Output only. The name of the Instance Group Manager for this group. |
NodeInitializationAction
Specifies an executable to run on a fully configured node and a timeout period for executable completion.
Properties:
| Name | Type | Description |
|---|---|---|
| executableFile | string | Required. Cloud Storage URI of the executable file. |
| executionTimeout | Object | Optional. Amount of time the executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period. This object should have the same structure as Duration |
OrderedJob
A job executed by the workflow.
Properties:
| Name | Type | Description |
|---|---|---|
| stepId | string | Required. The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds fields from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters. |
| hadoopJob | Object | Job is a Hadoop job. This object should have the same structure as HadoopJob |
| sparkJob | Object | Job is a Spark job. This object should have the same structure as SparkJob |
| pysparkJob | Object | Job is a Pyspark job. This object should have the same structure as PySparkJob |
| hiveJob | Object | Job is a Hive job. This object should have the same structure as HiveJob |
| pigJob | Object | Job is a Pig job. This object should have the same structure as PigJob |
| sparkSqlJob | Object | Job is a SparkSql job. This object should have the same structure as SparkSqlJob |
| labels | Object.<string, string> | Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62} Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63} No more than 32 labels can be associated with a given job. |
| scheduling | Object | Optional. Job scheduling configuration. This object should have the same structure as JobScheduling |
| prerequisiteStepIds | Array.<string> | Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow. |
ParameterValidation
Configuration for parameter validation.
Properties:
| Name | Type | Description |
|---|---|---|
| regex | Object | Validation based on regular expressions. This object should have the same structure as RegexValidation |
| values | Object | Validation based on a list of allowed values. This object should have the same structure as ValueValidation |
PigJob
A Cloud Dataproc job for running Apache Pig queries on YARN.
Properties:
| Name | Type | Description |
|---|---|---|
| queryFileUri | string | The HCFS URI of the script that contains the Pig queries. |
| queryList | Object | A list of queries. This object should have the same structure as QueryList |
| continueOnFailure | boolean | Optional. Whether to continue executing queries if a query fails. The default value is false. |
| scriptVariables | Object.<string, string> | Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]). |
| properties | Object.<string, string> | Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code. |
| jarFileUris | Array.<string> | Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. |
| loggingConfig | Object | Optional. The runtime log config for job execution. This object should have the same structure as LoggingConfig |
PySparkJob
A Cloud Dataproc job for running Apache PySpark applications on YARN.
Properties:
| Name | Type | Description |
|---|---|---|
| mainPythonFileUri | string | Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file. |
| args | Array.<string> | Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| pythonFileUris | Array.<string> | Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
| jarFileUris | Array.<string> | Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. |
| fileUris | Array.<string> | Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks. |
| archiveUris | Array.<string> | Optional. HCFS URIs of archives to be extracted in the working directory of Python drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
| properties | Object.<string, string> | Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| loggingConfig | Object | Optional. The runtime log config for job execution. This object should have the same structure as LoggingConfig |
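A sketch of a PySparkJob payload as it would appear inside a Job; all URIs and values are placeholders:

```js
// Sketch: a PySparkJob for use in a SubmitJobRequest.
const pysparkJob = {
  mainPythonFileUri: 'gs://my-bucket/jobs/analyze.py',
  pythonFileUris: ['gs://my-bucket/jobs/helpers.zip'],
  args: ['--date', '2019-01-01'],
  properties: {'spark.executor.memory': '4g'}, // merged with spark-defaults.conf
  loggingConfig: {driverLogLevels: {root: 'INFO'}},
};
```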
QueryList
A list of queries to run on a cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| queries | Array.<string> | Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob (see the sketch below). |
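The snippet referenced in the table, reconstructed here as the equivalent JavaScript object (the REST docs show the same shape in JSON):

```js
// A QueryList embedded in a HiveJob: three entries, the last packing two
// semicolon-separated queries into one string.
const hiveJob = {
  queryList: {
    queries: [
      'query1',
      'query2',
      'query3;query4',
    ],
  },
};
```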
RegexValidation
Validation based on regular expressions.
Properties:
| Name | Type | Description |
|---|---|---|
| regexes | Array.<string> | Required. RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient). |
ReservationAffinity
Reservation Affinity for consuming Zonal reservation.
Properties:
| Name | Type | Description |
|---|---|---|
| consumeReservationType | number | Optional. Type of reservation to consume. The number should be among the values of Type |
| key | string | Optional. Corresponds to the label key of the reservation resource. |
| values | Array.<string> | Optional. Corresponds to the label values of the reservation resource. |
SecurityConfig
Security related configuration, including encryption, Kerberos, etc.
Properties:
| Name | Type | Description |
|---|---|---|
| kerberosConfig | Object | Kerberos related configuration. This object should have the same structure as KerberosConfig |
SoftwareConfig
Specifies the selection and config of software inside the cluster.
Properties:
| Name | Type | Description |
|---|---|---|
| imageVersion | string | Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version. |
| properties | Object.<string, string> | Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties. |
| optionalComponents | Array.<number> | The set of optional components to activate on the cluster. The number should be among the values of Component |
SparkJob
A Cloud Dataproc job for running Apache Spark applications on YARN.
Properties:
| Name | Type | Description |
|---|---|---|
mainJarFileUri |
string |
The HCFS URI of the jar file that contains the main class. |
mainClass |
string |
The name of the driver's main class. The jar file that contains the class
must be in the default CLASSPATH or specified in jarFileUris. |
args |
Array.<string> |
Optional. The arguments to pass to the driver. Do not include arguments,
such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
jarFileUris |
Array.<string> |
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. |
fileUris |
Array.<string> |
Optional. HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks. |
archiveUris |
Array.<string> |
Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
properties |
Object.<string, string> |
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
loggingConfig |
Object |
Optional. The runtime log config for job execution. This object should have the same structure as LoggingConfig |
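As a sketch, the classic SparkPi example expressed as a SparkJob; the jar path assumes the stock examples jar shipped on Dataproc images:

```js
const sparkJob = {
  mainClass: 'org.apache.spark.examples.SparkPi',
  // The jar containing mainClass must be on the default CLASSPATH
  // or listed in jarFileUris.
  jarFileUris: ['file:///usr/lib/spark/examples/jars/spark-examples.jar'],
  args: ['1000'], // partition count for the Pi estimate
};
```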
SparkRJob
A Cloud Dataproc job for running Apache SparkR applications on YARN.
Properties:
| Name | Type | Description |
|---|---|---|
mainRFileUri |
string |
Required. The HCFS URI of the main R file to use as the driver. Must be a .R file. |
args |
Array.<string> |
Optional. The arguments to pass to the driver. Do not include arguments,
such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
fileUris |
Array.<string> |
Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks. |
archiveUris |
Array.<string> |
Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
properties |
Object.<string, string> |
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
loggingConfig |
Object |
Optional. The runtime log config for job execution. This object should have the same structure as LoggingConfig |
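A minimal sketch of a SparkRJob; the bucket and script names are placeholders:

```js
const sparkRJob = {
  mainRFileUri: 'gs://my-bucket/analysis.R', // must be a .R file
  args: ['gs://my-bucket/input.csv'],
};
```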
SparkSqlJob
A Cloud Dataproc job for running Apache Spark SQL queries.
Properties:
| Name | Type | Description |
|---|---|---|
queryFileUri |
string |
The HCFS URI of the script that contains SQL queries. |
queryList |
Object |
A list of queries. This object should have the same structure as QueryList |
scriptVariables |
Object.<string, string> |
Optional. Mapping of query variable names to values (equivalent to the
Spark SQL command: SET name="value";). |
properties |
Object.<string, string> |
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. |
jarFileUris |
Array.<string> |
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH. |
loggingConfig |
Object |
Optional. The runtime log config for job execution. This object should have the same structure as LoggingConfig |
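A minimal sketch of a SparkSqlJob using an inline QueryList and a script variable; the query and variable are illustrative:

```js
const sparkSqlJob = {
  queryList: {queries: ['SHOW DATABASES;']},
  // Equivalent to the Spark SQL command: SET env="prod";
  scriptVariables: {env: 'prod'},
};
```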
SubmitJobRequest
A request to submit a job.
Properties:
| Name | Type | Description |
|---|---|---|
projectId |
string |
Required. The ID of the Google Cloud Platform project that the job belongs to. |
region |
string |
Required. The Cloud Dataproc region in which to handle the request. |
job |
Object |
Required. The job resource. This object should have the same structure as Job |
requestId |
string |
Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest requests with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
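A minimal end-to-end sketch using the library's JobControllerClient; the project, region, and cluster names are placeholders, and the uuid package is an assumed dependency used only to populate requestId:

```js
const dataproc = require('@google-cloud/dataproc');
const {v4: uuidv4} = require('uuid'); // assumed dependency for requestId

async function submit() {
  const client = new dataproc.v1.JobControllerClient();
  const [job] = await client.submitJob({
    projectId: 'my-project', // placeholder
    region: 'us-central1',   // placeholder
    requestId: uuidv4(),     // makes blind retries idempotent
    job: {
      placement: {clusterName: 'my-cluster'}, // placeholder
      sparkJob: {
        mainClass: 'org.apache.spark.examples.SparkPi',
        jarFileUris: ['file:///usr/lib/spark/examples/jars/spark-examples.jar'],
      },
    },
  });
  console.log(`Submitted job ${job.reference.jobId}`);
}
```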
TemplateParameter
A configurable parameter that replaces one or more fields in the template. Parameterizable fields:
- Labels
- File uris
- Job properties
- Job arguments
- Script variables
- Main class (in HadoopJob and SparkJob)
- Zone (in ClusterSelector)
Properties:
| Name | Type | Description |
|---|---|---|
name |
string |
Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters. |
fields |
Array.<string> |
Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask.
For example, a field path that references the zone field of a workflow
template's cluster selector would be specified as placement.clusterSelector.zone.
Also, field paths can reference fields using the following syntax: values in maps can be referenced by key (for example, labels['key'] or jobs['step-id'].hiveJob.scriptVariables['key']); jobs in the jobs list can be referenced by step-id (for example, jobs['step-id'].hadoopJob.mainJarFileUri); and items in repeated fields can be referenced by a zero-based index (for example, jobs['step-id'].sparkJob.args[0]).
It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels and jobs['step-id'].sparkJob.args.
|
description |
string |
Optional. Brief description of the parameter. Must not exceed 1024 characters. |
validation |
Object |
Optional. Validation rules to be applied to this parameter's value. This object should have the same structure as ParameterValidation |
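Putting the pieces together, a hedged sketch of a parameter that substitutes the cluster selector's zone and validates the value with a made-up regex:

```js
const parameter = {
  name: 'ZONE', // capital letters, digits, and underscores only
  fields: ['placement.clusterSelector.zone'], // the field path the value replaces
  description: 'Zone of the target cluster.',
  validation: {
    regexValidation: {regexes: ['[a-z0-9-]+']}, // illustrative pattern
  },
};
```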
UpdateAutoscalingPolicyRequest
A request to update an autoscaling policy.
Properties:
| Name | Type | Description |
|---|---|---|
policy |
Object |
Required. The updated autoscaling policy. This object should have the same structure as AutoscalingPolicy |
UpdateClusterRequest
A request to update a cluster.
Properties:
| Name | Type | Description |
|---|---|---|
projectId |
string |
Required. The ID of the Google Cloud Platform project the cluster belongs to. |
region |
string |
Required. The Cloud Dataproc region in which to handle the request. |
clusterName |
string |
Required. The cluster name. |
cluster |
Object |
Required. The changes to the cluster. This object should have the same structure as Cluster |
gracefulDecommissionTimeout |
Object |
Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day. Only supported on Dataproc image versions 1.2 and higher. This object should have the same structure as Duration |
updateMask |
Object |
Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the updateMask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value.
Similarly, to change the number of preemptible workers in a cluster to 5,
the updateMask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would specify the new value.
Note: currently only a limited set of fields can be updated, including labels (update labels), config.worker_config.num_instances (resize primary worker group), and config.secondary_worker_config.num_instances (resize secondary worker group). A worked sketch follows this table.
This object should have the same structure as FieldMask |
requestId |
string |
Optional. A unique id used to identify the request. If the server receives two UpdateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
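A sketch of the worker-resize example described under updateMask, using the library's ClusterControllerClient; project, region, and cluster names are placeholders:

```js
const dataproc = require('@google-cloud/dataproc');

async function resizeWorkers() {
  const client = new dataproc.v1.ClusterControllerClient();
  const [operation] = await client.updateCluster({
    projectId: 'my-project',   // placeholder
    region: 'us-central1',     // placeholder
    clusterName: 'my-cluster', // placeholder
    // Only the masked path is read from `cluster`.
    updateMask: {paths: ['config.worker_config.num_instances']},
    cluster: {config: {workerConfig: {numInstances: 5}}},
  });
  const [cluster] = await operation.promise(); // wait for the long-running operation
  console.log(`Cluster now has ${cluster.config.workerConfig.numInstances} workers`);
}
```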
UpdateJobRequest
A request to update a job.
Properties:
| Name | Type | Description |
|---|---|---|
projectId |
string |
Required. The ID of the Google Cloud Platform project that the job belongs to. |
region |
string |
Required. The Cloud Dataproc region in which to handle the request. |
jobId |
string |
Required. The job ID. |
job |
Object |
Required. The changes to the job. This object should have the same structure as Job |
updateMask |
Object |
Required. Specifies the path, relative to Job, of the field to update. For example, to update the labels of a Job, the updateMask parameter would be specified as labels, and the PATCH request body would specify the new value (see the sketch below). Note: currently, labels is the only field that can be updated. This object should have the same structure as FieldMask |
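Since labels is currently the only mutable field, a label-only update is the representative case; identifiers below are placeholders, and `client` is assumed to be a JobControllerClient inside an async function:

```js
const request = {
  projectId: 'my-project', // placeholder
  region: 'us-central1',   // placeholder
  jobId: 'job-1234',       // placeholder
  job: {labels: {team: 'data-eng'}}, // the new label value
  updateMask: {paths: ['labels']},   // labels is the only updatable field
};
const [updatedJob] = await client.updateJob(request);
```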
UpdateWorkflowTemplateRequest
A request to update a workflow template.
Properties:
| Name | Type | Description |
|---|---|---|
template |
Object |
Required. The updated workflow template. The template.version field must match the current version. This object should have the same structure as WorkflowTemplate |
ValueValidation
Validation based on a list of allowed values.
Properties:
| Name | Type | Description |
|---|---|---|
values |
Array.<string> |
Required. List of allowed values for the parameter. |
WorkflowGraph
The workflow graph.
Properties:
| Name | Type | Description |
|---|---|---|
nodes |
Array.<Object> |
Output only. The workflow nodes. This object should have the same structure as WorkflowNode |
WorkflowMetadata
A Cloud Dataproc workflow template resource.
Properties:
| Name | Type | Description |
|---|---|---|
template |
string |
Output only. The "resource name" of the template. |
version |
number |
Output only. The version of template at the time of workflow instantiation. |
createCluster |
Object |
Output only. The create cluster operation metadata. This object should have the same structure as ClusterOperation |
graph |
Object |
Output only. The workflow graph. This object should have the same structure as WorkflowGraph |
deleteCluster |
Object |
Output only. The delete cluster operation metadata. This object should have the same structure as ClusterOperation |
state |
number |
Output only. The workflow state. The number should be among the values of State |
clusterName |
string |
Output only. The name of the target cluster. |
parameters |
Object.<string, string> |
Map from parameter names to values that were used for those parameters. |
startTime |
Object |
Output only. Workflow start time. This object should have the same structure as Timestamp |
endTime |
Object |
Output only. Workflow end time. This object should have the same structure as Timestamp |
clusterUuid |
string |
Output only. The UUID of target cluster. |
WorkflowNode
The workflow node.
Properties:
| Name | Type | Description |
|---|---|---|
stepId |
string |
Output only. The name of the node. |
prerequisiteStepIds |
Array.<string> |
Output only. Node's prerequisite nodes. |
jobId |
string |
Output only. The job id; populated after the node enters RUNNING state. |
state |
number |
Output only. The node state. The number should be among the values of NodeState |
error |
string |
Output only. The error detail. |
WorkflowTemplate
A Cloud Dataproc workflow template resource.
Properties:
| Name | Type | Description |
|---|---|---|
id |
string |
Required. The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters. |
name |
string |
Output only. The "resource name" of the template, as described
in https://cloud.google.com/apis/design/resource_names of the form
|
version |
number |
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request; it is required for an UpdateWorkflowTemplate request, and must match the current server version. |
createTime |
Object |
Output only. The time template was created. This object should have the same structure as Timestamp |
updateTime |
Object |
Output only. The time template was last updated. This object should have the same structure as Timestamp |
labels |
Object.<string, string> |
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a template. |
placement |
Object |
Required. WorkflowTemplate scheduling information. This object should have the same structure as WorkflowTemplatePlacement |
jobs |
Array.<Object> |
Required. The Directed Acyclic Graph of Jobs to submit. This object should have the same structure as OrderedJob |
parameters |
Array.<Object> |
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated. This object should have the same structure as TemplateParameter |
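A compact sketch tying the required fields together; all names, URIs, and queries are placeholders:

```js
const template = {
  id: 'nightly-report',
  placement: {
    managedCluster: {
      clusterName: 'report-cluster',
      config: {}, // cluster config omitted for brevity
    },
  },
  jobs: [
    {
      stepId: 'compute',
      sparkJob: {
        mainClass: 'com.example.Report',
        jarFileUris: ['gs://my-bucket/report.jar'],
      },
    },
    {
      stepId: 'publish',
      prerequisiteStepIds: ['compute'], // runs only after 'compute' succeeds
      hiveJob: {queryList: {queries: ['SELECT 1;']}},
    },
  ],
};
```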
WorkflowTemplatePlacement
Specifies workflow execution target.
Either managed_cluster or cluster_selector is required.
Properties:
| Name | Type | Description |
|---|---|---|
managedCluster |
Object |
Optional. A cluster that is managed by the workflow. This object should have the same structure as ManagedCluster |
clusterSelector |
Object |
Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted. This object should have the same structure as ClusterSelector |
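For contrast with a managed cluster, a hedged sketch of a clusterSelector that routes each job to an existing cluster carrying a given label; the label is illustrative:

```js
const placement = {
  clusterSelector: {
    // Evaluated when each job is submitted.
    clusterLabels: {env: 'staging'},
  },
};
```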
YarnApplication
A YARN application created by a job. Application information is a subset of
org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Properties:
| Name | Type | Description |
|---|---|---|
name |
string |
Required. The application name. |
state |
number |
Required. The application state. The number should be among the values of State |
progress |
number |
Required. The numerical progress of the application, from 1 to 100. |
trackingUrl |
string |
Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access. |