v1

google.cloud.dataproc.v1

Source:

Members

(static) JobStateMatcher :number

A matcher that specifies categories of job states.

Properties:
Name Type Description
ALL number

Match all jobs, regardless of state.

ACTIVE number

Only match jobs in non-terminal states: PENDING, RUNNING, or CANCEL_PENDING.

NON_ACTIVE number

Only match jobs in terminal states: CANCELLED, DONE, or ERROR.

Source:

(static) Level :number

The Log4j level for job execution. When running an Apache Hive job, Cloud Dataproc configures the Hive client to an equivalent verbosity level.

Properties:
Name Type Description
LEVEL_UNSPECIFIED number

Level is unspecified. Use default level for log4j.

ALL number

Use ALL level for log4j.

TRACE number

Use TRACE level for log4j.

DEBUG number

Use DEBUG level for log4j.

INFO number

Use INFO level for log4j.

WARN number

Use WARN level for log4j.

ERROR number

Use ERROR level for log4j.

FATAL number

Use FATAL level for log4j.

OFF number

Turn off log4j.

Source:

(static) NodeState :number

The workflow node state.

Properties:
Name Type Description
NODE_STATE_UNSPECIFIED number

State is unspecified.

BLOCKED number

The node is awaiting a prerequisite node to finish.

RUNNABLE number

The node is runnable but not running.

RUNNING number

The node is running.

COMPLETED number

The node completed successfully.

FAILED number

The node failed. A node can be marked FAILED because its ancestor or peer failed.

Source:

(static) State :number

The application state, corresponding to YarnProtos.YarnApplicationStateProto.

Properties:
Name Type Description
STATE_UNSPECIFIED number

Status is unspecified.

NEW number

Status is NEW.

NEW_SAVING number

Status is NEW_SAVING.

SUBMITTED number

Status is SUBMITTED.

ACCEPTED number

Status is ACCEPTED.

RUNNING number

Status is RUNNING.

FINISHED number

Status is FINISHED.

FAILED number

Status is FAILED.

KILLED number

Status is KILLED.

Source:

(static) State :number

The job state.

Properties:
Name Type Description
STATE_UNSPECIFIED number

The job state is unknown.

PENDING number

The job is pending; it has been submitted, but is not yet running.

SETUP_DONE number

Job has been received by the service and completed initial setup; it will soon be submitted to the cluster.

RUNNING number

The job is running on the cluster.

CANCEL_PENDING number

A CancelJob request has been received, but is pending.

CANCEL_STARTED number

Transient in-flight resources have been canceled, and the request to cancel the running job has been issued to the cluster.

CANCELLED number

The job cancellation was successful.

DONE number

The job has completed successfully.

ERROR number

The job has completed, but encountered an error.

ATTEMPT_FAILURE number

Job attempt has failed. The detail field contains failure details for this attempt.

Applies to restartable jobs only.

Source:

(static) State :number

The cluster state.

Properties:
Name Type Description
UNKNOWN number

The cluster state is unknown.

CREATING number

The cluster is being created and set up. It is not ready for use.

RUNNING number

The cluster is currently running and healthy. It is ready for use.

ERROR number

The cluster encountered an error. It is not ready for use.

DELETING number

The cluster is being deleted. It cannot be used.

UPDATING number

The cluster is being updated. It continues to accept and process jobs.

Source:

(static) State :number

The operation state.

Properties:
Name Type Description
UNKNOWN number

Unused.

PENDING number

The operation has been created.

RUNNING number

The operation is running.

DONE number

The operation is done; either cancelled or completed.

Source:

(static) Substate :number

The cluster substate.

Properties:
Name Type Description
UNSPECIFIED number

The cluster substate is unknown.

UNHEALTHY number

The cluster is known to be in an unhealthy state (for example, critical daemons are not running or HDFS capacity is exhausted).

Applies to RUNNING state.

STALE_STATUS number

The agent-reported status is out of date (may occur if Cloud Dataproc loses communication with Agent).

Applies to RUNNING state.

Source:

(static) Substate :number

The job substate.

Properties:
Name Type Description
UNSPECIFIED number

The job substate is unknown.

SUBMITTED number

The Job is submitted to the agent.

Applies to RUNNING state.

QUEUED number

The Job has been received and is awaiting execution (it may be waiting for a condition to be met). See the "details" field for the reason for the delay.

Applies to RUNNING state.

STALE_STATUS number

The agent-reported status is out of date, which may be caused by a loss of communication between the agent and Cloud Dataproc. If the agent does not send a timely update, the job will fail.

Applies to RUNNING state.

Source:

Type Definitions

AcceleratorConfig

Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine.

Properties:
Name Type Description
acceleratorTypeUri string

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.

Examples:

  • https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
  • projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
  • nvidia-tesla-k80

Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

acceleratorCount number

The number of the accelerator cards of this type exposed to this instance.

Source:
See:
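
For illustration, a minimal sketch of an accelerator entry as it might appear in an instance group's accelerators list, using the short-name form; the type and count are placeholders:

  // Hypothetical accelerator entry for an InstanceGroupConfig.accelerators list.
  const acceleratorConfig = {
    acceleratorTypeUri: 'nvidia-tesla-k80', // short name, as required with Auto Zone Placement
    acceleratorCount: 2                     // number of cards exposed to each instance
  };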

CancelJobRequest

A request to cancel a job.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the job belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

jobId string

Required. The job ID.

Source:
See:

Cluster

Describes the identifying information, config, and status of a cluster of Compute Engine instances.

Properties:
Name Type Description
projectId string

Required. The Google Cloud Platform project ID that the cluster belongs to.

clusterName string

Required. The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

config Object

Required. The cluster config. Note that Cloud Dataproc may set default values, and values may change when clusters are updated.

This object should have the same structure as ClusterConfig

labels Object.<string, string>

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a cluster.

status Object

Output only. Cluster status.

This object should have the same structure as ClusterStatus

statusHistory Array.<Object>

Output only. The previous cluster status.

This object should have the same structure as ClusterStatus

clusterUuid string

Output only. A cluster UUID (Unique Universal Identifier). Cloud Dataproc generates this value when it creates the cluster.

metrics Object

Contains cluster daemon metrics such as HDFS and YARN stats.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

This object should have the same structure as ClusterMetrics

Source:
See:

ClusterConfig

The cluster config.

Properties:
Name Type Description
configBucket string

Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).

gceClusterConfig Object

Optional. The shared Compute Engine config settings for all instances in a cluster.

This object should have the same structure as GceClusterConfig

masterConfig Object

Optional. The Compute Engine config settings for the master instance in a cluster.

This object should have the same structure as InstanceGroupConfig

workerConfig Object

Optional. The Compute Engine config settings for worker instances in a cluster.

This object should have the same structure as InstanceGroupConfig

secondaryWorkerConfig Object

Optional. The Compute Engine config settings for additional worker instances in a cluster.

This object should have the same structure as InstanceGroupConfig

softwareConfig Object

Optional. The config settings for software inside the cluster.

This object should have the same structure as SoftwareConfig

initializationActions Array.<Object>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

  ROLE=$(curl -H Metadata-Flavor:Google
  http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
  if [[ "${ROLE}" == 'Master' ]]; then
    ... master specific actions ...
  else
    ... worker specific actions ...
  fi

This object should have the same structure as NodeInitializationAction

encryptionConfig Object

Optional. Encryption settings for the cluster.

This object should have the same structure as EncryptionConfig

Source:
See:
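
A minimal ClusterConfig sketch built only from the fields described above; the bucket, zone, machine type, and property values are placeholders, and Cloud Dataproc may fill in defaults for anything omitted:

  // Hypothetical cluster config; all resource names are placeholders.
  const clusterConfig = {
    configBucket: 'my-staging-bucket',              // optional staging bucket
    gceClusterConfig: {
      zoneUri: 'us-central1-f'                      // short name is also valid
    },
    masterConfig: {
      numInstances: 1,
      machineTypeUri: 'n1-standard-2',
      diskConfig: { bootDiskSizeGb: 500 }
    },
    workerConfig: {
      numInstances: 2,
      machineTypeUri: 'n1-standard-2'
    },
    softwareConfig: {
      imageVersion: '1.2',
      properties: { 'core:hadoop.tmp.dir': '/tmp' } // prefix:property format
    }
  };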

ClusterMetrics

Contains cluster daemon metrics, such as HDFS and YARN stats.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Properties:
Name Type Description
hdfsMetrics Object.<string, number>

The HDFS metrics.

yarnMetrics Object.<string, number>

The YARN metrics.

Source:
See:

ClusterOperation

The cluster operation triggered by a workflow.

Properties:
Name Type Description
operationId string

Output only. The id of the cluster operation.

error string

Output only. Error, if operation failed.

done boolean

Output only. Indicates the operation is done.

Source:
See:

ClusterSelector

A selector that chooses the target cluster for jobs based on metadata.

Properties:
Name Type Description
zone string

Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster.

If unspecified, the zone of the first cluster matching the selector is used.

clusterLabels Object.<string, string>

Required. The cluster labels. Cluster must have all labels to match.

Source:
See:

ClusterStatus

The status of a cluster and its instances.

Properties:
Name Type Description
state number

Output only. The cluster's state.

The number should be among the values of State

detail string

Output only. Optional details of cluster's state.

stateStartTime Object

Output only. Time when this state was entered.

This object should have the same structure as Timestamp

substate number

Output only. Additional state information that includes status reported by the agent.

The number should be among the values of Substate

Source:
See:

CreateClusterRequest

A request to create a cluster.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

cluster Object

Required. The cluster to create.

This object should have the same structure as Cluster

requestId string

Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Source:
See:
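
A sketch of issuing this request with the Node.js client; the project, region, and cluster contents are placeholders, and the long-running-operation handling assumes the client's usual createCluster/operation.promise() pattern:

  // Sketch only: project, region, cluster contents, and request id are placeholders.
  const dataproc = require('@google-cloud/dataproc');
  const client = new dataproc.v1.ClusterControllerClient();

  async function createCluster() {
    const request = {
      projectId: 'my-project',
      region: 'us-central1',
      cluster: {
        clusterName: 'my-cluster',
        config: { workerConfig: { numInstances: 2 } }     // see ClusterConfig above
      },
      requestId: 'd3adbeef-0000-4000-8000-000000000000'   // optional idempotency UUID
    };
    const [operation] = await client.createCluster(request);
    const [cluster] = await operation.promise();          // resolves when the cluster is created
    console.log(`Created ${cluster.clusterName}`);
  }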

CreateWorkflowTemplateRequest

A request to create a workflow template.

Properties:
Name Type Description
parent string

Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}

template Object

Required. The Dataproc workflow template to create.

This object should have the same structure as WorkflowTemplate

Source:
See:

DeleteClusterRequest

A request to delete a cluster.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

clusterName string

Required. The cluster name.

clusterUuid string

Optional. Specifying the cluster_uuid means the RPC should fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist.

requestId string

Optional. A unique id used to identify the request. If the server receives two DeleteClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Source:
See:

DeleteJobRequest

A request to delete a job.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the job belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

jobId string

Required. The job ID.

Source:
See:

DeleteWorkflowTemplateRequest

A request to delete a workflow template.

Currently started workflows will remain running.

Properties:
Name Type Description
name string

Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version number

Optional. The version of workflow template to delete. If specified, will only delete the template if the current server version matches the specified version.

Source:
See:

DiagnoseClusterRequest

A request to collect cluster diagnostic information.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

clusterName string

Required. The cluster name.

Source:
See:

DiagnoseClusterResults

The location of diagnostic output.

Properties:
Name Type Description
outputUri string

Output only. The Cloud Storage URI of the diagnostic output. The output report is a plain text file with a summary of collected diagnostics.

Source:
See:

DiskConfig

Specifies the config of disk options for a group of VM instances.

Properties:
Name Type Description
bootDiskType string

Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).

bootDiskSizeGb number

Optional. Size in GB of the boot disk (default is 500GB).

numLocalSsds number

Optional. Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

Source:
See:

EncryptionConfig

Encryption settings for the cluster.

Properties:
Name Type Description
gcePdKmsKeyName string

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

Source:
See:

GceClusterConfig

Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.

Properties:
Name Type Description
zoneUri string

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
  • projects/[project_id]/zones/[zone]
  • us-central1-f
networkUri string

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
  • projects/[project_id]/regions/global/default
  • default
subnetworkUri string

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0
  • projects/[project_id]/regions/us-east1/subnetworks/sub0
  • sub0
internalIpOnly boolean

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

serviceAccount string

Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:

  • roles/logging.logWriter
  • roles/storage.objectAdmin

(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com

serviceAccountScopes Array.<string>

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included:

  • https://www.googleapis.com/auth/cloud.useraccounts.readonly
  • https://www.googleapis.com/auth/devstorage.read_write
  • https://www.googleapis.com/auth/logging.write

If no scopes are specified, the following defaults are also provided:

  • https://www.googleapis.com/auth/bigquery
  • https://www.googleapis.com/auth/bigtable.admin.table
  • https://www.googleapis.com/auth/bigtable.data
  • https://www.googleapis.com/auth/devstorage.full_control
tags Array.<string>

The Compute Engine tags to add to all instances (see Tagging instances).

metadata Object.<string, string>

The Compute Engine metadata entries to add to all instances (see Project and instance metadata).

Source:
See:

GetClusterRequest

Request to get the resource representation for a cluster in a project.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

clusterName string

Required. The cluster name.

Source:
See:

GetJobRequest

A request to get the resource representation for a job in a project.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the job belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

jobId string

Required. The job ID.

Source:
See:

GetWorkflowTemplateRequest

A request to fetch a workflow template.

Properties:
Name Type Description
name string

Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version number

Optional. The version of workflow template to retrieve. Only previously instantiated versions can be retrieved.

If unspecified, retrieves the current version.

Source:
See:

HadoopJob

A Cloud Dataproc job for running Apache Hadoop MapReduce jobs on Apache Hadoop YARN.

Properties:
Name Type Description
mainJarFileUri string

The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'

mainClass string

The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.

args Array.<string>

Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

jarFileUris Array.<string>

Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.

fileUris Array.<string>

Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

archiveUris Array.<string>

Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.

properties Object.<string, string>

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.

loggingConfig Object

Optional. The runtime log config for job execution.

This object should have the same structure as LoggingConfig

Source:
See:
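
A sketch of a hadoopJob payload using only the fields above; the bucket, jar URI, arguments, and property are placeholders:

  // Hypothetical Hadoop job body; the bucket, jar, and arguments are placeholders.
  const hadoopJob = {
    mainJarFileUri: 'gs://my-bucket/wordcount.jar',       // or set mainClass instead
    args: ['gs://my-bucket/input/', 'gs://my-bucket/output/'],
    properties: { 'mapreduce.job.maps': '10' },           // example Hadoop property
    loggingConfig: { driverLogLevels: { root: 'INFO' } }
  };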

HiveJob

A Cloud Dataproc job for running Apache Hive queries on YARN.

Properties:
Name Type Description
queryFileUri string

The HCFS URI of the script that contains Hive queries.

queryList Object

A list of queries.

This object should have the same structure as QueryList

continueOnFailure boolean

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

scriptVariables Object.<string, string>

Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).

properties Object.<string, string>

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.

jarFileUris Array.<string>

Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

Source:
See:

InstanceGroupConfig

Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group.

Properties:
Name Type Description
numInstances number

Optional. The number of VM instances in the instance group. For master instance groups, must be set to 1.

instanceNames Array.<string>

Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.

imageUri string

Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.

machineTypeUri string

Optional. The Compute Engine machine type used for cluster instances.

A full URL, partial URI, or short name are valid. Examples:

  • https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
  • projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
  • n1-standard-2

Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.

diskConfig Object

Optional. Disk option config settings.

This object should have the same structure as DiskConfig

isPreemptible boolean

Optional. Specifies that this instance group contains preemptible instances.

managedGroupConfig Object

Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

This object should have the same structure as ManagedGroupConfig

accelerators Array.<Object>

Optional. The Compute Engine accelerator configuration for these instances.

Beta Feature: This feature is still under development. It may be changed before final release.

This object should have the same structure as AcceleratorConfig

Source:
See:

InstantiateInlineWorkflowTemplateRequest

A request to instantiate an inline workflow template.

Properties:
Name Type Description
parent string

Required. The "resource name" of the workflow template region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}

template Object

Required. The workflow template to instantiate.

This object should have the same structure as WorkflowTemplate

requestId string

Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.

It is recommended to always set this value to a UUID.

The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Source:
See:

InstantiateWorkflowTemplateRequest

A request to instantiate a workflow template.

Properties:
Name Type Description
name string

Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version number

Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version.

This option cannot be used to instantiate a previous version of workflow template.

requestId string

Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.

It is recommended to always set this value to a UUID.

The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

parameters Object.<string, string>

Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 100 characters.

Source:
See:
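
A sketch of instantiating a template by resource name with the Node.js workflow template client; the template name, request id, and parameter values are placeholders, and the call is assumed to follow the client's usual long-running-operation pattern:

  // Sketch only: the template name, tag, and parameter values are placeholders.
  const dataproc = require('@google-cloud/dataproc');
  const client = new dataproc.v1.WorkflowTemplateServiceClient();

  async function instantiateTemplate() {
    const request = {
      name: 'projects/my-project/regions/us-central1/workflowTemplates/my-template',
      requestId: 'f00dcafe-0000-4000-8000-000000000000',  // optional dedup tag (UUID)
      parameters: { ZONE: 'us-central1-f' }               // values for template parameters
    };
    const [operation] = await client.instantiateWorkflowTemplate(request);
    await operation.promise();                            // resolves when the workflow finishes
  }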

Job

A Cloud Dataproc job resource.

Properties:
Name Type Description
reference Object

Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.

This object should have the same structure as JobReference

placement Object

Required. Job information, including how, when, and where to run the job.

This object should have the same structure as JobPlacement

hadoopJob Object

Job is a Hadoop job.

This object should have the same structure as HadoopJob

sparkJob Object

Job is a Spark job.

This object should have the same structure as SparkJob

pysparkJob Object

Job is a Pyspark job.

This object should have the same structure as PySparkJob

hiveJob Object

Job is a Hive job.

This object should have the same structure as HiveJob

pigJob Object

Job is a Pig job.

This object should have the same structure as PigJob

sparkSqlJob Object

Job is a SparkSql job.

This object should have the same structure as SparkSqlJob

status Object

Output only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.

This object should have the same structure as JobStatus

statusHistory Array.<Object>

Output only. The previous job status.

This object should have the same structure as JobStatus

yarnApplications Array.<Object>

Output only. The collection of YARN applications spun up by this job.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

This object should have the same structure as YarnApplication

driverOutputResourceUri string

Output only. A URI pointing to the location of the stdout of the job's driver program.

driverControlFilesUri string

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

labels Object.<string, string>

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.

scheduling Object

Optional. Job scheduling configuration.

This object should have the same structure as JobScheduling

jobUuid string

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.

Source:
See:

JobPlacement

Cloud Dataproc job config.

Properties:
Name Type Description
clusterName string

Required. The name of the cluster where the job will be submitted.

clusterUuid string

Output only. A cluster UUID generated by the Cloud Dataproc service when the job is submitted.

Source:
See:

JobReference

Encapsulates the full scoping used to reference a job.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the job belongs to.

jobId string

Optional. The job ID, which must be unique within the project.

The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters.

If not specified by the caller, the job ID will be provided by the server.

Source:
See:

JobScheduling

Job scheduling options.

Properties:
Name Type Description
maxFailuresPerHour number

Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed.

A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window.

Maximum value is 10.

Source:
See:

JobStatus

Cloud Dataproc job status.

Properties:
Name Type Description
state number

Output only. A state message specifying the overall job state.

The number should be among the values of State

details string

Output only. Optional job state details, such as an error description if the state is ERROR.

stateStartTime Object

Output only. The time when this state was entered.

This object should have the same structure as Timestamp

substate number

Output only. Additional state information, which includes status reported by the agent.

The number should be among the values of Substate

Source:
See:

ListClustersRequest

A request to list the clusters in a project.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

filter string

Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax:

field = value [AND [field = value]] ...

where field is one of status.state, clusterName, or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be one of the following: ACTIVE, INACTIVE, CREATING, RUNNING, ERROR, DELETING, or UPDATING. ACTIVE contains the CREATING, UPDATING, and RUNNING states. INACTIVE contains the DELETING and ERROR states. clusterName is the name of the cluster provided at creation time. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator.

Example filter:

status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = *

pageSize number

Optional. The standard List page size.

pageToken string

Optional. The standard List page token.

Source:
See:
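
A sketch of listing clusters with the filter syntax above; the project, region, and label value are placeholders, and auto-pagination (the call returning the full cluster array) is assumed:

  // Sketch only: projectId, region, and label values are placeholders.
  const dataproc = require('@google-cloud/dataproc');
  const client = new dataproc.v1.ClusterControllerClient();

  async function listActiveClusters() {
    const [clusters] = await client.listClusters({
      projectId: 'my-project',
      region: 'us-central1',
      filter: 'status.state = ACTIVE AND labels.env = staging'
    });
    for (const cluster of clusters) {
      console.log(cluster.clusterName);
    }
  }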

ListClustersResponse

The list of all clusters in a project.

Properties:
Name Type Description
clusters Array.<Object>

Output only. The clusters in the project.

This object should have the same structure as Cluster

nextPageToken string

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListClustersRequest.

Source:
See:

ListJobsRequest

A request to list jobs in a project.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the job belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

pageSize number

Optional. The number of results to return in each response.

pageToken string

Optional. The page token, returned by a previous call, to request the next page of results.

clusterName string

Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster.

jobStateMatcher number

Optional. Specifies enumerated categories of jobs to list. (default = match ALL jobs).

If filter is provided, jobStateMatcher will be ignored.

The number should be among the values of JobStateMatcher

filter string

Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax:

[field = value] AND [field [= value]] ...

where field is status.state or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be either ACTIVE or NON_ACTIVE. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator.

Example filter:

status.state = ACTIVE AND labels.env = staging AND labels.starred = *

Source:
See:
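
A sketch of listing only non-terminal jobs on a named cluster; the identifiers are placeholders, and passing the matcher by its enum name as a string is an assumption about how the client accepts enum values:

  // Sketch only: projectId, region, and cluster name are placeholders.
  const dataproc = require('@google-cloud/dataproc');
  const client = new dataproc.v1.JobControllerClient();

  async function listActiveJobs() {
    const [jobs] = await client.listJobs({
      projectId: 'my-project',
      region: 'us-central1',
      clusterName: 'my-cluster',
      jobStateMatcher: 'ACTIVE'   // PENDING, RUNNING, or CANCEL_PENDING jobs
    });
    console.log(`${jobs.length} active jobs`);
  }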

ListJobsResponse

A list of jobs in a project.

Properties:
Name Type Description
jobs Array.<Object>

Output only. Jobs list.

This object should have the same structure as Job

nextPageToken string

Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListJobsRequest.

Source:
See:

ListWorkflowTemplatesRequest

A request to list workflow templates in a project.

Properties:
Name Type Description
parent string

Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}

pageSize number

Optional. The maximum number of results to return in each response.

pageToken string

Optional. The page token, returned by a previous call, to request the next page of results.

Source:
See:

ListWorkflowTemplatesResponse

A response to a request to list workflow templates in a project.

Properties:
Name Type Description
templates Array.<Object>

Output only. WorkflowTemplates list.

This object should have the same structure as WorkflowTemplate

nextPageToken string

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListWorkflowTemplatesRequest.

Source:
See:

LoggingConfig

The runtime logging config of the job.

Properties:
Name Type Description
driverLogLevels Object.<string, number>

The per-package log levels for the driver. This may include the "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

Source:
See:
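
A sketch of a loggingConfig built from the examples above; passing Level values by their enum names as strings is an assumption:

  // Hypothetical logging config using the per-package examples above.
  const loggingConfig = {
    driverLogLevels: {
      'root': 'INFO',          // rootLogger
      'org.apache': 'DEBUG',
      'com.google': 'FATAL'
    }
  };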

ManagedCluster

Cluster that is managed by the workflow.

Properties:
Name Type Description
clusterName string

Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.

The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.

config Object

Required. The cluster configuration.

This object should have the same structure as ClusterConfig

labels Object.<string, string>

Optional. The labels to associate with this cluster.

Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given cluster.

Source:
See:

ManagedGroupConfig

Specifies the resources used to actively manage an instance group.

Properties:
Name Type Description
instanceTemplateName string

Output only. The name of the Instance Template used for the Managed Instance Group.

instanceGroupManagerName string

Output only. The name of the Instance Group Manager for this group.

Source:
See:

NodeInitializationAction

Specifies an executable to run on a fully configured node and a timeout period for executable completion.

Properties:
Name Type Description
executableFile string

Required. Cloud Storage URI of executable file.

executionTimeout Object

Optional. Amount of time the executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed at the end of the timeout period.

This object should have the same structure as Duration

Source:
See:

OrderedJob

A job executed by the workflow.

Properties:
Name Type Description
stepId string

Required. The step id. The id must be unique among all jobs within the template.

The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

hadoopJob Object

Job is a Hadoop job.

This object should have the same structure as HadoopJob

sparkJob Object

Job is a Spark job.

This object should have the same structure as SparkJob

pysparkJob Object

Job is a Pyspark job.

This object should have the same structure as PySparkJob

hiveJob Object

Job is a Hive job.

This object should have the same structure as HiveJob

pigJob Object

Job is a Pig job.

This object should have the same structure as PigJob

sparkSqlJob Object

Job is a SparkSql job.

This object should have the same structure as SparkSqlJob

labels Object.<string, string>

Optional. The labels to associate with this job.

Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given job.

scheduling Object

Optional. Job scheduling configuration.

This object should have the same structure as JobScheduling

prerequisiteStepIds Array.<string>

Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.

Source:
See:

ParameterValidation

Configuration for parameter validation.

Properties:
Name Type Description
regex Object

Validation based on regular expressions.

This object should have the same structure as RegexValidation

values Object

Validation based on a list of allowed values.

This object should have the same structure as ValueValidation

Source:
See:

PigJob

A Cloud Dataproc job for running Apache Pig queries on YARN.

Properties:
Name Type Description
queryFileUri string

The HCFS URI of the script that contains the Pig queries.

queryList Object

A list of queries.

This object should have the same structure as QueryList

continueOnFailure boolean

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

scriptVariables Object.<string, string>

Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).

properties Object.<string, string>

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.

jarFileUris Array.<string>

Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.

loggingConfig Object

Optional. The runtime log config for job execution.

This object should have the same structure as LoggingConfig

Source:
See:

PySparkJob

A Cloud Dataproc job for running Apache PySpark applications on YARN.

Properties:
Name Type Description
mainPythonFileUri string

Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.

args Array.<string>

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

pythonFileUris Array.<string>

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

jarFileUris Array.<string>

Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.

fileUris Array.<string>

Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.

archiveUris Array.<string>

Optional. HCFS URIs of archives to be extracted in the working directory of Python drivers and distributed tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

properties Object.<string, string>

Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

loggingConfig Object

Optional. The runtime log config for job execution.

This object should have the same structure as LoggingConfig

Source:
See:

QueryList

A list of queries to run on a cluster.

Properties:
Name Type Description
queries Array.<string>

Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:

  "hiveJob": {
    "queryList": {
      "queries": [
        "query1",
        "query2",
        "query3;query4",
      ]
    }
  }
Source:
See:

RegexValidation

Validation based on regular expressions.

Properties:
Name Type Description
regexes Array.<string>

Required. RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).

Source:
See:

SoftwareConfig

Specifies the selection and config of software inside the cluster.

Properties:
Name Type Description
imageVersion string

Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.

properties Object.<string, string>

Optional. The properties to set on daemon config files.

Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:

  • capacity-scheduler: capacity-scheduler.xml
  • core: core-site.xml
  • distcp: distcp-default.xml
  • hdfs: hdfs-site.xml
  • hive: hive-site.xml
  • mapred: mapred-site.xml
  • pig: pig.properties
  • spark: spark-defaults.conf
  • yarn: yarn-site.xml

For more information, see Cluster properties.

optionalComponents Array.<number>

The set of optional components to activate on the cluster.

The number should be among the values of Component

Source:
See:

SparkJob

A Cloud Dataproc job for running Apache Spark applications on YARN.

Properties:
Name Type Description
mainJarFileUri string

The HCFS URI of the jar file that contains the main class.

mainClass string

The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.

args Array.<string>

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

jarFileUris Array.<string>

Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

fileUris Array.<string>

Optional. HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.

archiveUris Array.<string>

Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

properties Object.<string, string>

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

loggingConfig Object

Optional. The runtime log config for job execution.

This object should have the same structure as LoggingConfig

Source:
See:

SparkSqlJob

A Cloud Dataproc job for running Apache Spark SQL queries.

Properties:
Name Type Description
queryFileUri string

The HCFS URI of the script that contains SQL queries.

queryList Object

A list of queries.

This object should have the same structure as QueryList

scriptVariables Object.<string, string>

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

properties Object.<string, string>

Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.

jarFileUris Array.<string>

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

loggingConfig Object

Optional. The runtime log config for job execution.

This object should have the same structure as LoggingConfig

Source:
See:

SubmitJobRequest

A request to submit a job.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the job belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

job Object

Required. The job resource.

This object should have the same structure as Job

requestId string

Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest requests with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Source:
See:
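
A sketch of submitting a PySpark job with the Node.js job client; the project, region, cluster name, and file URI are placeholders:

  // Sketch only: all identifiers and URIs are placeholders.
  const dataproc = require('@google-cloud/dataproc');
  const client = new dataproc.v1.JobControllerClient();

  async function submitPySparkJob() {
    const [job] = await client.submitJob({
      projectId: 'my-project',
      region: 'us-central1',
      job: {
        placement: { clusterName: 'my-cluster' },
        pysparkJob: { mainPythonFileUri: 'gs://my-bucket/analyze.py' }
      }
    });
    console.log(`Submitted ${job.reference.jobId}`);
  }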

TemplateParameter

A configurable parameter that replaces one or more fields in the template. Parameterizable fields:

  • Labels
  • File uris
  • Job properties
  • Job arguments
  • Script variables
  • Main class (in HadoopJob and SparkJob)
  • Zone (in ClusterSelector)
Properties:
Name Type Description
name string

Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.

fields Array.<string>

Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.

A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.

Also, field paths can reference fields using the following syntax:

  • Values in maps can be referenced by key:

    • labels['key']
    • placement.clusterSelector.clusterLabels['key']
    • placement.managedCluster.labels['key']
    • jobs['step-id'].labels['key']
  • Jobs in the jobs list can be referenced by step-id:

    • jobs['step-id'].hadoopJob.mainJarFileUri
    • jobs['step-id'].hiveJob.queryFileUri
    • jobs['step-id'].pySparkJob.mainPythonFileUri
    • jobs['step-id'].hadoopJob.jarFileUris[0]
    • jobs['step-id'].hadoopJob.archiveUris[0]
    • jobs['step-id'].hadoopJob.fileUris[0]
    • jobs['step-id'].pySparkJob.pythonFileUris[0]
  • Items in repeated fields can be referenced by a zero-based index:

    • jobs['step-id'].sparkJob.args[0]
  • Other examples:

    • jobs['step-id'].hadoopJob.properties['key']
    • jobs['step-id'].hadoopJob.args[0]
    • jobs['step-id'].hiveJob.scriptVariables['key']
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

  • placement.clusterSelector.clusterLabels
  • jobs['step-id'].sparkJob.args
description string

Optional. Brief description of the parameter. Must not exceed 1024 characters.

validation Object

Optional. Validation rules to be applied to this parameter's value.

This object should have the same structure as ParameterValidation

Source:
See:
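
A sketch of a single parameter that substitutes the cluster selector's zone, using the field-path and validation forms described above; the name, description, and regex are illustrative only:

  // Hypothetical template parameter; name and validation regex are illustrative.
  const zoneParameter = {
    name: 'ZONE',
    fields: ['placement.clusterSelector.zone'],
    description: 'The zone to run the workflow in.',
    validation: {
      regex: { regexes: ['us-central1-[a-f]'] }  // value must match in its entirety
    }
  };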

UpdateClusterRequest

A request to update a cluster.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project the cluster belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

clusterName string

Required. The cluster name.

cluster Object

Required. The changes to the cluster.

This object should have the same structure as Cluster

gracefulDecommissionTimeout Object

Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day.

Only supported on Dataproc image versions 1.2 and higher.

This object should have the same structure as Duration

updateMask Object

Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value, as follows:

  {
    "config":{
      "workerConfig":{
        "numInstances":"5"
      }
    }
  }

Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would be set as follows:

  {
    "config":{
      "secondaryWorkerConfig":{
        "numInstances":"5"
      }
    }
  }

Note: Currently, only the following fields can be updated:

Mask Purpose
labels Update labels
config.worker_config.num_instances Resize primary worker group
config.secondary_worker_config.num_instances Resize secondary worker group

This object should have the same structure as FieldMask

requestId string

Optional. A unique id used to identify the request. If the server receives two UpdateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned.

It is recommended to always set this value to a UUID.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

Source:
See:
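
A sketch of resizing the primary worker group using the update_mask documented above; the identifiers are placeholders, and it is assumed that the mask path uses the snake_case form shown above while the cluster body uses the client's camelCase field names:

  // Sketch only: project, region, and cluster name are placeholders.
  const dataproc = require('@google-cloud/dataproc');
  const client = new dataproc.v1.ClusterControllerClient();

  async function resizeWorkers() {
    const [operation] = await client.updateCluster({
      projectId: 'my-project',
      region: 'us-central1',
      clusterName: 'my-cluster',
      cluster: { config: { workerConfig: { numInstances: 5 } } },
      updateMask: { paths: ['config.worker_config.num_instances'] }
    });
    await operation.promise();  // resolves when the resize completes
  }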

UpdateJobRequest

A request to update a job.

Properties:
Name Type Description
projectId string

Required. The ID of the Google Cloud Platform project that the job belongs to.

region string

Required. The Cloud Dataproc region in which to handle the request.

jobId string

Required. The job ID.

job Object

Required. The changes to the job.

This object should have the same structure as Job

updateMask Object

Required. Specifies the path, relative to Job, of the field to update. For example, to update the labels of a Job the update_mask parameter would be specified as labels, and the PATCH request body would specify the new value. Note: Currently, labels is the only field that can be updated.

This object should have the same structure as FieldMask

Source:
See:

UpdateWorkflowTemplateRequest

A request to update a workflow template.

Properties:
Name Type Description
template Object

Required. The updated workflow template.

The template.version field must match the current version.

This object should have the same structure as WorkflowTemplate

Source:
See:

ValueValidation

Validation based on a list of allowed values.

Properties:
Name Type Description
values Array.<string>

Required. List of allowed values for the parameter.

Source:
See:

WorkflowGraph

The workflow graph.

Properties:
Name Type Description
nodes Array.<Object>

Output only. The workflow nodes.

This object should have the same structure as WorkflowNode

Source:
See:

WorkflowMetadata

Metadata describing a Cloud Dataproc workflow instantiated from a workflow template.

Properties:
Name Type Description
template string

Output only. The "resource name" of the template.

version number

Output only. The version of template at the time of workflow instantiation.

createCluster Object

Output only. The create cluster operation metadata.

This object should have the same structure as ClusterOperation

graph Object

Output only. The workflow graph.

This object should have the same structure as WorkflowGraph

deleteCluster Object

Output only. The delete cluster operation metadata.

This object should have the same structure as ClusterOperation

state number

Output only. The workflow state.

The number should be among the values of State

clusterName string

Output only. The name of the target cluster.

parameters Object.<string, string>

Map from parameter names to values that were used for those parameters.

startTime Object

Output only. Workflow start time.

This object should have the same structure as Timestamp

endTime Object

Output only. Workflow end time.

This object should have the same structure as Timestamp

clusterUuid string

Output only. The UUID of the target cluster.

Source:
See:

WorkflowNode

The workflow node.

Properties:
Name Type Description
stepId string

Output only. The name of the node.

prerequisiteStepIds Array.<string>

Output only. Node's prerequisite nodes.

jobId string

Output only. The job id; populated after the node enters RUNNING state.

state number

Output only. The node state.

The number should be among the values of NodeState

error string

Output only. The error detail.

Source:
See:

WorkflowTemplate

A Cloud Dataproc workflow template resource.

Properties:
Name Type Description
id string

Required. The template id.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

name string

Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version number

Optional. Used to perform a consistent read-modify-write.

This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.

createTime Object

Output only. The time template was created.

This object should have the same structure as Timestamp

updateTime Object

Output only. The time template was last updated.

This object should have the same structure as Timestamp

labels Object.<string, string>

Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.

Label keys must contain 1 to 63 characters, and must conform to RFC 1035.

Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035.

No more than 32 labels can be associated with a template.

placement Object

Required. WorkflowTemplate scheduling information.

This object should have the same structure as WorkflowTemplatePlacement

jobs Array.<Object>

Required. The Directed Acyclic Graph of Jobs to submit.

This object should have the same structure as OrderedJob

parameters Array.<Object>

Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.

This object should have the same structure as TemplateParameter

Source:
See:
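
A sketch of a small template body with a managed cluster and one ordered job, using only fields described above; all ids, names, and URIs are placeholders:

  // Hypothetical workflow template; all names and URIs are placeholders.
  const workflowTemplate = {
    id: 'my-template',
    placement: {
      managedCluster: {
        clusterName: 'wf-cluster',   // prefix; a random suffix is appended
        config: { workerConfig: { numInstances: 2 } }
      }
    },
    jobs: [
      {
        stepId: 'run-spark',
        sparkJob: { mainJarFileUri: 'gs://my-bucket/job.jar' }
      }
    ]
  };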

WorkflowTemplatePlacement

Specifies workflow execution target.

Either managed_cluster or cluster_selector is required.

Properties:
Name Type Description
managedCluster Object

Optional. A cluster that is managed by the workflow.

This object should have the same structure as ManagedCluster

clusterSelector Object

Optional. A selector that chooses target cluster for jobs based on metadata.

The selector is evaluated at the time each job is submitted.

This object should have the same structure as ClusterSelector

Source:
See:

YarnApplication

A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

Properties:
Name Type Description
name string

Required. The application name.

state number

Required. The application state.

The number should be among the values of State

progress number

Required. The numerical progress of the application, from 1 to 100.

trackingUrl string

Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.

Source:
See: