Class: Google::Apis::DataprocV1::Job

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/dataproc_v1/classes.rb,
generated/google/apis/dataproc_v1/representations.rb

Overview

A Dataproc job resource.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ Job

Returns a new instance of Job.



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1591

def initialize(**args)
   update!(**args)
end
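
For orientation, here is a minimal, hedged sketch of constructing a Job in memory. The keyword arguments map to the attributes documented below; the JobReference#job_id, labels, and SparkJob#main_class / #jar_file_uris values (including the example jar URI) are illustrative placeholders rather than values taken from this page.

require 'google/apis/dataproc_v1'

# Build a Job resource locally; nothing is submitted to the API here.
job = Google::Apis::DataprocV1::Job.new(
  reference: Google::Apis::DataprocV1::JobReference.new(job_id: 'example-spark-pi'),
  labels: { 'team' => 'analytics' },
  spark_job: Google::Apis::DataprocV1::SparkJob.new(
    main_class: 'org.apache.spark.examples.SparkPi',
    jar_file_uris: ['file:///usr/lib/spark/examples/jars/spark-examples.jar']
  )
)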

Instance Attribute Details

#done ⇒ Boolean Also known as: done?

Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field indicates whether it was successful, failed, or cancelled. Corresponds to the JSON property done

Returns:

  • (Boolean)


# File 'generated/google/apis/dataproc_v1/classes.rb', line 1475

def done
  @done
end
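
As a usage sketch only: the loop below polls a job until done? is true and then reads status.state. It assumes the generated service class Google::Apis::DataprocV1::DataprocService and its get_project_region_job method, plus Application Default Credentials; the project, region, and job ID strings are placeholders.

require 'google/apis/dataproc_v1'
require 'googleauth'

dataproc = Google::Apis::DataprocV1::DataprocService.new
dataproc.authorization = Google::Auth.get_application_default(
  ['https://www.googleapis.com/auth/cloud-platform']
)

# Re-fetch the job until it reports completion, then inspect its final state.
job = dataproc.get_project_region_job('my-project', 'us-central1', 'example-spark-pi')
until job.done?
  sleep 10
  job = dataproc.get_project_region_job('my-project', 'us-central1', 'example-spark-pi')
end
puts job.status.state   # e.g. DONE, ERROR, or CANCELLED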

#driver_control_files_uri ⇒ String

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri. Corresponds to the JSON property driverControlFilesUri

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1/classes.rb', line 1483

def driver_control_files_uri
  @driver_control_files_uri
end

#driver_output_resource_uri ⇒ String

Output only. A URI pointing to the location of the stdout of the job's driver program. Corresponds to the JSON property driverOutputResourceUri

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1/classes.rb', line 1489

def driver_output_resource_uri
  @driver_output_resource_uri
end

#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob

A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1497

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1::HiveJob

A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1503

def hive_job
  @hive_job
end

#job_uuid ⇒ String

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time. Corresponds to the JSON property jobUuid

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1/classes.rb', line 1510

def job_uuid
  @job_uuid
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'generated/google/apis/dataproc_v1/classes.rb', line 1519

def labels
  @labels
end

#pig_job ⇒ Google::Apis::DataprocV1::PigJob

A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1525

def pig_job
  @pig_job
end

#placement ⇒ Google::Apis::DataprocV1::JobPlacement

Dataproc job config. Corresponds to the JSON property placement



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1530

def placement
  @placement
end

#presto_job ⇒ Google::Apis::DataprocV1::PrestoJob

A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. Corresponds to the JSON property prestoJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1538

def presto_job
  @presto_job
end

#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob

A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1544

def pyspark_job
  @pyspark_job
end

#reference ⇒ Google::Apis::DataprocV1::JobReference

Encapsulates the full scoping used to reference a job. Corresponds to the JSON property reference



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1549

def reference
  @reference
end

#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1554

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1::SparkJob

A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. Corresponds to the JSON property sparkJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1560

def spark_job
  @spark_job
end

#spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob

A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. Corresponds to the JSON property sparkRJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1566

def spark_r_job
  @spark_r_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob

A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1572

def spark_sql_job
  @spark_sql_job
end

#status ⇒ Google::Apis::DataprocV1::JobStatus

Dataproc job status. Corresponds to the JSON property status



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1577

def status
  @status
end

#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>

Output only. The previous job status. Corresponds to the JSON property statusHistory



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1582

def status_history
  @status_history
end

#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>

Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. Corresponds to the JSON property yarnApplications



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1589

def yarn_applications
  @yarn_applications
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/dataproc_v1/classes.rb', line 1596

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
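
A brief, hedged usage note: because each assignment above is guarded by args.key?, update! only overwrites the attributes whose keys are passed in, leaving every other attribute untouched, which makes it convenient for revising a subset of fields on an existing instance. The JobScheduling#max_failures_per_hour accessor used below is an assumption about that class, not something documented on this page.

job = Google::Apis::DataprocV1::Job.new(labels: { 'team' => 'analytics' })

# Only :labels and :scheduling are replaced; other attributes keep their values.
job.update!(
  labels: { 'team' => 'analytics', 'env' => 'staging' },
  scheduling: Google::Apis::DataprocV1::JobScheduling.new(max_failures_per_hour: 1)
)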