Class: Google::Apis::DataprocV1::Job

- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in: lib/google/apis/dataproc_v1/classes.rb,
  lib/google/apis/dataproc_v1/representations.rb
Overview
A Dataproc job resource.
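As a hedged usage sketch, the snippet below builds a Job and submits it through the generated DataprocService client. The project, region, cluster name, and jar path are placeholders, and authentication assumes the googleauth gem's application-default credentials.

require 'google/apis/dataproc_v1'
require 'googleauth'

# Build a Job that runs the SparkPi example on an existing cluster, then
# submit it. All identifiers below (project, region, cluster, jar path)
# are placeholders.
dataproc = Google::Apis::DataprocV1::DataprocService.new
dataproc.authorization = Google::Auth.get_application_default(
  ['https://www.googleapis.com/auth/cloud-platform']
)

job = Google::Apis::DataprocV1::Job.new(
  placement: Google::Apis::DataprocV1::JobPlacement.new(cluster_name: 'example-cluster'),
  spark_job: Google::Apis::DataprocV1::SparkJob.new(
    main_class: 'org.apache.spark.examples.SparkPi',
    jar_file_uris: ['file:///usr/lib/spark/examples/jars/spark-examples.jar'],
    args: ['1000']
  )
)

request = Google::Apis::DataprocV1::SubmitJobRequest.new(job: job)
submitted = dataproc.submit_job('example-project', 'us-central1', request)
puts submitted.reference.job_id  # server-assigned unless a reference was set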
Instance Attribute Summary

- #done ⇒ Boolean (also: #done?)
  Output only.
- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
  A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1::HiveJob
  A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1::PigJob
  A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1::JobPlacement
  Dataproc job config.
- #presto_job ⇒ Google::Apis::DataprocV1::PrestoJob
  A Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
  A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1::SparkJob
  A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob
  A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
  A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1::JobStatus
  Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
  Output only.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
  Output only.
Instance Method Summary

- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1601

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#done ⇒ Boolean Also known as: done?
Output only. Indicates whether the job is completed. If the value is false,
the job is still in progress. If true, the job is completed, and the
status.state field indicates whether it was successful, failed, or cancelled.
Corresponds to the JSON property done

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1485

def done
  @done
end
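Because done exposes completion as a plain boolean (with the done? alias), a submitted job can be awaited by re-fetching it. A polling sketch, continuing the submission example above and assuming the generated get_project_region_job method:

# Poll until `done?` flips to true, then inspect the terminal state.
# `dataproc` and `submitted` come from the submission sketch in the overview.
job_id = submitted.reference.job_id
job = dataproc.get_project_region_job('example-project', 'us-central1', job_id)
until job.done?
  sleep 10
  job = dataproc.get_project_region_job('example-project', 'us-central1', job_id)
end
puts job.status.state  # e.g. "DONE", "ERROR", or "CANCELLED"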
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which may
be used as part of job setup and handling. If not present, control files may
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1493

def driver_control_files_uri
  @driver_control_files_uri
end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1499

def driver_output_resource_uri
  @driver_output_resource_uri
end
#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
A Dataproc job for running Apache Hadoop MapReduce
(https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html)
jobs on Apache Hadoop YARN
(https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1507

def hadoop_job
  @hadoop_job
end
#hive_job ⇒ Google::Apis::DataprocV1::HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on
YARN.
Corresponds to the JSON property hiveJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1513

def hive_job
  @hive_job
end
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that may be
reused over time.
Corresponds to the JSON property jobUuid
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1520

def job_uuid
  @job_uuid
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to
63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
Label values may be empty, but, if present, must contain 1 to 63 characters,
and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more
than 32 labels can be associated with a job.
Corresponds to the JSON property labels

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1529

def labels
  @labels
end
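For illustration, a labels hash that satisfies these constraints (the key and value names are invented):

job.labels = {
  'env'   => 'staging',         # 1-63 chars each, RFC 1035 compliant
  'team'  => 'data-platform',
  'owner' => 'jsmith'
}                               # well under the 32-label cap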
#pig_job ⇒ Google::Apis::DataprocV1::PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on
YARN.
Corresponds to the JSON property pigJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1535

def pig_job
  @pig_job
end
#placement ⇒ Google::Apis::DataprocV1::JobPlacement
Dataproc job config.
Corresponds to the JSON property placement
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1540

def placement
  @placement
end
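The one-line summary above is all the source provides; in practice JobPlacement names the cluster the job runs on. A sketch using its cluster_name attribute (the value is a placeholder):

job.placement = Google::Apis::DataprocV1::JobPlacement.new(
  cluster_name: 'analytics-cluster'  # must name an existing cluster in the region
)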
#presto_job ⇒ Google::Apis::DataprocV1::PrestoJob
A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT:
The Dataproc Presto Optional Component
(https://cloud.google.com/dataproc/docs/concepts/components/presto) must be
enabled when the cluster is created to submit a Presto job to the cluster.
Corresponds to the JSON property prestoJob

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1548

def presto_job
  @presto_job
end
#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
A Dataproc job for running Apache PySpark
(https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
applications on YARN.
Corresponds to the JSON property pysparkJob

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1554

def pyspark_job
  @pyspark_job
end
#reference ⇒ Google::Apis::DataprocV1::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1559

def reference
  @reference
end
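Setting a reference lets the caller choose the job_id instead of receiving a server-generated one; a sketch with placeholder values:

job.reference = Google::Apis::DataprocV1::JobReference.new(
  project_id: 'example-project',
  job_id: 'nightly-etl-20240115'  # must be unique among active jobs in the project
)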
#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1564

def scheduling
  @scheduling
end
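A sketch using JobScheduling's max_failures_per_hour attribute, which bounds how often the service restarts a failed driver:

job.scheduling = Google::Apis::DataprocV1::JobScheduling.new(
  max_failures_per_hour: 5  # allow up to 5 driver restarts per hour
)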
#spark_job ⇒ Google::Apis::DataprocV1::SparkJob
A Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN.
Corresponds to the JSON property sparkJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1570

def spark_job
  @spark_job
end
#spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob
A Dataproc job for running Apache SparkR
(https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Corresponds to the JSON property sparkRJob

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1576

def spark_r_job
  @spark_r_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1582

def spark_sql_job
  @spark_sql_job
end
#status ⇒ Google::Apis::DataprocV1::JobStatus
Dataproc job status.
Corresponds to the JSON property status
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1587

def status
  @status
end
#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
# File 'lib/google/apis/dataproc_v1/classes.rb', line 1592

def status_history
  @status_history
end
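Together with status, this gives the job's full state trail; a sketch of printing the transitions (the field names come from JobStatus):

(job.status_history || []).each do |s|
  puts "#{s.state_start_time}  #{s.state}  #{s.details}"
end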
#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta
Feature: This report is available for testing purposes only. It may be changed
before final release.
Corresponds to the JSON property yarnApplications

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1599

def yarn_applications
  @yarn_applications
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.

# File 'lib/google/apis/dataproc_v1/classes.rb', line 1606

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
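Because each assignment is guarded by args.key?, update! patches only the keys actually passed, leaving every other attribute untouched:

job = Google::Apis::DataprocV1::Job.new(labels: { 'env' => 'dev' })
job.update!(done: true)  # touches only @done
job.labels               # => { "env" => "dev" }  (unchanged)
job.done?                # => true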