Class: Google::Apis::DataprocV1beta2::Job
Inherits: Object
  - Object
  - Google::Apis::DataprocV1beta2::Job
Includes: Core::Hashable, Core::JsonObjectSupport
Defined in:
  generated/google/apis/dataproc_v1beta2/classes.rb,
  generated/google/apis/dataproc_v1beta2/representations.rb
Overview
A Dataproc job resource.
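The example below is a minimal sketch of constructing a Job locally from the attributes documented on this page. The job_id and cluster_name fields are assumed from the JobReference and JobPlacement classes, which are documented separately, and all values shown are hypothetical.

require 'google/apis/dataproc_v1beta2'

# Build a job resource; any attribute listed below may be passed as a keyword.
job = Google::Apis::DataprocV1beta2::Job.new(
  reference: Google::Apis::DataprocV1beta2::JobReference.new(job_id: 'my-job-id'),        # hypothetical id
  placement: Google::Apis::DataprocV1beta2::JobPlacement.new(cluster_name: 'my-cluster'), # hypothetical cluster
  labels: { 'env' => 'dev' }
)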
Instance Attribute Summary
- #done ⇒ Boolean (also: #done?)
  Output only.
- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
  A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
  A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
  A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
  Dataproc job config.
- #presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
  A Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
  A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1beta2::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
  A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
  A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
  A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1beta2::JobStatus
  Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
  Output only.
- #submitted_by ⇒ String
  Output only.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
  Output only.
Instance Method Summary
- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job.
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1635

def initialize(**args)
  update!(**args)
end
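Because the constructor simply delegates to #update!, any of the attributes documented below can be passed as keyword arguments. A brief sketch:

# These two objects end up with identical state.
job_a = Google::Apis::DataprocV1beta2::Job.new(labels: { 'team' => 'data' })

job_b = Google::Apis::DataprocV1beta2::Job.new
job_b.update!(labels: { 'team' => 'data' })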
Instance Attribute Details
#done ⇒ Boolean Also known as: done?
Output only. Indicates whether the job is completed. If the value is false,
the job is still in progress. If true, the job is completed, and the
status.state field will indicate whether it was successful, failed, or cancelled.
Corresponds to the JSON property done
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1510

def done
  @done
end
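A short sketch of how done? pairs with status in practice, assuming a JobStatus that exposes the state field mentioned above:

if job.done?
  # status.state reports whether the job succeeded, failed, or was cancelled.
  puts "Job finished with state: #{job.status.state}"
else
  puts 'Job still in progress'
end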
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which may
be used as part of job setup and handling. If not present, control files may
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1518

def driver_control_files_uri
  @driver_control_files_uri
end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1524

def driver_output_resource_uri
  @driver_output_resource_uri
end
#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html)
jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1532

def hadoop_job
  @hadoop_job
end
#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on
YARN.
Corresponds to the JSON property hiveJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1538

def hive_job
  @hive_job
end
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that may be
reused over time.
Corresponds to the JSON property jobUuid
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1545

def job_uuid
  @job_uuid
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to
63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
Label values may be empty, but, if present, must contain 1 to 63 characters,
and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
No more than 32 labels can be associated with a job.
Corresponds to the JSON property labels
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1554

def labels
  @labels
end
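Client-side, labels is a plain Ruby Hash; the constraints above (RFC 1035 keys, at most 32 entries) are enforced by the service, not by this class. A brief sketch with hypothetical values:

job.labels = {
  'env'   => 'prod',    # keys: 1-63 chars, RFC 1035 compliant
  'owner' => 'data-eng' # values may be empty; at most 32 labels per job
}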
#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on
YARN.
Corresponds to the JSON property pigJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1560

def pig_job
  @pig_job
end
#placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
Dataproc job config.
Corresponds to the JSON property placement
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1565

def placement
  @placement
end
#presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT:
The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto)
must be enabled when the cluster is created to submit a Presto job to the cluster.
Corresponds to the JSON property prestoJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1573

def presto_job
  @presto_job
end
#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1579

def pyspark_job
  @pyspark_job
end
#reference ⇒ Google::Apis::DataprocV1beta2::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1584

def reference
  @reference
end
#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1589

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
A Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN. The specification of the main method to call to drive
the job. Specify either the jar file that contains the main class or the main
class name. To pass both a main jar and a main class in that jar, add the jar
to CommonJob.jar_file_uris, and then specify the main class name in main_class.
Corresponds to the JSON property sparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1598

def spark_job
  @spark_job
end
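To illustrate the either/or rule above, a hedged sketch; main_class, jar_file_uris, and main_jar_file_uri are assumed from the SparkJob class, which is documented separately, and the class name and bucket URI are hypothetical:

# Option 1: add the jar to jar_file_uris and name the main class.
job.spark_job = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_class: 'com.example.WordCount',
  jar_file_uris: ['gs://my-bucket/wordcount.jar']
)

# Option 2: point directly at the jar that contains the main class.
job.spark_job = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_jar_file_uri: 'gs://my-bucket/wordcount.jar'
)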
#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html)
applications on YARN.
Corresponds to the JSON property sparkRJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1604

def spark_r_job
  @spark_r_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1610

def spark_sql_job
  @spark_sql_job
end
#status ⇒ Google::Apis::DataprocV1beta2::JobStatus
Dataproc job status.
Corresponds to the JSON property status
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1615

def status
  @status
end
#status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1620

def status_history
  @status_history
end
#submitted_by ⇒ String
Output only. The email address of the user submitting the job. For jobs
submitted on the cluster, the address is username@hostname.
Corresponds to the JSON property submittedBy
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1626

def submitted_by
  @submitted_by
end
#yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta
Feature: This report is available for testing purposes only. It may be changed
before final release.
Corresponds to the JSON property yarnApplications
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1633

def yarn_applications
  @yarn_applications
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1640

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @submitted_by = args[:submitted_by] if args.key?(:submitted_by)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
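Note that update! assigns only the keys actually present in args, so attributes that are not passed keep their current values. A brief sketch with hypothetical values:

job = Google::Apis::DataprocV1beta2::Job.new(labels: { 'env' => 'dev' })
job.update!(job_uuid: 'example-uuid') # hypothetical value
job.labels # => { 'env' => 'dev' } (untouched, since :labels was not passed)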