Class: Google::Apis::DataprocV1::OrderedJob
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  generated/google/apis/dataproc_v1/classes.rb,
  generated/google/apis/dataproc_v1/representations.rb
Overview
A job executed by the workflow.
Instance Attribute Summary collapse
-
#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
-
#hive_job ⇒ Google::Apis::DataprocV1::HiveJob
A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
-
#labels ⇒ Hash<String,String>
Optional.
-
#pig_job ⇒ Google::Apis::DataprocV1::PigJob
A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
-
#prerequisite_step_ids ⇒ Array<String>
Optional.
-
#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
-
#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
Job scheduling options.
-
#spark_job ⇒ Google::Apis::DataprocV1::SparkJob
A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
-
#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
-
#step_id ⇒ String
Required.
Instance Method Summary collapse
-
#initialize(**args) ⇒ OrderedJob
constructor
A new instance of OrderedJob.
-
#update!(**args) ⇒ Object
Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ OrderedJob
Returns a new instance of OrderedJob.

# File 'generated/google/apis/dataproc_v1/classes.rb', line 1641

def initialize(**args)
  update!(**args)
end
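As a minimal sketch of constructing one of these objects (assuming the google-api-client gem is loaded; the step id, class name, jar URI, and label values below are invented for illustration), the keyword arguments map directly to the attributes listed above:

require 'google/apis/dataproc_v1'

# Illustrative values only: the main class and jar URI are placeholders.
job = Google::Apis::DataprocV1::OrderedJob.new(
  step_id: 'spark-step',
  labels: { 'env' => 'dev' },
  spark_job: Google::Apis::DataprocV1::SparkJob.new(
    main_class: 'com.example.SparkApp',
    jar_file_uris: ['gs://example-bucket/spark-app.jar']
  )
)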
Instance Attribute Details
#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1579

def hadoop_job
  @hadoop_job
end
#hive_job ⇒ Google::Apis::DataprocV1::HiveJob
A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/)
queries on YARN.
Corresponds to the JSON property hiveJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1585

def hive_job
  @hive_job
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}. No more than 32 labels can be associated with a given job.
Corresponds to the JSON property labels
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1594

def labels
  @labels
end
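As a hedged illustration of those constraints (the key and value strings are invented for the example), a conforming labels hash could look like this:

# Keys here are lowercase letters only; values use lowercase letters,
# digits, underscores, and hyphens, within the documented length limits.
job.labels = {
  'team'        => 'data-platform',
  'environment' => 'staging'
}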
#pig_job ⇒ Google::Apis::DataprocV1::PigJob
A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries
on YARN.
Corresponds to the JSON property pigJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1600

def pig_job
  @pig_job
end
#prerequisite_step_ids ⇒ Array<String>
Optional. The list of prerequisite job step_ids. If not specified,
the job will start at the beginning of the workflow.
Corresponds to the JSON property prerequisiteStepIds
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1606

def prerequisite_step_ids
  @prerequisite_step_ids
end
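A short sketch of how prerequisites chain steps together (the step ids and query URI are hypothetical): a step listing 'prep-step' here will not start until the step with that id completes.

# Illustrative only: 'prep-step' must match the step_id of another
# OrderedJob in the same template; the query URI is a placeholder.
report_job = Google::Apis::DataprocV1::OrderedJob.new(
  step_id: 'report-step',
  prerequisite_step_ids: ['prep-step'],
  hive_job: Google::Apis::DataprocV1::HiveJob.new(
    query_file_uri: 'gs://example-bucket/report.hql'
  )
)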
#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1612

def pyspark_job
  @pyspark_job
end
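As a brief sketch (the script URI is a placeholder), a PySpark step is attached by assigning a PySparkJob to this attribute:

# Illustrative only: the Python file URI is invented for the example.
job.pyspark_job = Google::Apis::DataprocV1::PySparkJob.new(
  main_python_file_uri: 'gs://example-bucket/job.py'
)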
#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1617

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1::SparkJob
A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN.
Corresponds to the JSON property sparkJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1623

def spark_job
  @spark_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
Corresponds to the JSON property sparkSqlJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1629

def spark_sql_job
  @spark_sql_job
end
#step_id ⇒ String
Required. The step id. The id must be unique among all jobs within the
template. The step id is used as a prefix for the job id, as the job's
goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field
of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9),
underscores (_), and hyphens (-). It cannot begin or end with an underscore
or hyphen, and must consist of between 3 and 50 characters.
Corresponds to the JSON property stepId
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1639

def step_id
  @step_id
end
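The prose rules above can be approximated with a regular expression; the pattern below is a reconstruction for illustration, not part of the library:

# Reconstructed from the documented rules: 3-50 characters, letters,
# digits, underscores, and hyphens, not beginning or ending with '_' or '-'.
STEP_ID_PATTERN = /\A[a-zA-Z0-9][a-zA-Z0-9_-]{1,48}[a-zA-Z0-9]\z/

STEP_ID_PATTERN.match?('spark-step')  # => true
STEP_ID_PATTERN.match?('-bad-start')  # => false (leading hyphen)
STEP_ID_PATTERN.match?('ab')          # => false (shorter than 3 characters)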
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.

# File 'generated/google/apis/dataproc_v1/classes.rb', line 1646

def update!(**args)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @prerequisite_step_ids = args[:prerequisite_step_ids] if args.key?(:prerequisite_step_ids)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @step_id = args[:step_id] if args.key?(:step_id)
end
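A small usage sketch (attribute values invented): update! only touches the keys it is given, so attributes not mentioned keep their current values:

# Illustrative only: rename the step and swap its labels in one call;
# a spark_job set earlier would be left unchanged.
job.update!(
  step_id: 'spark-step-v2',
  labels: { 'env' => 'prod' }
)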