Class: Google::Apis::DataprocV1beta2::OrderedJob
- Inherits: Object
  - Object
  - Google::Apis::DataprocV1beta2::OrderedJob
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
- generated/google/apis/dataproc_v1beta2/classes.rb,
generated/google/apis/dataproc_v1beta2/representations.rb
Overview
A job executed by the workflow.
Instance Attribute Summary

- #hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
  A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
  A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
  A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #prerequisite_step_ids ⇒ Array<String>
  Optional.
- #presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
  A Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
  A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
  A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
  A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
  A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #step_id ⇒ String
  Required.
Instance Method Summary

- #initialize(**args) ⇒ OrderedJob (constructor)
  A new instance of OrderedJob.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ OrderedJob
Returns a new instance of OrderedJob.
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2463

def initialize(**args)
  update!(**args)
end
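For orientation, here is a minimal, hedged sketch of building an OrderedJob through this constructor; the step id and label values are hypothetical placeholders:

require 'google/apis/dataproc_v1beta2'

# Each keyword corresponds to one of the instance attributes documented below.
job = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'ingest',                 # hypothetical step id
  labels:  { 'team' => 'data-eng' }  # hypothetical label
)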
Instance Attribute Details
#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2384

def hadoop_job
  @hadoop_job
end
#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on
YARN.
Corresponds to the JSON property hiveJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2390

def hive_job
  @hive_job
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}. No more than 32 labels can be associated with a given job.
Corresponds to the JSON property labels
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2399

def labels
  @labels
end
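As a hedged illustration of the constraints above, a labels hash whose keys use only lowercase letters and whose values stick to letters, digits, underscores, and hyphens (all names hypothetical):

job.labels = {
  'env'  => 'prod',       # key: lowercase letters only
  'team' => 'data_eng-2'  # value: letters, digits, underscores, hyphens
}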
#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on
YARN.
Corresponds to the JSON property pigJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2405

def pig_job
  @pig_job
end
#prerequisite_step_ids ⇒ Array<String>
Optional. The list of prerequisite job step_ids. If not specified, the job starts at the beginning of the workflow.
Corresponds to the JSON property prerequisiteStepIds
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2411

def prerequisite_step_ids
  @prerequisite_step_ids
end
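A short sketch of how this field chains steps inside a workflow template; the ids are hypothetical, and 'transform' will only start once 'ingest' has completed:

ingest = Google::Apis::DataprocV1beta2::OrderedJob.new(step_id: 'ingest')

transform = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'transform',
  prerequisite_step_ids: ['ingest']  # must match the step_id of another job
)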
#presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster.
Corresponds to the JSON property prestoJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2419

def presto_job
  @presto_job
end
#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2425

def pyspark_job
  @pyspark_job
end
#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2430

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
A Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN. The specification of the main method to call to drive
the job. Specify either the jar file that contains the main class or the main
class name. To pass both a main jar and a main class in that jar, add the jar
to CommonJob.jar_file_uris, and then specify the main class name in main_class.
Corresponds to the JSON property sparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2439

def spark_job
  @spark_job
end
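To make the main-jar versus main-class choice above concrete, a hedged sketch (bucket and class names are hypothetical):

# Option 1: the jar's manifest names the main class.
job.spark_job = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_jar_file_uri: 'gs://my-bucket/app.jar'
)

# Option 2: ship the jar via jar_file_uris and name the class explicitly.
job.spark_job = Google::Apis::DataprocV1beta2::SparkJob.new(
  jar_file_uris: ['gs://my-bucket/app.jar'],
  main_class: 'com.example.App'
)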
#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Corresponds to the JSON property sparkRJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2445

def spark_r_job
  @spark_r_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2451

def spark_sql_job
  @spark_sql_job
end
#step_id ⇒ String
Required. The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-); it cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
Corresponds to the JSON property stepId
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2461

def step_id
  @step_id
end
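For concreteness, a hedged sketch of ids measured against these rules (all ids hypothetical):

job.step_id = 'daily-etl_v2'  # valid: 12 characters, begins and ends with a letter or digit
# '-etl' would be rejected (leading hyphen); 'ab' is too short (fewer than 3 characters).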
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2468

def update!(**args)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @prerequisite_step_ids = args[:prerequisite_step_ids] if args.key?(:prerequisite_step_ids)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @step_id = args[:step_id] if args.key?(:step_id)
end
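Because the constructor delegates to update!, the same keywords can repopulate an existing instance. A brief sketch, assuming JobScheduling's max_failures_per_hour field (values hypothetical):

job.update!(
  scheduling: Google::Apis::DataprocV1beta2::JobScheduling.new(max_failures_per_hour: 1),
  labels: { 'env' => 'prod' }
)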