Class: Google::Apis::DataprocV1beta2::OrderedJob

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dataproc_v1beta2/classes.rb,
lib/google/apis/dataproc_v1beta2/representations.rb

Overview

A job executed by the workflow.
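
As a rough sketch of how this class is used (assuming the keyword-argument initializers this gem generates; bucket paths and arguments are placeholders), an OrderedJob is placed in a WorkflowTemplate's jobs list:

require 'google/apis/dataproc_v1beta2'

dataproc = Google::Apis::DataprocV1beta2

# One step of a workflow: a Hadoop MapReduce job identified by its step_id.
ingest = dataproc::OrderedJob.new(
  step_id: 'ingest',
  hadoop_job: dataproc::HadoopJob.new(
    main_jar_file_uri: 'gs://my-bucket/jars/ingest.jar', # placeholder URI
    args: ['--input', 'gs://my-bucket/raw/']             # placeholder args
  )
)

template = dataproc::WorkflowTemplate.new(jobs: [ingest])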

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ OrderedJob

Returns a new instance of OrderedJob.



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2465

def initialize(**args)
  update!(**args)
end
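
Because #initialize simply forwards to #update!, any of the attributes below can be set by passing its snake_case name as a keyword. A minimal sketch (the values are hypothetical):

job = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'report',
  labels: { 'team' => 'analytics' }
)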

Instance Attribute Details

#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob

A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2386

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob

A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2392

def hive_job
  @hive_job
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}. No more than 32 labels can be associated with a given job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2401

def labels
  @labels
end
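
For illustration, a labels hash consistent with the constraints above (key and value names are hypothetical): the keys are lowercase letters only, and the values stay within the characters the value regex allows.

job.labels = {
  'env'   => 'prod',
  'owner' => 'data-platform'
}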

#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob

A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2407

def pig_job
  @pig_job
end

#prerequisite_step_ids ⇒ Array<String>

Optional. The list of prerequisite job step_ids. If not specified, the job starts at the beginning of the workflow. Corresponds to the JSON property prerequisiteStepIds

Returns:

  • (Array<String>)


# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2413

def prerequisite_step_ids
  @prerequisite_step_ids
end
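
A sketch of step ordering (step ids and the script URI are hypothetical): the 'transform' step below will not start until the step with id 'ingest' completes.

transform = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'transform',
  prerequisite_step_ids: ['ingest'],
  pig_job: Google::Apis::DataprocV1beta2::PigJob.new(
    query_file_uri: 'gs://my-bucket/scripts/transform.pig' # placeholder URI
  )
)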

#presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob

A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. Corresponds to the JSON property prestoJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2421

def presto_job
  @presto_job
end

#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob

A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2427

def pyspark_job
  @pyspark_job
end

#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2432

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob

A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. The job specifies the main method to call to drive it: either the jar file that contains the main class, or the main class name. To pass both a main jar and a main class in that jar, add the jar to CommonJob.jar_file_uris and specify the main class name in main_class, as sketched below. Corresponds to the JSON property sparkJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2441

def spark_job
  @spark_job
end
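
To illustrate the note above about driver specification (the class and jar names are hypothetical): to use a main class that lives inside a jar, list the jar in jar_file_uris and name the class in main_class, rather than also setting main_jar_file_uri.

spark = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_class: 'com.example.Pipeline',                 # hypothetical class
  jar_file_uris: ['gs://my-bucket/jars/pipeline.jar'] # placeholder URI
)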

#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob

A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. Corresponds to the JSON property sparkRJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2447

def spark_r_job
  @spark_r_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob

A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2453

def spark_sql_job
  @spark_sql_job
end

#step_id ⇒ String

Required. The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-), cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters. Corresponds to the JSON property stepId

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2463

def step_id
  @step_id
end
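
The stated id constraints can be expressed as a Ruby regex; this is an informal sketch (the constant name is made up), not something the gem validates client-side: 3-50 characters from [A-Za-z0-9_-], not beginning or ending with an underscore or hyphen.

STEP_ID_PATTERN = /\A[A-Za-z0-9][A-Za-z0-9_-]{1,48}[A-Za-z0-9]\z/

STEP_ID_PATTERN.match?('ingest-raw-data') # => true
STEP_ID_PATTERN.match?('-bad-id')         # => false
STEP_ID_PATTERN.match?('ab')              # => false (too short)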

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dataproc_v1beta2/classes.rb', line 2470

def update!(**args)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @prerequisite_step_ids = args[:prerequisite_step_ids] if args.key?(:prerequisite_step_ids)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @step_id = args[:step_id] if args.key?(:step_id)
end
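
Since update! only assigns the keys present in args, it can modify a subset of properties on an existing instance (the label value here is hypothetical):

job.update!(labels: { 'env' => 'staging' })
job.step_id # other properties are left unchanged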