Class: Google::Apis::DataprocV1beta2::OrderedJob

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/dataproc_v1beta2/classes.rb,
generated/google/apis/dataproc_v1beta2/representations.rb

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ OrderedJob

Returns a new instance of OrderedJob.



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1560

def initialize(**args)
   update!(**args)
end
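
For illustration, a minimal sketch of constructing an OrderedJob. The keyword arguments map directly onto the attributes documented below; the step id, labels, and SparkJob settings shown here are hypothetical values, not part of the generated documentation.

require 'google/apis/dataproc_v1beta2'

# Hypothetical step: run a Spark jar as one step of a workflow template.
spark_step = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'analyze-data',
  labels: { 'team' => 'analytics' },
  spark_job: Google::Apis::DataprocV1beta2::SparkJob.new(
    main_class: 'com.example.Analyze',
    jar_file_uris: ['gs://my-bucket/analyze.jar']
  )
)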

Instance Attribute Details

#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob

A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1500

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob

A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1506

def hive_job
  @hive_job
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}. No more than 64 labels can be associated with a given job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1515

def labels
  @labels
end
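
As a quick illustration of the constraints above, a labels hash whose keys and values stay within the documented character set and length limits (the label names are hypothetical):

ordered_job = Google::Apis::DataprocV1beta2::OrderedJob.new(step_id: 'etl-step')
# Hypothetical labels: lowercase keys; values limited to lowercase letters,
# digits, underscores, and hyphens.
ordered_job.labels = {
  'environment' => 'production',
  'pipeline'    => 'nightly-etl'
}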

#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob

A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1521

def pig_job
  @pig_job
end

#prerequisite_step_ids ⇒ Array<String>

Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow. Corresponds to the JSON property prerequisiteStepIds

Returns:

  • (Array<String>)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1527

def prerequisite_step_ids
  @prerequisite_step_ids
end
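
A sketch of how step ordering works: the second job lists the first job's step_id as a prerequisite, so it runs only after that step completes. The step ids are hypothetical and the job payloads are omitted for brevity.

prepare = Google::Apis::DataprocV1beta2::OrderedJob.new(step_id: 'prepare')
analyze = Google::Apis::DataprocV1beta2::OrderedJob.new(
  step_id: 'analyze',
  prerequisite_step_ids: ['prepare'] # waits for the 'prepare' step
)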

#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob

A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1533

def pyspark_job
  @pyspark_job
end

#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling

Job scheduling options. Beta Feature: These options are available for testing purposes only. They may be changed before final release. Corresponds to the JSON property scheduling



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1539

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob

A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. Corresponds to the JSON property sparkJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1545

def spark_job
  @spark_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob

A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1551

def spark_sql_job
  @spark_sql_job
end

#step_id ⇒ String

Required. The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job workflow-step-id label, and in the prerequisite_step_ids field of other steps. Corresponds to the JSON property stepId

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1558

def step_id
  @step_id
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1565

def update!(**args)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @prerequisite_step_ids = args[:prerequisite_step_ids] if args.key?(:prerequisite_step_ids)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @step_id = args[:step_id] if args.key?(:step_id)
end
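
A brief usage sketch: update! accepts the same keyword arguments as the constructor and overwrites only the attributes that are passed, leaving the rest untouched (the values are hypothetical).

job = Google::Apis::DataprocV1beta2::OrderedJob.new(step_id: 'analyze')
# Only :labels is replaced; step_id keeps its original value.
job.update!(labels: { 'owner' => 'data-eng' })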