Class: Google::Apis::DataprocV1beta2::OrderedJob

- Inherits:
  - Object
    - Object
    - Google::Apis::DataprocV1beta2::OrderedJob

- Includes:
  - Core::Hashable, Core::JsonObjectSupport

- Defined in:
  - generated/google/apis/dataproc_v1beta2/classes.rb,
    generated/google/apis/dataproc_v1beta2/representations.rb

Overview

A job executed by the workflow.
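As a usage sketch, an OrderedJob serializes to a JSON object using the camelCase property names documented below (stepId, prerequisiteStepIds, hadoopJob, sparkJob, labels). The field values and the mainClass keys here are hypothetical; only the OrderedJob-level property names are taken from this page:

```ruby
require 'json'

# Hypothetical two-step workflow: the wire-format hashes mirror the
# documented JSON property names of OrderedJob (stepId, hadoopJob, ...).
prepare_step = {
  'stepId'    => 'prepare-data',                      # required, unique in the template
  'hadoopJob' => { 'mainClass' => 'com.example.Prepare' }
}

analyze_step = {
  'stepId'              => 'analyze-data',
  'prerequisiteStepIds' => ['prepare-data'],          # runs only after prepare-data
  'sparkJob'            => { 'mainClass' => 'com.example.Analyze' },
  'labels'              => { 'team' => 'analytics' }  # optional, at most 32 labels
}

workflow_jobs = [prepare_step, analyze_step]
puts JSON.pretty_generate(workflow_jobs)
```

In the generated Ruby client the same structure is built with snake_case keywords (`OrderedJob.new(step_id: ..., spark_job: ...)`) and converted to this wire form on serialization.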
Instance Attribute Summary

- #hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
  A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
  A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #labels ⇒ Hash<String,String>
  Optional. The labels to associate with this job.
- #pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
  A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #prerequisite_step_ids ⇒ Array<String>
  Optional. The optional list of prerequisite job step_ids.
- #pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
  A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
  A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
  A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #step_id ⇒ String
  Required. The step id.
Instance Method Summary

- #initialize(**args) ⇒ OrderedJob (constructor)
  A new instance of OrderedJob.
- #update!(**args) ⇒ Object
  Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details

#initialize(**args) ⇒ OrderedJob

Returns a new instance of OrderedJob.

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1685

    def initialize(**args)
      update!(**args)
    end
Instance Attribute Details
#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob

A Cloud Dataproc job for running Apache Hadoop MapReduce
(https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html)
jobs on Apache Hadoop YARN
(https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1623

    def hadoop_job
      @hadoop_job
    end
  
#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob

A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/)
queries on YARN.
Corresponds to the JSON property hiveJob

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1629

    def hive_job
      @hive_job
    end
  
#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must be between 1
and 63 characters long, and must conform to the following regular expression:
\p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and
must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}.
No more than 32 labels can be associated with a given job.
Corresponds to the JSON property labels

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1638

    def labels
      @labels
    end
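The pattern shorthand above is easy to misread; one common reading, using the character-class form that Google Cloud label rules are usually written in, can be checked locally with Ruby's Unicode property classes. The expansion and the `valid_labels?` helper are our interpretation, not part of the gem:

```ruby
# Our expansion of the documented shorthand: keys start with a lowercase or
# other letter and continue with letters, digits, '_' or '-' (1-63 chars);
# values are 1-63 chars from the same extended class; at most 32 labels.
LABEL_KEY   = /\A[\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}\z/
LABEL_VALUE = /\A[\p{Ll}\p{Lo}\p{N}_-]{1,63}\z/

def valid_labels?(labels)
  labels.size <= 32 &&
    labels.all? { |k, v| k.match?(LABEL_KEY) && v.match?(LABEL_VALUE) }
end
```

For example, `valid_labels?('cost-center' => 'ml-infra')` passes, while a key starting with a digit or an empty value fails.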
  
#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob

A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries
on YARN.
Corresponds to the JSON property pigJob

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1644

    def pig_job
      @pig_job
    end
  
#prerequisite_step_ids ⇒ Array<String>

Optional. The optional list of prerequisite job step_ids. If not specified,
the job will start at the beginning of the workflow.
Corresponds to the JSON property prerequisiteStepIds

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1650

    def prerequisite_step_ids
      @prerequisite_step_ids
    end
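These prerequisite lists define a dependency graph over the workflow's steps: a step runs once every step it names has completed. A small stand-alone sketch (the step names are hypothetical) of how such an ordering resolves:

```ruby
# Map of step_id => prerequisite step_ids, as a workflow template declares them.
steps = {
  'ingest' => [],                    # no prerequisites: starts first
  'clean'  => ['ingest'],
  'report' => ['clean', 'ingest']
}

# Resolve a valid execution order: a step is runnable once all of its
# prerequisites already appear in the order.
order = []
until order.size == steps.size
  runnable = steps.keys.find { |s| !order.include?(s) && (steps[s] - order).empty? }
  raise 'cycle in prerequisite_step_ids' unless runnable
  order << runnable
end
# order => ["ingest", "clean", "report"]
```

The service performs this scheduling itself; the sketch only illustrates what the field expresses.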
  
#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob

A Cloud Dataproc job for running Apache PySpark
(https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
applications on YARN.
Corresponds to the JSON property pysparkJob

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1656

    def pyspark_job
      @pyspark_job
    end
  
#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling

Job scheduling options.
Corresponds to the JSON property scheduling

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1661

    def scheduling
      @scheduling
    end
  
#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob

A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN.
Corresponds to the JSON property sparkJob

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1667

    def spark_job
      @spark_job
    end
  
#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob

A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1673

    def spark_sql_job
      @spark_sql_job
    end
  
#step_id ⇒ String

Required. The step id. The id must be unique among all jobs within the
template. The step id is used as a prefix for the job id, as the job's
goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of
other steps. The id must contain only letters (a-z, A-Z), numbers (0-9),
underscores (_), and hyphens (-). It cannot begin or end with an underscore or
hyphen, and must consist of between 3 and 50 characters.
Corresponds to the JSON property stepId

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1683

    def step_id
      @step_id
    end
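The step_id rules above translate directly into a single anchored regular expression; a local pre-flight check can be sketched as follows (the `valid_step_id?` helper is ours, not part of the gem; the service performs its own validation):

```ruby
# Documented rules: letters, digits, '_' and '-' only; must not begin or end
# with '_' or '-'; 3-50 characters total (first + 1..48 middle + last).
STEP_ID = /\A[A-Za-z0-9][A-Za-z0-9_-]{1,48}[A-Za-z0-9]\z/

def valid_step_id?(id)
  !!(id =~ STEP_ID)
end
```

For example, `valid_step_id?('prepare-data')` is true, while ids that are too short, too long, or hyphen-edged are rejected.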
  
Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object.

    # File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1690

    def update!(**args)
      @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
      @hive_job = args[:hive_job] if args.key?(:hive_job)
      @labels = args[:labels] if args.key?(:labels)
      @pig_job = args[:pig_job] if args.key?(:pig_job)
      @prerequisite_step_ids = args[:prerequisite_step_ids] if args.key?(:prerequisite_step_ids)
      @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
      @scheduling = args[:scheduling] if args.key?(:scheduling)
      @spark_job = args[:spark_job] if args.key?(:spark_job)
      @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
      @step_id = args[:step_id] if args.key?(:step_id)
    end
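The initialize/update! pair follows a simple keyword-splat pattern: each keyword is copied into its instance variable only when explicitly passed, so update! can patch a subset of fields without clearing the rest. A minimal stand-alone sketch of the same idea (the MiniJob class is ours, not part of the gem):

```ruby
# Mimics the generated classes: args.key? guards mean an omitted keyword
# leaves the corresponding attribute untouched rather than nilling it.
class MiniJob
  attr_reader :step_id, :labels

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @step_id = args[:step_id] if args.key?(:step_id)
    @labels  = args[:labels]  if args.key?(:labels)
  end
end

job = MiniJob.new(step_id: 'prepare-data', labels: { 'team' => 'etl' })
job.update!(labels: { 'team' => 'analytics' })  # step_id is left untouched
```

This is why update! is safe to call repeatedly with partial argument sets, as the constructor itself does.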