Class: Google::Apis::DataprocV1beta2::OrderedJob

Inherits: Object
Includes: Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/dataproc_v1beta2/classes.rb,
generated/google/apis/dataproc_v1beta2/representations.rb

Overview

A job executed by the workflow.

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ OrderedJob

Returns a new instance of OrderedJob.



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2145

def initialize(**args)
   update!(**args)
end
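The constructor simply forwards its keyword arguments to update!, so any of the attributes below can be set at construction time. A minimal sketch of that pattern, using an illustrative stand-in class rather than the real generated one:

```ruby
# Illustrative stand-in (not the real OrderedJob): the constructor forwards
# keyword arguments to update!, which assigns only the keys actually passed.
class MiniOrderedJob
  attr_accessor :step_id, :labels

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @step_id = args[:step_id] if args.key?(:step_id)
    @labels  = args[:labels]  if args.key?(:labels)
  end
end

job = MiniOrderedJob.new(step_id: 'prepare-data', labels: { 'env' => 'test' })
puts job.step_id   # prints "prepare-data"
```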

Instance Attribute Details

#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob

A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2080

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob

A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2086

def hive_job
  @hive_job
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression [\p{Ll}\p{Lo}\p{N}_-]{0,63}. No more than 32 labels can be associated with a given job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2095

def labels
  @labels
end
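A hypothetical client-side validator (not part of the generated library) that takes the documented label patterns at face value can catch bad labels before the API rejects them:

```ruby
# Hypothetical helper, not a library API. The regexes mirror the documented
# constraints: keys start with a lowercase letter (\p{Ll}/\p{Lo}) followed by
# up to 62 more; values are up to 63 letters, digits, underscores, or hyphens.
LABEL_KEY_PATTERN   = /\A[\p{Ll}\p{Lo}][\p{Ll}\p{Lo}]{0,62}\z/
LABEL_VALUE_PATTERN = /\A[\p{Ll}\p{Lo}\p{N}_-]{0,63}\z/

def valid_label?(key, value)
  LABEL_KEY_PATTERN.match?(key) && LABEL_VALUE_PATTERN.match?(value)
end

valid_label?('env', 'test-1')   # => true
valid_label?('Env', 'test')     # => false (keys must be lowercase)
```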

#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob

A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2101

def pig_job
  @pig_job
end

#prerequisite_step_ids ⇒ Array<String>

Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow. Corresponds to the JSON property prerequisiteStepIds

Returns:

  • (Array<String>)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2107

def prerequisite_step_ids
  @prerequisite_step_ids
end
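Prerequisites define a dependency graph over the template's steps; conceptually, the workflow runs each job only after all of its prerequisite steps have completed. A sketch of how such an ordering can be resolved with a topological sort (hypothetical step ids, using Ruby's standard tsort library, not a client API):

```ruby
require 'tsort'

# Each step id maps to its prerequisite_step_ids (hypothetical example data).
steps = {
  'ingest'    => [],
  'transform' => ['ingest'],
  'report'    => ['transform', 'ingest']
}

# A Hash subclass wired into TSort: children of a node are its prerequisites,
# so the sorted output lists prerequisites before the steps that need them.
class StepGraph < Hash
  include TSort
  alias tsort_each_node each_key
  def tsort_each_child(node, &block)
    fetch(node).each(&block)
  end
end

order = StepGraph[steps].tsort
# => ["ingest", "transform", "report"]
```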

#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob

A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2113

def pyspark_job
  @pyspark_job
end

#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2118

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob

A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. The specification of the main method to call to drive the job. Specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to CommonJob.jar_file_uris, and then specify the main class name in main_class. Corresponds to the JSON property sparkJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2127

def spark_job
  @spark_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob

A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2133

def spark_sql_job
  @spark_sql_job
end

#step_id ⇒ String

Required. The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters. Corresponds to the JSON property stepId

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2143

def step_id
  @step_id
end
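The step_id rules above translate directly into a single anchored regex. A hypothetical client-side check (the library itself does not provide one; validation happens server-side):

```ruby
# Mirrors the documented rules: letters, digits, underscores, and hyphens
# only; cannot begin or end with '_' or '-'; total length 3-50 characters
# (first char + 1-48 middle chars + last char).
STEP_ID_PATTERN = /\A[a-zA-Z0-9][a-zA-Z0-9_-]{1,48}[a-zA-Z0-9]\z/

def valid_step_id?(id)
  STEP_ID_PATTERN.match?(id)
end

valid_step_id?('job-1')       # => true
valid_step_id?('-bad-start')  # => false (leading hyphen)
valid_step_id?('ab')          # => false (shorter than 3 characters)
```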

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 2150

def update!(**args)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @prerequisite_step_ids = args[:prerequisite_step_ids] if args.key?(:prerequisite_step_ids)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @step_id = args[:step_id] if args.key?(:step_id)
end
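Each assignment in update! is guarded by args.key? rather than a truthiness check, so omitting a key leaves the attribute untouched while passing an explicit nil clears it. A small stand-in (not the real class) demonstrating that distinction:

```ruby
# Illustrative stand-in for the args.key? guard used by update!.
class MiniJob
  attr_reader :step_id

  def update!(**args)
    # Assign only when the key was passed, even if the value is nil.
    @step_id = args[:step_id] if args.key?(:step_id)
  end
end

job = MiniJob.new
job.update!(step_id: 'extract')   # step_id is now "extract"
job.update!                       # no :step_id key: value preserved
job.update!(step_id: nil)         # explicit nil: value cleared
```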