Class: Google::Apis::DataprocV1beta2::Job

Inherits: Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
generated/google/apis/dataproc_v1beta2/classes.rb,
generated/google/apis/dataproc_v1beta2/representations.rb

Overview

A Cloud Dataproc job resource.

Instance Attribute Summary

Instance Method Summary

Methods included from Core::JsonObjectSupport

#to_json

Methods included from Core::Hashable

process_value, #to_h

Constructor Details

#initialize(**args) ⇒ Job

Returns a new instance of Job



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1186

def initialize(**args)
  update!(**args)
end
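Since the constructor simply forwards its keyword arguments to #update!, a Job can be populated in a single call. The following is an illustrative sketch, not generated documentation: the bucket paths and IDs are hypothetical, and the HadoopJob fields used (main_jar_file_uri, args) come from the companion HadoopJob class.

require 'google/apis/dataproc_v1beta2'

# Build a Job with a user-settable reference and exactly one job-type field.
job = Google::Apis::DataprocV1beta2::Job.new(
  reference: Google::Apis::DataprocV1beta2::JobReference.new(job_id: 'word-count'), # hypothetical ID
  hadoop_job: Google::Apis::DataprocV1beta2::HadoopJob.new(
    main_jar_file_uri: 'gs://my-bucket/word-count.jar',       # hypothetical URI
    args: ['gs://my-bucket/input/', 'gs://my-bucket/output/'] # hypothetical args
  ),
  labels: { 'env' => 'test' }
)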

Instance Attribute Details

#driver_control_files_uri ⇒ String

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri. Corresponds to the JSON property driverControlFilesUri

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1080

def driver_control_files_uri
  @driver_control_files_uri
end

#driver_output_resource_uri ⇒ String

Output only. A URI pointing to the location of the stdout of the job's driver program. Corresponds to the JSON property driverOutputResourceUri

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1086

def driver_output_resource_uri
  @driver_output_resource_uri
end

#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob

A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1094

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob

A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1100

def hive_job
  @hive_job
end

#job_uuid ⇒ String

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time. Corresponds to the JSON property jobUuid

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1107

def job_uuid
  @job_uuid
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1116

def labels
  @labels
end
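As an illustrative sketch (not part of the generated documentation), a hash that satisfies these constraints:

# Keys: 1-63 RFC 1035-compliant characters; values may be empty; at most 32 entries.
job.labels = {
  'env'         => 'production',
  'cost-center' => ''            # empty value is allowed
}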

#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob

A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1122

def pig_job
  @pig_job
end

#placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement

Cloud Dataproc job config. Corresponds to the JSON property placement



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1127

def placement
  @placement
end

#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob

A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1133

def pyspark_job
  @pyspark_job
end

#reference ⇒ Google::Apis::DataprocV1beta2::JobReference

Encapsulates the full scoping used to reference a job. Corresponds to the JSON property reference



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1138

def reference
  @reference
end
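As a hedged sketch (the project_id and job_id fields are taken from the companion JobReference class; the values are hypothetical):

job.reference = Google::Apis::DataprocV1beta2::JobReference.new(
  project_id: 'my-project', # hypothetical project
  job_id: 'nightly-etl'     # user-settable and reusable, unlike job_uuid
)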

#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1143

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob

A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. Corresponds to the JSON property sparkJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1149

def spark_job
  @spark_job
end

#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob

A Cloud Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. Corresponds to the JSON property sparkRJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1155

def spark_r_job
  @spark_r_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob

A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1161

def spark_sql_job
  @spark_sql_job
end

#status ⇒ Google::Apis::DataprocV1beta2::JobStatus

Cloud Dataproc job status. Corresponds to the JSON property status



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1166

def status
  @status
end

#status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>

Output only. The previous job status. Corresponds to the JSON property statusHistory



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1171

def status_history
  @status_history
end
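For illustration, a sketch of reading status and status_history from a fetched job. Treat the getter name get_project_region_job as an assumption about the generated method for projects.regions.jobs.get; the identifiers are hypothetical, and the state and state_start_time fields come from the companion JobStatus class.

require 'google/apis/dataproc_v1beta2'

service = Google::Apis::DataprocV1beta2::DataprocService.new
# (authorization setup omitted)
job = service.get_project_region_job('my-project', 'us-central1', 'word-count')
puts job.status.state                 # current state, e.g. "RUNNING"
Array(job.status_history).each do |s| # prior JobStatus entries
  puts "#{s.state} at #{s.state_start_time}"
end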

#submitted_by ⇒ String

Output only. The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname. Corresponds to the JSON property submittedBy

Returns:

  • (String)


# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1177

def submitted_by
  @submitted_by
end

#yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>

Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. Corresponds to the JSON property yarnApplications



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1184

def yarn_applications
  @yarn_applications
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1191

def update!(**args)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @submitted_by = args[:submitted_by] if args.key?(:submitted_by)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
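Because each assignment is guarded by args.key?, update! only touches the attributes that were explicitly passed; everything else keeps its current value. An illustrative sketch with hypothetical values:

job = Google::Apis::DataprocV1beta2::Job.new(labels: { 'env' => 'test' })
job.update!(job_uuid: 'abc-123') # only @job_uuid is assigned
job.job_uuid # => "abc-123"
job.labels   # => {"env"=>"test"}  (unchanged)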