Class: Google::Apis::DataprocV1beta2::Job

Inherits: Object

Includes: Core::Hashable, Core::JsonObjectSupport

Defined in:
  generated/google/apis/dataproc_v1beta2/classes.rb,
  generated/google/apis/dataproc_v1beta2/representations.rb
Overview
A Cloud Dataproc job resource.
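As a quick orientation, here is a minimal sketch of building this resource by hand with the generated classes. The Hive query, label values, and the choice of job type are illustrative assumptions, not a prescribed workflow:

# Build a Job resource carrying a Hive query payload (any one of the
# job-type attributes documented below could be set instead).
job = Google::Apis::DataprocV1beta2::Job.new(
  hive_job: Google::Apis::DataprocV1beta2::HiveJob.new(
    query_list: Google::Apis::DataprocV1beta2::QueryList.new(
      queries: ['SHOW TABLES;'] # hypothetical query
    )
  ),
  labels: { 'env' => 'dev' }    # hypothetical label
)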
Instance Attribute Summary

- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
  A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
  A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
  A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
  Cloud Dataproc job config.
- #presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
  A Cloud Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
  A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1beta2::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
  A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
  A Cloud Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
  A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1beta2::JobStatus
  Cloud Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
  Output only.
- #submitted_by ⇒ String
  Output only.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
  Output only.
Instance Method Summary

- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job.

# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1455

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which may
be used as part of job setup and handling. If not present, control files may
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1341

def driver_control_files_uri
  @driver_control_files_uri
end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1347

def driver_output_resource_uri
  @driver_output_resource_uri
end
#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1355

def hadoop_job
  @hadoop_job
end
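A hedged sketch of populating this attribute; the main class, jar path, and arguments below are illustrative assumptions (the examples jar ships with Dataproc images, but verify the path on your cluster):

job.hadoop_job = Google::Apis::DataprocV1beta2::HadoopJob.new(
  main_class: 'org.apache.hadoop.examples.WordCount',  # hypothetical main class
  jar_file_uris: ['file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'],
  args: ['gs://my-bucket/input/', 'gs://my-bucket/output/']  # hypothetical args
)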
#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/)
queries on YARN.
Corresponds to the JSON property hiveJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1361

def hive_job
  @hive_job
end
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that may be
reused over time.
Corresponds to the JSON property jobUuid
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1368

def job_uuid
  @job_uuid
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
Corresponds to the JSON property labels
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1377

def labels
  @labels
end
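Given those constraints, a small example of a conforming labels hash (the keys and values are illustrative):

# Keys and values: 1-63 characters each, conforming to RFC 1035;
# at most 32 labels per job.
job.labels = {
  'team'        => 'data-eng',  # hypothetical
  'environment' => 'staging'
}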
#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries
on YARN.
Corresponds to the JSON property pigJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1383

def pig_job
  @pig_job
end
#placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
Cloud Dataproc job config.
Corresponds to the JSON property placement
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1388

def placement
  @placement
end
#presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
A Cloud Dataproc job for running Presto (https://prestosql.io/) queries.
Corresponds to the JSON property prestoJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1393

def presto_job
  @presto_job
end
#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1399

def pyspark_job
  @pyspark_job
end
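A hedged sketch, assuming the generated PySparkJob class exposes main_python_file_uri and args accessors for the corresponding JSON fields; the Cloud Storage paths are illustrative:

job.pyspark_job = Google::Apis::DataprocV1beta2::PySparkJob.new(
  main_python_file_uri: 'gs://my-bucket/jobs/wordcount.py',  # hypothetical script
  args: ['--input', 'gs://my-bucket/data/']                  # hypothetical args
)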
#reference ⇒ Google::Apis::DataprocV1beta2::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1404

def reference
  @reference
end
#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1409

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN. The specification of the main method to call to drive
the job. Specify either the jar file that contains the main class or the main
class name. To pass both a main jar and a main class in that jar, add the jar
to CommonJob.jar_file_uris, and then specify the main class name in main_class.
Corresponds to the JSON property sparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1418

def spark_job
  @spark_job
end
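To make the main-class-versus-main-jar rule above concrete, a minimal sketch (the bucket, jar, and class names are assumptions):

# Option 1: name the jar that contains the main class.
job.spark_job = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_jar_file_uri: 'gs://my-bucket/my-app.jar'  # hypothetical jar
)

# Option 2: name the main class and pass its jar separately.
job.spark_job = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_class: 'com.example.MyApp',                # hypothetical class
  jar_file_uris: ['gs://my-bucket/my-app.jar']
)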
#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
A Cloud Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Corresponds to the JSON property sparkRJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1424

def spark_r_job
  @spark_r_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
Corresponds to the JSON property sparkSqlJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1430

def spark_sql_job
  @spark_sql_job
end
#status ⇒ Google::Apis::DataprocV1beta2::JobStatus
Cloud Dataproc job status.
Corresponds to the JSON property status
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1435

def status
  @status
end
#status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1440

def status_history
  @status_history
end
#submitted_by ⇒ String
Output only. The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.
Corresponds to the JSON property submittedBy
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1446

def submitted_by
  @submitted_by
end
#yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Corresponds to the JSON property yarnApplications
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1453

def yarn_applications
  @yarn_applications
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.

# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1460

def update!(**args)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @submitted_by = args[:submitted_by] if args.key?(:submitted_by)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
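As the body shows, update! assigns only the keys present in args and leaves every other attribute untouched, so it works for partial in-place updates. A short usage sketch (the values are illustrative):

job.update!(labels: { 'env' => 'prod' })  # only @labels changes
job.update!(
  scheduling: Google::Apis::DataprocV1beta2::JobScheduling.new(
    max_failures_per_hour: 3  # hypothetical retry limit
  )
)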