Class: Google::Apis::DataprocV1::Job
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  generated/google/apis/dataproc_v1/classes.rb,
  generated/google/apis/dataproc_v1/representations.rb
Overview
A Dataproc job resource.
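For orientation, here is a minimal sketch of building and submitting a Job through the generated DataprocService client. It assumes the googleauth gem with application-default credentials and the generated submit_job method; the project, region, cluster name, and jar URI are illustrative placeholders.

  require 'google/apis/dataproc_v1'
  require 'googleauth'

  dataproc = Google::Apis::DataprocV1::DataprocService.new
  dataproc.authorization = Google::Auth.get_application_default(
    ['https://www.googleapis.com/auth/cloud-platform']
  )

  # A Job wraps exactly one job type; here a SparkJob, placed on a named cluster.
  job = Google::Apis::DataprocV1::Job.new(
    placement: Google::Apis::DataprocV1::JobPlacement.new(cluster_name: 'example-cluster'),
    spark_job: Google::Apis::DataprocV1::SparkJob.new(
      main_class: 'org.apache.spark.examples.SparkPi',
      jar_file_uris: ['file:///usr/lib/spark/examples/jars/spark-examples.jar'],
      args: ['1000']
    )
  )

  request = Google::Apis::DataprocV1::SubmitJobRequest.new(job: job)
  submitted = dataproc.submit_job('example-project', 'us-central1', request)
  puts submitted.reference.job_id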
Instance Attribute Summary
- #done ⇒ Boolean (also: #done?)
  Output only.
- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
  A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1::HiveJob
  A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1::PigJob
  A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1::JobPlacement
  Dataproc job config.
- #presto_job ⇒ Google::Apis::DataprocV1::PrestoJob
  A Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
  A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1::SparkJob
  A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob
  A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
  A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1::JobStatus
  Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
  Output only.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
  Output only.
Instance Method Summary
- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job.
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1463

  def initialize(**args)
    update!(**args)
  end
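Because the constructor simply forwards its keyword arguments to #update!, any of the attributes listed above can be set at construction time. A small sketch (the label values and bucket path are illustrative placeholders):

  job = Google::Apis::DataprocV1::Job.new(
    labels: { 'env' => 'dev' },
    pig_job: Google::Apis::DataprocV1::PigJob.new(
      query_file_uri: 'gs://example-bucket/script.pig'
    )
  )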
Instance Attribute Details
#done ⇒ Boolean Also known as: done?
Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field indicates whether it was successful, failed, or cancelled.
Corresponds to the JSON property done
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1350

  def done
    @done
  end
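In practice done/done? drives polling loops. A hedged sketch, assuming a DataprocService client named dataproc and the generated get_job method; the project, region, and job ID are placeholders:

  job = dataproc.get_job('example-project', 'us-central1', 'example-job-id')
  until job.done?
    sleep 10  # re-fetch until the API reports completion
    job = dataproc.get_job('example-project', 'us-central1', 'example-job-id')
  end
  puts "Job finished in state #{job.status.state}"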
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which may
be used as part of job setup and handling. If not present, control files may
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1358

  def driver_control_files_uri
    @driver_control_files_uri
  end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1364

  def driver_output_resource_uri
    @driver_output_resource_uri
  end
#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1372

  def hadoop_job
    @hadoop_job
  end
#hive_job ⇒ Google::Apis::DataprocV1::HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on
YARN.
Corresponds to the JSON property hiveJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1378

  def hive_job
    @hive_job
  end
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that may be
reused over time.
Corresponds to the JSON property jobUuid
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1385

  def job_uuid
    @job_uuid
  end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
Corresponds to the JSON property labels
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1394

  def labels
    @labels
  end
#pig_job ⇒ Google::Apis::DataprocV1::PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on
YARN.
Corresponds to the JSON property pigJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1400

  def pig_job
    @pig_job
  end
#placement ⇒ Google::Apis::DataprocV1::JobPlacement
Dataproc job config.
Corresponds to the JSON property placement
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1405

  def placement
    @placement
  end
#presto_job ⇒ Google::Apis::DataprocV1::PrestoJob
A Dataproc job for running Presto (https://prestosql.io/) queries.
Corresponds to the JSON property prestoJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1410

  def presto_job
    @presto_job
  end
#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
Corresponds to the JSON property pysparkJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1416

  def pyspark_job
    @pyspark_job
  end
#reference ⇒ Google::Apis::DataprocV1::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1421

  def reference
    @reference
  end
#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1426

  def scheduling
    @scheduling
  end
#spark_job ⇒ Google::Apis::DataprocV1::SparkJob
A Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN.
Corresponds to the JSON property sparkJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1432

  def spark_job
    @spark_job
  end
#spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob
A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Corresponds to the JSON property sparkRJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1438

  def spark_r_job
    @spark_r_job
  end
#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1444

  def spark_sql_job
    @spark_sql_job
  end
#status ⇒ Google::Apis::DataprocV1::JobStatus
Dataproc job status.
Corresponds to the JSON property status
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1449

  def status
    @status
  end
#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1454

  def status_history
    @status_history
  end
#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Corresponds to the JSON property yarnApplications
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1461

  def yarn_applications
    @yarn_applications
  end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
  # File 'generated/google/apis/dataproc_v1/classes.rb', line 1468

  def update!(**args)
    @done = args[:done] if args.key?(:done)
    @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
    @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
    @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
    @hive_job = args[:hive_job] if args.key?(:hive_job)
    @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
    @labels = args[:labels] if args.key?(:labels)
    @pig_job = args[:pig_job] if args.key?(:pig_job)
    @placement = args[:placement] if args.key?(:placement)
    @presto_job = args[:presto_job] if args.key?(:presto_job)
    @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
    @reference = args[:reference] if args.key?(:reference)
    @scheduling = args[:scheduling] if args.key?(:scheduling)
    @spark_job = args[:spark_job] if args.key?(:spark_job)
    @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
    @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
    @status = args[:status] if args.key?(:status)
    @status_history = args[:status_history] if args.key?(:status_history)
    @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
  end
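Because each assignment is guarded by args.key?, update! only overwrites attributes whose keys are actually passed; everything else keeps its current value. For example:

  job.update!(labels: { 'env' => 'staging' })  # placement, spark_job, etc. are left untouched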