Class: Google::Apis::DataprocV1beta2::Job
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  generated/google/apis/dataproc_v1beta2/classes.rb,
  generated/google/apis/dataproc_v1beta2/representations.rb
Overview
A Dataproc job resource.
Instance Attribute Summary

- #done ⇒ Boolean (also: #done?)
  Output only.
- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
  A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
  A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
  A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
  Dataproc job config.
- #presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
  A Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
  A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1beta2::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
  A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
  A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
  A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1beta2::JobStatus
  Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
  Output only.
- #submitted_by ⇒ String
  Output only.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
  Output only.
Instance Method Summary

- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job.
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1511

def initialize(**args)
  update!(**args)
end
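For illustration only, a minimal sketch of building a Job locally with this class. The nested JobPlacement and SparkJob fields used here (cluster_name, main_class, jar_file_uris, args) are assumptions about those related classes, not definitions from this page.

require 'google/apis/dataproc_v1beta2'

# Keyword arguments are forwarded to #update!, so the keys match the
# attribute names documented below.
job = Google::Apis::DataprocV1beta2::Job.new(
  placement: Google::Apis::DataprocV1beta2::JobPlacement.new(cluster_name: 'example-cluster'),
  spark_job: Google::Apis::DataprocV1beta2::SparkJob.new(
    main_class: 'org.apache.spark.examples.SparkPi',   # assumed SparkJob fields
    jar_file_uris: ['file:///usr/lib/spark/examples/jars/spark-examples.jar'],
    args: ['1000']
  ),
  labels: { 'env' => 'dev' }
)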
Instance Attribute Details
#done ⇒ Boolean Also known as: done?
Output only. Indicates whether the job is completed. If the value is false,
the job is still in progress. If true, the job is completed, and the
status.state field indicates whether it succeeded, failed, or was cancelled.
Corresponds to the JSON property done
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1389

def done
  @done
end
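As a brief, hedged usage sketch: done? exposes the same flag as a predicate, and status.state (an assumption about the JobStatus class, documented separately) reports how a finished job ended.

# fetch_job is a hypothetical placeholder for however the Job was obtained.
job = fetch_job
if job.done?
  puts "Finished with state #{job.status.state}"
else
  puts 'Still in progress'
end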
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which may
be used as part of job setup and handling. If not present, control files may
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1397

def driver_control_files_uri
  @driver_control_files_uri
end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1403

def driver_output_resource_uri
  @driver_output_resource_uri
end
#hadoop_job ⇒ Google::Apis::DataprocV1beta2::HadoopJob
A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1411

def hadoop_job
  @hadoop_job
end
#hive_job ⇒ Google::Apis::DataprocV1beta2::HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on
YARN.
Corresponds to the JSON property hiveJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1417

def hive_job
  @hive_job
end
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that may be
reused over time.
Corresponds to the JSON property jobUuid
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1424

def job_uuid
  @job_uuid
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to
63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
Label values may be empty, but, if present, must contain 1 to 63 characters,
and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
No more than 32 labels can be associated with a job.
Corresponds to the JSON property labels
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1433

def labels
  @labels
end
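A small illustrative sketch of the constraints above (RFC 1035-style keys and values, at most 32 labels per job):

# Keys must be 1-63 characters and RFC 1035-compliant; values may be empty
# but, if present, follow the same rule. No more than 32 entries.
job.labels = {
  'env'  => 'staging',
  'team' => 'data-eng'
}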
#pig_job ⇒ Google::Apis::DataprocV1beta2::PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on
YARN.
Corresponds to the JSON property pigJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1439

def pig_job
  @pig_job
end
#placement ⇒ Google::Apis::DataprocV1beta2::JobPlacement
Dataproc job config.
Corresponds to the JSON property placement
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1444

def placement
  @placement
end
#presto_job ⇒ Google::Apis::DataprocV1beta2::PrestoJob
A Dataproc job for running Presto (https://prestosql.io/) queries.
Corresponds to the JSON property prestoJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1449

def presto_job
  @presto_job
end
#pyspark_job ⇒ Google::Apis::DataprocV1beta2::PySparkJob
A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1455

def pyspark_job
  @pyspark_job
end
#reference ⇒ Google::Apis::DataprocV1beta2::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1460

def reference
  @reference
end
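For illustration, a hedged sketch of scoping the job explicitly; project_id and job_id are assumptions about the JobReference class, which has its own page.

# project_id / job_id are assumed JobReference fields; if job_id is
# left out, the service normally generates one.
job.reference = Google::Apis::DataprocV1beta2::JobReference.new(
  project_id: 'my-project',
  job_id: 'word-count-20210101'
)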
#scheduling ⇒ Google::Apis::DataprocV1beta2::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1465

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1beta2::SparkJob
A Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN. The specification of the main method to call to drive
the job. Specify either the jar file that contains the main class or the main
class name. To pass both a main jar and a main class in that jar, add the jar
to CommonJob.jar_file_uris, and then specify the main class name in main_class.
Corresponds to the JSON property sparkJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1474

def spark_job
  @spark_job
end
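The two ways of naming the entry point described above, sketched side by side. The field names main_jar_file_uri, jar_file_uris, and main_class are assumptions about the SparkJob class (jar_file_uris presumably being how CommonJob.jar_file_uris surfaces on it), not definitions from this page.

# Option A (assumed field): the main jar's manifest names the entry point.
by_jar = Google::Apis::DataprocV1beta2::SparkJob.new(
  main_jar_file_uri: 'gs://my-bucket/jobs/app.jar'
)

# Option B (assumed fields): ship the jar and name the main class explicitly.
by_class = Google::Apis::DataprocV1beta2::SparkJob.new(
  jar_file_uris: ['gs://my-bucket/jobs/app.jar'],
  main_class: 'com.example.App'
)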
#spark_r_job ⇒ Google::Apis::DataprocV1beta2::SparkRJob
A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Corresponds to the JSON property sparkRJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1480

def spark_r_job
  @spark_r_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1beta2::SparkSqlJob
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1486

def spark_sql_job
  @spark_sql_job
end
#status ⇒ Google::Apis::DataprocV1beta2::JobStatus
Dataproc job status.
Corresponds to the JSON property status
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1491

def status
  @status
end
#status_history ⇒ Array<Google::Apis::DataprocV1beta2::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1496

def status_history
  @status_history
end
#submitted_by ⇒ String
Output only. The email address of the user submitting the job. For jobs
submitted on the cluster, the address is username@hostname.
Corresponds to the JSON property submittedBy
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1502

def submitted_by
  @submitted_by
end
#yarn_applications ⇒ Array<Google::Apis::DataprocV1beta2::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta
Feature: This report is available for testing purposes only. It may be changed
before final release.
Corresponds to the JSON property yarnApplications
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1509

def yarn_applications
  @yarn_applications
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'generated/google/apis/dataproc_v1beta2/classes.rb', line 1516

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @submitted_by = args[:submitted_by] if args.key?(:submitted_by)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
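A brief usage note: update! assigns only the attributes whose keys appear in args and leaves everything else untouched, which is why #initialize simply delegates to it.

job = Google::Apis::DataprocV1beta2::Job.new(labels: { 'env' => 'dev' })
job.update!(labels: { 'env' => 'prod' })  # only :labels is present, so nothing else changes
job.labels  # => { "env" => "prod" }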