Class: Google::Apis::DataprocV1::Job

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dataproc_v1/classes.rb,
lib/google/apis/dataproc_v1/representations.rb

Overview

A Dataproc job resource.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ Job

Returns a new instance of Job.



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2461

def initialize(**args)
  update!(**args)
end
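
Because #initialize simply forwards its keyword arguments to #update!, a Job can be populated at construction time using the attribute names documented below. A minimal sketch, assuming the usual require path for the generated client; the labels and the PySparkJob field are illustrative, not values taken from this page:

require "google/apis/dataproc_v1"

# Keyword arguments map one-to-one onto the attributes listed under
# Instance Attribute Details; keys that are not attributes are ignored by update!.
job = Google::Apis::DataprocV1::Job.new(
  labels: { "team" => "analytics", "env" => "dev" },            # hypothetical labels
  pyspark_job: Google::Apis::DataprocV1::PySparkJob.new(
    main_python_file_uri: "gs://example-bucket/wordcount.py"    # assumed PySparkJob field
  )
)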

Instance Attribute Details

#done ⇒ Boolean
Also known as: done?

Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field will indicate whether it was successful, failed, or cancelled. Corresponds to the JSON property done

Returns:

  • (Boolean)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2332

def done
  @done
end
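
For example, on a Job fetched back from the service, done? and status can be read together. A sketch assuming the job was obtained from a DataprocService jobs.get call and that JobStatus exposes a state reader (documented on the JobStatus page, not here):

# 'job' is assumed to be a Job returned by the service; done is output only
# and is never set by clients.
if job.done?
  puts "Job finished; final state: #{job.status&.state}"
else
  puts "Job still in progress"
end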

#driver_control_files_uri ⇒ String

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri. Corresponds to the JSON property driverControlFilesUri

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2340

def driver_control_files_uri
  @driver_control_files_uri
end

#driver_output_resource_uri ⇒ String

Output only. A URI pointing to the location of the stdout of the job's driver program. Corresponds to the JSON property driverOutputResourceUri

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2346

def driver_output_resource_uri
  @driver_output_resource_uri
end

#driver_scheduling_config ⇒ Google::Apis::DataprocV1::DriverSchedulingConfig

Driver scheduling configuration. Corresponds to the JSON property driverSchedulingConfig



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2351

def driver_scheduling_config
  @driver_scheduling_config
end

#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob

A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2359

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1::HiveJob

A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2365

def hive_job
  @hive_job
end

#job_uuid ⇒ String

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time. Corresponds to the JSON property jobUuid

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2372

def job_uuid
  @job_uuid
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2381

def labels
  @labels
end
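
A short sketch of labels that satisfy these constraints; the keys and values are made up for illustration, and a matching labels writer is assumed to accompany this reader:

# Keys and non-empty values must each be 1 to 63 characters and conform to
# RFC 1035; a job may carry at most 32 labels.
job.labels = {
  "environment" => "production",
  "cost-center" => "c123",
  "experimental" => ""            # empty values are allowed
}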

#pig_job ⇒ Google::Apis::DataprocV1::PigJob

A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2387

def pig_job
  @pig_job
end

#placement ⇒ Google::Apis::DataprocV1::JobPlacement

Dataproc job config. Corresponds to the JSON property placement



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2392

def placement
  @placement
end

#presto_job ⇒ Google::Apis::DataprocV1::PrestoJob

A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. Corresponds to the JSON property prestoJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2400

def presto_job
  @presto_job
end

#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob

A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2406

def pyspark_job
  @pyspark_job
end

#reference ⇒ Google::Apis::DataprocV1::JobReference

Encapsulates the full scoping used to reference a job. Corresponds to the JSON property reference



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2411

def reference
  @reference
end
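
A sketch of setting a reference, assuming JobReference exposes project_id and job_id (its fields are documented on the JobReference page, not here); both values are hypothetical:

# job_id is the user-settable identifier contrasted with #job_uuid above;
# it may be reused over time, unlike the server-assigned UUID.
job.reference = Google::Apis::DataprocV1::JobReference.new(
  project_id: "my-project-id",
  job_id: "daily-ingest-0001"
)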

#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2416

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1::SparkJob

A Dataproc job for running Apache Spark (https://spark.apache.org/) applications on YARN. Corresponds to the JSON property sparkJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2422

def spark_job
  @spark_job
end

#spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob

A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. Corresponds to the JSON property sparkRJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2428

def spark_r_job
  @spark_r_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob

A Dataproc job for running Apache Spark SQL (https://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2434

def spark_sql_job
  @spark_sql_job
end

#status ⇒ Google::Apis::DataprocV1::JobStatus

Dataproc job status. Corresponds to the JSON property status



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2439

def status
  @status
end

#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>

Output only. The previous job status. Corresponds to the JSON property statusHistory



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2444

def status_history
  @status_history
end

#trino_job ⇒ Google::Apis::DataprocV1::TrinoJob

A Dataproc job for running Trino (https://trino.io/) queries. IMPORTANT: The Dataproc Trino Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/trino) must be enabled when the cluster is created to submit a Trino job to the cluster. Corresponds to the JSON property trinoJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2452

def trino_job
  @trino_job
end

#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>

Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. Corresponds to the JSON property yarnApplications



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2459

def yarn_applications
  @yarn_applications
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2466

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @driver_scheduling_config = args[:driver_scheduling_config] if args.key?(:driver_scheduling_config)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @trino_job = args[:trino_job] if args.key?(:trino_job)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
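
Because each assignment is guarded by args.key?, #update! only overwrites the attributes that are actually passed; everything else keeps its current value. A small sketch using only classes referenced on this page:

job = Google::Apis::DataprocV1::Job.new(labels: { "env" => "dev" })
job.update!(scheduling: Google::Apis::DataprocV1::JobScheduling.new)

job.labels      # => { "env" => "dev" }  (unchanged; :labels was not passed)
job.scheduling  # => the new JobScheduling instance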