Class: Google::Apis::DataprocV1::Job

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dataproc_v1/classes.rb,
lib/google/apis/dataproc_v1/representations.rb

Overview

A Dataproc job resource.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ Job

Returns a new instance of Job.



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2435

def initialize(**args)
   update!(**args)
end
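
The constructor simply forwards its keyword arguments to #update!, so any of the attributes documented below can be supplied by its snake_case name. A minimal, hypothetical sketch (the job ID, cluster name, and labels are placeholders; nothing is submitted to the service here):

require "google/apis/dataproc_v1"

# Build a Job in memory from plain keyword arguments.
job = Google::Apis::DataprocV1::Job.new(
  reference: Google::Apis::DataprocV1::JobReference.new(job_id: "word-count-001"),
  placement: Google::Apis::DataprocV1::JobPlacement.new(cluster_name: "example-cluster"),
  labels:    { "env" => "dev" }
)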

Instance Attribute Details

#done ⇒ Boolean Also known as: done?

Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field will indicate if it was successful, failed, or cancelled. Corresponds to the JSON property done

Returns:

  • (Boolean)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2306

def done
  @done
end
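
A short usage sketch, assuming job is a Job instance previously fetched from the Dataproc API (the output is illustrative only):

if job.done?
  # status is a JobStatus; its state field reports the terminal state
  # (e.g. DONE, ERROR, or CANCELLED).
  puts "Job finished with state: #{job.status.state}"
else
  puts "Job is still in progress"
end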

#driver_control_files_uri ⇒ String

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri. Corresponds to the JSON property driverControlFilesUri

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2314

def driver_control_files_uri
  @driver_control_files_uri
end

#driver_output_resource_uri ⇒ String

Output only. A URI pointing to the location of the stdout of the job's driver program. Corresponds to the JSON property driverOutputResourceUri

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2320

def driver_output_resource_uri
  @driver_output_resource_uri
end

#driver_scheduling_config ⇒ Google::Apis::DataprocV1::DriverSchedulingConfig

Driver scheduling configuration. Corresponds to the JSON property driverSchedulingConfig



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2325

def driver_scheduling_config
  @driver_scheduling_config
end

#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob

A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2333

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1::HiveJob

A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2339

def hive_job
  @hive_job
end

#job_uuid ⇒ String

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time. Corresponds to the JSON property jobUuid

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2346

def job_uuid
  @job_uuid
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 2355

def labels
  @labels
end
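
For illustration only (the keys and values are made up), a labels hash that satisfies these constraints might look like:

job.labels = {
  "team" => "data-eng",   # key and value each 1-63 chars, RFC 1035 compliant
  "env"  => ""            # an empty value is allowed
}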

#pig_job ⇒ Google::Apis::DataprocV1::PigJob

A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2361

def pig_job
  @pig_job
end

#placement ⇒ Google::Apis::DataprocV1::JobPlacement

Dataproc job config. Corresponds to the JSON property placement



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2366

def placement
  @placement
end

#presto_job ⇒ Google::Apis::DataprocV1::PrestoJob

A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. Corresponds to the JSON property prestoJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2374

def presto_job
  @presto_job
end

#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob

A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2380

def pyspark_job
  @pyspark_job
end
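
A hypothetical sketch of attaching a PySpark driver to the job (the gs:// URIs are placeholders, and only a minimal set of PySparkJob fields is shown):

job.pyspark_job = Google::Apis::DataprocV1::PySparkJob.new(
  main_python_file_uri: "gs://example-bucket/word_count.py",
  args: ["gs://example-bucket/input/", "gs://example-bucket/output/"]
)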

#reference ⇒ Google::Apis::DataprocV1::JobReference

Encapsulates the full scoping used to reference a job. Corresponds to the JSON property reference



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2385

def reference
  @reference
end

#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2390

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1::SparkJob

A Dataproc job for running Apache Spark (https://spark.apache.org/) applications on YARN. Corresponds to the JSON property sparkJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2396

def spark_job
  @spark_job
end
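
An equivalent, assumed sketch for a Spark driver packaged as a jar (the class name and URI are placeholders):

job.spark_job = Google::Apis::DataprocV1::SparkJob.new(
  main_class: "com.example.WordCount",
  jar_file_uris: ["gs://example-bucket/word-count.jar"]
)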

#spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob

A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. Corresponds to the JSON property sparkRJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2402

def spark_r_job
  @spark_r_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob

A Dataproc job for running Apache Spark SQL (https://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2408

def spark_sql_job
  @spark_sql_job
end

#status ⇒ Google::Apis::DataprocV1::JobStatus

Dataproc job status. Corresponds to the JSON property status



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2413

def status
  @status
end

#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>

Output only. The previous job status. Corresponds to the JSON property statusHistory



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2418

def status_history
  @status_history
end

#trino_job ⇒ Google::Apis::DataprocV1::TrinoJob

A Dataproc job for running Trino (https://trino.io/) queries. IMPORTANT: The Dataproc Trino Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/trino) must be enabled when the cluster is created to submit a Trino job to the cluster. Corresponds to the JSON property trinoJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2426

def trino_job
  @trino_job
end

#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>

Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. Corresponds to the JSON property yarnApplications



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2433

def yarn_applications
  @yarn_applications
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dataproc_v1/classes.rb', line 2440

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @driver_scheduling_config = args[:driver_scheduling_config] if args.key?(:driver_scheduling_config)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @trino_job = args[:trino_job] if args.key?(:trino_job)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
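
Because each assignment is guarded by args.key?, #update! only touches the attributes that are passed and leaves every other attribute unchanged. A small, assumed usage sketch:

# Patch only the labels and scheduling of an existing Job instance.
job.update!(
  labels: { "env" => "prod" },
  scheduling: Google::Apis::DataprocV1::JobScheduling.new(max_failures_per_hour: 1)
)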