Class: Google::Apis::DataprocV1::Job

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dataproc_v1/classes.rb,
lib/google/apis/dataproc_v1/representations.rb

Overview

A Dataproc job resource.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ Job

Returns a new instance of Job.



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1763

def initialize(**args)
  update!(**args)
end
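
For example, a new Job can be populated at construction time, since #initialize forwards its keyword arguments to #update!. A minimal sketch using attributes documented on this page (the job id, cluster name, and label values are illustrative only):

require 'google/apis/dataproc_v1'

job = Google::Apis::DataprocV1::Job.new(
  reference: Google::Apis::DataprocV1::JobReference.new(job_id: 'word-count-001'),
  placement: Google::Apis::DataprocV1::JobPlacement.new(cluster_name: 'example-cluster'),
  labels: { 'env' => 'dev' }
)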

Instance Attribute Details

#done ⇒ Boolean

Also known as: done?

Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field indicates whether it succeeded, failed, or was cancelled. Corresponds to the JSON property done

Returns:

  • (Boolean)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 1647

def done
  @done
end
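
Because done is aliased as done?, completion checks read naturally in a conditional. A short sketch, assuming job is a Job previously returned by the API (state is an attribute of JobStatus):

if job.done?
  puts "Finished with state: #{job.status&.state}"
else
  puts 'Job is still in progress'
end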

#driver_control_files_uri ⇒ String

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri. Corresponds to the JSON property driverControlFilesUri

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 1655

def driver_control_files_uri
  @driver_control_files_uri
end

#driver_output_resource_uri ⇒ String

Output only. A URI pointing to the location of the stdout of the job's driver program. Corresponds to the JSON property driverOutputResourceUri

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 1661

def driver_output_resource_uri
  @driver_output_resource_uri
end

#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob

A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). Corresponds to the JSON property hadoopJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1669

def hadoop_job
  @hadoop_job
end

#hive_job ⇒ Google::Apis::DataprocV1::HiveJob

A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. Corresponds to the JSON property hiveJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1675

def hive_job
  @hive_job
end

#job_uuid ⇒ String

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time. Corresponds to the JSON property jobUuid

Returns:

  • (String)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 1682

def job_uuid
  @job_uuid
end

#labels ⇒ Hash<String,String>

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job. Corresponds to the JSON property labels

Returns:

  • (Hash<String,String>)


# File 'lib/google/apis/dataproc_v1/classes.rb', line 1691

def labels
  @labels
end
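
A sketch of setting labels within the documented constraints (the keys and values here are placeholders):

job.labels = {
  'team' => 'data-eng', # keys: 1-63 chars, conforming to RFC 1035
  'env'  => 'dev'       # values: may be empty; if present, 1-63 chars
}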

#pig_job ⇒ Google::Apis::DataprocV1::PigJob

A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. Corresponds to the JSON property pigJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1697

def pig_job
  @pig_job
end

#placement ⇒ Google::Apis::DataprocV1::JobPlacement

Dataproc job config: the cluster placement for this job. Corresponds to the JSON property placement



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1702

def placement
  @placement
end

#presto_job ⇒ Google::Apis::DataprocV1::PrestoJob

A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. Corresponds to the JSON property prestoJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1710

def presto_job
  @presto_job
end

#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob

A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. Corresponds to the JSON property pysparkJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1716

def pyspark_job
  @pyspark_job
end
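
As a sketch, a PySpark driver is attached by assigning a PySparkJob; only one of the *_job fields should be set on a given Job. The GCS paths below are hypothetical:

job.pyspark_job = Google::Apis::DataprocV1::PySparkJob.new(
  main_python_file_uri: 'gs://example-bucket/word_count.py', # hypothetical path
  args: ['gs://example-bucket/input.txt']
)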

#reference ⇒ Google::Apis::DataprocV1::JobReference

Encapsulates the full scoping used to reference a job. Corresponds to the JSON property reference



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1721

def reference
  @reference
end

#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling

Job scheduling options. Corresponds to the JSON property scheduling



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1726

def scheduling
  @scheduling
end

#spark_job ⇒ Google::Apis::DataprocV1::SparkJob

A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. Corresponds to the JSON property sparkJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1732

def spark_job
  @spark_job
end
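
Similarly, a sketch of assigning a SparkJob; the class and jar path come from the standard Spark examples and are used here only for illustration:

job.spark_job = Google::Apis::DataprocV1::SparkJob.new(
  main_class: 'org.apache.spark.examples.SparkPi',
  jar_file_uris: ['file:///usr/lib/spark/examples/jars/spark-examples.jar'],
  args: ['1000']
)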

#spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob

A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. Corresponds to the JSON property sparkRJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1738

def spark_r_job
  @spark_r_job
end

#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob

A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. Corresponds to the JSON property sparkSqlJob



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1744

def spark_sql_job
  @spark_sql_job
end

#status ⇒ Google::Apis::DataprocV1::JobStatus

Dataproc job status. Corresponds to the JSON property status



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1749

def status
  @status
end

#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>

Output only. The previous job statuses. Corresponds to the JSON property statusHistory



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1754

def status_history
  @status_history
end

#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>

Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release. Corresponds to the JSON property yarnApplications



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1761

def yarn_applications
  @yarn_applications
end
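
A sketch of inspecting this report, assuming the job was re-fetched after submission and assuming YarnApplication's name, state, and progress attributes:

(job.yarn_applications || []).each do |app|
  puts "#{app.name}: #{app.state} (#{(app.progress.to_f * 100).round}%)"
end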

Instance Method Details

#update!(**args) ⇒ Object

Updates the properties of this object from the given keyword arguments.



# File 'lib/google/apis/dataproc_v1/classes.rb', line 1768

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
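
Since update! only assigns keys that are present in args, it can patch a subset of attributes in place. A short sketch:

job.update!(labels: { 'env' => 'staging' })
# Attributes not named above (reference, placement, ...) are left unchanged.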