Class: Google::Apis::DataprocV1::Job
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  lib/google/apis/dataproc_v1/classes.rb,
  lib/google/apis/dataproc_v1/representations.rb
Overview
A Dataproc job resource.
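A minimal usage sketch, assuming the generated DataprocService#submit_job helper, the SubmitJobRequest wrapper, and application-default credentials; the project, region, cluster name, and jar URI are hypothetical placeholders:

require "googleauth"
require "google/apis/dataproc_v1"

# Build an authenticated service client (application-default credentials).
dataproc = Google::Apis::DataprocV1::DataprocService.new
dataproc.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/cloud-platform"]
)

# Assemble a Job resource: where it runs (placement) plus one of the
# job-type fields (here, spark_job).
job = Google::Apis::DataprocV1::Job.new(
  placement: Google::Apis::DataprocV1::JobPlacement.new(cluster_name: "example-cluster"),
  spark_job: Google::Apis::DataprocV1::SparkJob.new(
    main_class: "org.apache.spark.examples.SparkPi",
    jar_file_uris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
    args: ["1000"]
  ),
  labels: { "env" => "dev" }
)

# Submit the job to the cluster's region.
request = Google::Apis::DataprocV1::SubmitJobRequest.new(job: job)
submitted = dataproc.submit_job("example-project", "us-central1", request)
puts submitted.reference.job_id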
Instance Attribute Summary
- #done ⇒ Boolean (also: #done?)
  Output only.
- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #driver_scheduling_config ⇒ Google::Apis::DataprocV1::DriverSchedulingConfig
  Driver scheduling configuration.
- #flink_job ⇒ Google::Apis::DataprocV1::FlinkJob
  A Dataproc job for running Apache Flink applications on YARN.
- #hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
  A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1::HiveJob
  A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1::PigJob
  A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1::JobPlacement
  Dataproc job config.
- #presto_job ⇒ Google::Apis::DataprocV1::PrestoJob
  A Dataproc job for running Presto (https://prestosql.io/) queries.
- #pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
  A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1::SparkJob
  A Dataproc job for running Apache Spark (https://spark.apache.org/) applications on YARN.
- #spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob
  A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
  A Dataproc job for running Apache Spark SQL (https://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1::JobStatus
  Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
  Output only.
- #trino_job ⇒ Google::Apis::DataprocV1::TrinoJob
  A Dataproc job for running Trino (https://trino.io/) queries.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
  Output only.
Instance Method Summary
- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2859

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#done ⇒ Boolean Also known as: done?
Output only. Indicates whether the job is completed. If the value is false,
the job is still in progress. If true, the job is completed, and the status.
state field indicates whether it was successful, failed, or cancelled.
Corresponds to the JSON property done
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2725

def done
  @done
end
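A polling sketch built on done?, assuming the generated DataprocService#get_job fetch method and the dataproc client from the overview sketch; the IDs are hypothetical:

# Re-fetch the job until the backend reports it finished.
job = dataproc.get_job("example-project", "us-central1", "example-job-id")
until job.done?
  sleep 10
  job = dataproc.get_job("example-project", "us-central1", "example-job-id")
end
puts job.status.state  # per status.state: successful, failed, or cancelled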
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which can
be used as part of job setup and handling. If not present, control files might
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2733

def driver_control_files_uri
  @driver_control_files_uri
end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2739

def driver_output_resource_uri
  @driver_output_resource_uri
end
#driver_scheduling_config ⇒ Google::Apis::DataprocV1::DriverSchedulingConfig
Driver scheduling configuration.
Corresponds to the JSON property driverSchedulingConfig
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2744

def driver_scheduling_config
  @driver_scheduling_config
end
#flink_job ⇒ Google::Apis::DataprocV1::FlinkJob
A Dataproc job for running Apache Flink applications on YARN.
Corresponds to the JSON property flinkJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2749

def flink_job
  @flink_job
end
#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
A Dataproc job for running Apache Hadoop MapReduce
(https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html)
jobs on Apache Hadoop YARN
(https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2757

def hadoop_job
  @hadoop_job
end
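A sketch of populating this field, assuming HadoopJob's main_jar_file_uri and args attributes; the jar path and bucket URIs are placeholders:

# A word-count style MapReduce job reading from and writing to GCS.
job.hadoop_job = Google::Apis::DataprocV1::HadoopJob.new(
  main_jar_file_uri: "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
  args: ["wordcount", "gs://example-bucket/input/", "gs://example-bucket/output/"]
)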
#hive_job ⇒ Google::Apis::DataprocV1::HiveJob
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on
YARN.
Corresponds to the JSON property hiveJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2763

def hive_job
  @hive_job
end
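A sketch of an inline Hive query, assuming HiveJob's query_list attribute and the QueryList class; the table name is hypothetical:

# Run ad-hoc HiveQL statements without staging a query file.
job.hive_job = Google::Apis::DataprocV1::HiveJob.new(
  query_list: Google::Apis::DataprocV1::QueryList.new(
    queries: ["SHOW TABLES;", "SELECT COUNT(*) FROM example_table;"]
  )
)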
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that might be
reused over time.
Corresponds to the JSON property jobUuid
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2770

def job_uuid
  @job_uuid
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to
63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
Label values can be empty, but, if present, must contain 1 to 63 characters,
and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
No more than 32 labels can be associated with a job.
Corresponds to the JSON property labels
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2779

def labels
  @labels
end
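Labels are a plain Hash of String keys and values; a short sketch with example values that respect the 1-to-63-character, RFC 1035 constraints above:

# At most 32 labels per job; keys and values follow RFC 1035.
job.labels = {
  "team" => "data-eng",
  "env"  => "prod"
}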
#pig_job ⇒ Google::Apis::DataprocV1::PigJob
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on
YARN.
Corresponds to the JSON property pigJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2785

def pig_job
  @pig_job
end
#placement ⇒ Google::Apis::DataprocV1::JobPlacement
Dataproc job config.
Corresponds to the JSON property placement
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2790

def placement
  @placement
end
#presto_job ⇒ Google::Apis::DataprocV1::PrestoJob
A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT:
The Dataproc Presto Optional Component
(https://cloud.google.com/dataproc/docs/concepts/components/presto) must be
enabled when the cluster is created to submit a Presto job to the cluster.
Corresponds to the JSON property prestoJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2798

def presto_job
  @presto_job
end
#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
A Dataproc job for running Apache PySpark
(https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2804

def pyspark_job
  @pyspark_job
end
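A sketch of a PySpark job, assuming PySparkJob's main_python_file_uri, args, and py_file_uris attributes; the script URIs and flags are placeholders:

# Driver script plus a helper module distributed to the workers.
job.pyspark_job = Google::Apis::DataprocV1::PySparkJob.new(
  main_python_file_uri: "gs://example-bucket/scripts/etl.py",
  args: ["--date", "2024-01-01"],
  py_file_uris: ["gs://example-bucket/scripts/helpers.py"]
)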
#reference ⇒ Google::Apis::DataprocV1::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2809

def reference
  @reference
end
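A sketch of pinning a caller-chosen job ID, assuming JobReference's project_id and job_id attributes; the values are examples:

# A user-settable job_id, in contrast to the server-generated job_uuid.
job.reference = Google::Apis::DataprocV1::JobReference.new(
  project_id: "example-project",
  job_id: "nightly-etl-2024-01-01"
)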
#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2814

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1::SparkJob
A Dataproc job for running Apache Spark (https://spark.apache.org/)
applications on YARN.
Corresponds to the JSON property sparkJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2820

def spark_job
  @spark_job
end
#spark_r_job ⇒ Google::Apis::DataprocV1::SparkRJob
A Dataproc job for running Apache SparkR
(https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
Corresponds to the JSON property sparkRJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2826

def spark_r_job
  @spark_r_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
A Dataproc job for running Apache Spark SQL (https://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2832

def spark_sql_job
  @spark_sql_job
end
#status ⇒ Google::Apis::DataprocV1::JobStatus
Dataproc job status.
Corresponds to the JSON property status
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2837

def status
  @status
end
#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2842

def status_history
  @status_history
end
#trino_job ⇒ Google::Apis::DataprocV1::TrinoJob
A Dataproc job for running Trino (https://trino.io/) queries. IMPORTANT: The
Dataproc Trino Optional Component
(https://cloud.google.com/dataproc/docs/concepts/components/trino) must be
enabled when the cluster is created to submit a Trino job to the cluster.
Corresponds to the JSON property trinoJob
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2850

def trino_job
  @trino_job
end
#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta
Feature: This report is available for testing purposes only. It might be
changed before final release.
Corresponds to the JSON property yarnApplications
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2857

def yarn_applications
  @yarn_applications
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 2864

def update!(**args)
  @done = args[:done] if args.key?(:done)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @driver_scheduling_config = args[:driver_scheduling_config] if args.key?(:driver_scheduling_config)
  @flink_job = args[:flink_job] if args.key?(:flink_job)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @presto_job = args[:presto_job] if args.key?(:presto_job)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_r_job = args[:spark_r_job] if args.key?(:spark_r_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @trino_job = args[:trino_job] if args.key?(:trino_job)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
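A short sketch of the update! semantics shown in the source above: only keys present in args are assigned, so attributes not mentioned keep their current values:

job = Google::Apis::DataprocV1::Job.new(labels: { "env" => "dev" })
job.update!(labels: { "env" => "prod" })  # reassigns only @labels
job.update!(status: nil)                  # an explicit nil is still assigned
job.labels                                # => { "env" => "prod" }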