Class: Google::Apis::DataprocV1::Job

- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in: generated/google/apis/dataproc_v1/classes.rb,
  generated/google/apis/dataproc_v1/representations.rb
Overview
A Cloud Dataproc job resource.
Instance Attribute Summary

- #driver_control_files_uri ⇒ String
  Output only.
- #driver_output_resource_uri ⇒ String
  Output only.
- #hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
  A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
- #hive_job ⇒ Google::Apis::DataprocV1::HiveJob
  A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
- #job_uuid ⇒ String
  Output only.
- #labels ⇒ Hash<String,String>
  Optional.
- #pig_job ⇒ Google::Apis::DataprocV1::PigJob
  A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
- #placement ⇒ Google::Apis::DataprocV1::JobPlacement
  Cloud Dataproc job config.
- #pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
  A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
- #reference ⇒ Google::Apis::DataprocV1::JobReference
  Encapsulates the full scoping used to reference a job.
- #scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
  Job scheduling options.
- #spark_job ⇒ Google::Apis::DataprocV1::SparkJob
  A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
- #spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
  A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
- #status ⇒ Google::Apis::DataprocV1::JobStatus
  Cloud Dataproc job status.
- #status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
  Output only.
- #yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
  Output only.
Instance Method Summary

- #initialize(**args) ⇒ Job (constructor)
  A new instance of Job.
- #update!(**args) ⇒ Object
  Update properties of this object.
Methods included from Core::JsonObjectSupport
Methods included from Core::Hashable
Constructor Details
#initialize(**args) ⇒ Job
Returns a new instance of Job
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1118

def initialize(**args)
  update!(**args)
end
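As a rough illustration of the constructor shown above (values are invented; keyword arguments correspond to the instance attributes documented on this page and are simply forwarded to #update!), a Job can be built in memory like this:

require 'google/apis/dataproc_v1'

# Hypothetical example: construct a Job with a couple of attributes.
job = Google::Apis::DataprocV1::Job.new(
  labels: { 'env' => 'dev' }
)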
Instance Attribute Details
#driver_control_files_uri ⇒ String
Output only. If present, the location of miscellaneous control files which may
be used as part of job setup and handling. If not present, control files may
be placed in the same location as driver_output_uri.
Corresponds to the JSON property driverControlFilesUri
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1024

def driver_control_files_uri
  @driver_control_files_uri
end
#driver_output_resource_uri ⇒ String
Output only. A URI pointing to the location of the stdout of the job's driver
program.
Corresponds to the JSON property driverOutputResourceUri
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1030

def driver_output_resource_uri
  @driver_output_resource_uri
end
#hadoop_job ⇒ Google::Apis::DataprocV1::HadoopJob
A Cloud Dataproc job for running Apache Hadoop MapReduce
(https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html)
jobs on Apache Hadoop YARN
(https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
Corresponds to the JSON property hadoopJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1038

def hadoop_job
  @hadoop_job
end
#hive_job ⇒ Google::Apis::DataprocV1::HiveJob
A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/)
queries on YARN.
Corresponds to the JSON property hiveJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1044

def hive_job
  @hive_job
end
#job_uuid ⇒ String
Output only. A UUID that uniquely identifies a job within the project over
time. This is in contrast to a user-settable reference.job_id that may be
reused over time.
Corresponds to the JSON property jobUuid
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1051

def job_uuid
  @job_uuid
end
#labels ⇒ Hash<String,String>
Optional. The labels to associate with this job. Label keys must contain 1 to
63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
Label values may be empty, but, if present, must contain 1 to 63 characters,
and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).
No more than 32 labels can be associated with a job.
Corresponds to the JSON property labels
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1060

def labels
  @labels
end
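Continuing the hypothetical job object from the constructor example, a labels hash that satisfies the constraints above might look like this (keys and values are invented):

# Hypothetical labels: each key and value stays within 1-63 characters and
# conforms to RFC 1035, well under the 32-label limit.
job.labels = {
  'environment' => 'staging',
  'owner'       => 'analytics'
}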
#pig_job ⇒ Google::Apis::DataprocV1::PigJob
A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries
on YARN.
Corresponds to the JSON property pigJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1066

def pig_job
  @pig_job
end
#placement ⇒ Google::Apis::DataprocV1::JobPlacement
Cloud Dataproc job config.
Corresponds to the JSON property placement
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1071

def placement
  @placement
end
#pyspark_job ⇒ Google::Apis::DataprocV1::PySparkJob
A Cloud Dataproc job for running Apache PySpark
(https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
applications on YARN.
Corresponds to the JSON property pysparkJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1077

def pyspark_job
  @pyspark_job
end
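A minimal sketch of attaching a PySparkJob to the hypothetical job object, assuming PySparkJob exposes main_python_file_uri and args attributes (those names and the gs:// path are not documented on this page and are used here only for illustration):

# Sketch only: point the job at a PySpark driver script.
job.pyspark_job = Google::Apis::DataprocV1::PySparkJob.new(
  main_python_file_uri: 'gs://my-bucket/jobs/wordcount.py',
  args: ['--input', 'gs://my-bucket/input/']
)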
#reference ⇒ Google::Apis::DataprocV1::JobReference
Encapsulates the full scoping used to reference a job.
Corresponds to the JSON property reference
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1082

def reference
  @reference
end
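A sketch of setting the reference, assuming JobReference carries a project_id and a user-settable job_id (in contrast to the output-only #job_uuid documented above); the values are placeholders:

# Hypothetical reference: fully scopes the job within a project.
job.reference = Google::Apis::DataprocV1::JobReference.new(
  project_id: 'my-project',
  job_id: 'wordcount-001'
)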
#scheduling ⇒ Google::Apis::DataprocV1::JobScheduling
Job scheduling options.
Corresponds to the JSON property scheduling
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1087

def scheduling
  @scheduling
end
#spark_job ⇒ Google::Apis::DataprocV1::SparkJob
A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/)
applications on YARN.
Corresponds to the JSON property sparkJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1093

def spark_job
  @spark_job
end
#spark_sql_job ⇒ Google::Apis::DataprocV1::SparkSqlJob
A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/)
queries.
Corresponds to the JSON property sparkSqlJob
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1099

def spark_sql_job
  @spark_sql_job
end
#status ⇒ Google::Apis::DataprocV1::JobStatus
Cloud Dataproc job status.
Corresponds to the JSON property status
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1104

def status
  @status
end
#status_history ⇒ Array<Google::Apis::DataprocV1::JobStatus>
Output only. The previous job status.
Corresponds to the JSON property statusHistory
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1109

def status_history
  @status_history
end
#yarn_applications ⇒ Array<Google::Apis::DataprocV1::YarnApplication>
Output only. The collection of YARN applications spun up by this job. Beta
Feature: This report is available for testing purposes only. It may be changed
before final release.
Corresponds to the JSON property yarnApplications
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1116

def yarn_applications
  @yarn_applications
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'generated/google/apis/dataproc_v1/classes.rb', line 1123

def update!(**args)
  @driver_control_files_uri = args[:driver_control_files_uri] if args.key?(:driver_control_files_uri)
  @driver_output_resource_uri = args[:driver_output_resource_uri] if args.key?(:driver_output_resource_uri)
  @hadoop_job = args[:hadoop_job] if args.key?(:hadoop_job)
  @hive_job = args[:hive_job] if args.key?(:hive_job)
  @job_uuid = args[:job_uuid] if args.key?(:job_uuid)
  @labels = args[:labels] if args.key?(:labels)
  @pig_job = args[:pig_job] if args.key?(:pig_job)
  @placement = args[:placement] if args.key?(:placement)
  @pyspark_job = args[:pyspark_job] if args.key?(:pyspark_job)
  @reference = args[:reference] if args.key?(:reference)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @spark_job = args[:spark_job] if args.key?(:spark_job)
  @spark_sql_job = args[:spark_sql_job] if args.key?(:spark_sql_job)
  @status = args[:status] if args.key?(:status)
  @status_history = args[:status_history] if args.key?(:status_history)
  @yarn_applications = args[:yarn_applications] if args.key?(:yarn_applications)
end
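As the source above shows, #update! only reassigns attributes whose keys are present in args; everything else is left untouched. A small usage sketch, continuing the hypothetical job object from earlier:

# Only :labels appears in args, so only @labels is reassigned.
job.update!(labels: { 'environment' => 'prod' })
job.labels  # => { 'environment' => 'prod' }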