Class: Google::Apis::BigqueryV2::SparkStatistics
- Inherits: Object
  - Object
  - Google::Apis::BigqueryV2::SparkStatistics
- Includes:
- Core::Hashable, Core::JsonObjectSupport
- Defined in:
- lib/google/apis/bigquery_v2/classes.rb,
  lib/google/apis/bigquery_v2/representations.rb
Overview
Statistics for a BigSpark query. Populated as part of JobStatistics2.
Instance Attribute Summary
- #endpoints ⇒ Hash<String,String>
  Output only.
- #gcs_staging_bucket ⇒ String
  Output only.
- #kms_key_name ⇒ String
  Output only.
- #logging_info ⇒ Google::Apis::BigqueryV2::SparkLoggingInfo
  Spark job logs can be filtered by these fields in Cloud Logging.
- #spark_job_id ⇒ String
  Output only.
- #spark_job_location ⇒ String
  Output only.
Instance Method Summary
- #initialize(**args) ⇒ SparkStatistics (constructor)
  A new instance of SparkStatistics.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStatistics
Returns a new instance of SparkStatistics.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8869

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#endpoints ⇒ Hash<String,String>
Output only. Endpoints returned from Dataproc. Key list:
- history_server_endpoint: A link to Spark job UI.
Corresponds to the JSON property endpoints
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8827

def endpoints
  @endpoints
end
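The endpoints hash maps endpoint names to URLs, so the Spark job UI link can be read out of the history_server_endpoint key. A minimal sketch using a plain hash shaped like this field; the URL is a placeholder, real values come from Dataproc:

```ruby
# Hypothetical endpoints hash, shaped like SparkStatistics#endpoints.
# The URL below is a placeholder, not a real history server address.
endpoints = {
  "history_server_endpoint" => "https://example.com/spark-history-ui"
}

# Look up the Spark job UI link, if Dataproc returned one.
spark_ui = endpoints["history_server_endpoint"]
```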
#gcs_staging_bucket ⇒ String
Output only. The Google Cloud Storage bucket that is used as the default file
system by the Spark application. This field is only filled when the Spark
procedure uses the invoker security mode. The gcsStagingBucket bucket is
inferred from the @@spark_proc_properties.staging_bucket system variable (if
it is provided). Otherwise, BigQuery creates a default staging bucket for the
job and returns the bucket name in this field. Example: gs://[bucket_name]
Corresponds to the JSON property gcsStagingBucket
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8837

def gcs_staging_bucket
  @gcs_staging_bucket
end
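Since the field holds a URI in the gs://[bucket_name] form, the bare bucket name can be recovered with simple string handling. A sketch using a made-up bucket name:

```ruby
# Placeholder value in the gs://[bucket_name] form described above.
gcs_staging_bucket = "gs://my-staging-bucket"

# Strip the gs:// scheme to recover the bucket name itself.
bucket_name = gcs_staging_bucket.delete_prefix("gs://")
```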
#kms_key_name ⇒ String
Output only. The Cloud KMS encryption key that is used to protect the
resources created by the Spark job. If the Spark procedure uses the invoker
security mode, the Cloud KMS encryption key is either inferred from the
provided system variable, @@spark_proc_properties.kms_key_name, or the
default key of the BigQuery job's project (if the CMEK organization policy is
enforced). Otherwise, the Cloud KMS key is either inferred from the Spark
connection associated with the procedure (if it is provided), or from the
default key of the Spark connection's project if the CMEK organization policy
is enforced. Example:
projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]
Corresponds to the JSON property kmsKeyName
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8851

def kms_key_name
  @kms_key_name
end
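The key follows the standard Cloud KMS resource-name format shown above, so its components can be pulled apart generically. A sketch with made-up project, ring, and key names:

```ruby
# Placeholder key in the documented format:
# projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]
kms_key_name =
  "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key"

# Split the resource name into collection/id pairs and index by collection.
parts = kms_key_name.split("/").each_slice(2).to_h
project  = parts["projects"]
key_ring = parts["keyRings"]
key      = parts["cryptoKeys"]
```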
#logging_info ⇒ Google::Apis::BigqueryV2::SparkLoggingInfo
Spark job logs can be filtered by these fields in Cloud Logging.
Corresponds to the JSON property loggingInfo
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8856

def logging_info
  @logging_info
end
#spark_job_id ⇒ String
Output only. Spark job ID if a Spark job is created successfully.
Corresponds to the JSON property sparkJobId
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8861

def spark_job_id
  @spark_job_id
end
#spark_job_location ⇒ String
Output only. Location where the Spark job is executed. A location is selected
by BigQuery for jobs configured to run in a multi-region.
Corresponds to the JSON property sparkJobLocation
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8867

def spark_job_location
  @spark_job_location
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 8874

def update!(**args)
  @endpoints = args[:endpoints] if args.key?(:endpoints)
  @gcs_staging_bucket = args[:gcs_staging_bucket] if args.key?(:gcs_staging_bucket)
  @kms_key_name = args[:kms_key_name] if args.key?(:kms_key_name)
  @logging_info = args[:logging_info] if args.key?(:logging_info)
  @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
  @spark_job_location = args[:spark_job_location] if args.key?(:spark_job_location)
end
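The constructor simply delegates to update!, which assigns only the keys actually passed and leaves every other attribute untouched. The same pattern can be sketched in plain Ruby without the google-apis-bigquery_v2 gem (SparkStatisticsSketch is a made-up stand-in, not the generated class):

```ruby
# Simplified stand-in illustrating the initialize -> update! pattern
# used by the generated class. Not the real SparkStatistics.
class SparkStatisticsSketch
  attr_accessor :spark_job_id, :spark_job_location

  def initialize(**args)
    update!(**args)
  end

  # Assign only the attributes present in args; absent keys are left alone.
  def update!(**args)
    @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
    @spark_job_location = args[:spark_job_location] if args.key?(:spark_job_location)
  end
end

stats = SparkStatisticsSketch.new(spark_job_id: "job-123")
```

Because update! checks args.key?, explicitly passing spark_job_location: nil clears the attribute, while omitting the key leaves the current value unchanged.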