Class: Google::Cloud::Bigquery::QueryJob

Inherits:
Job
  • Object
Defined in:
lib/google/cloud/bigquery/query_job.rb

Overview

QueryJob

A Job subclass representing a query operation that may be performed on a Table. A QueryJob instance is created when you call Project#query_job or Dataset#query_job.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT COUNT(word) as count FROM " \
                         "`bigquery-public-data.samples.shakespeare`"

job.wait_until_done!

if job.failed?
  puts job.error
else
  puts job.data.first
end

With multiple statements and child jobs:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

multi_statement_sql = <<~SQL
  -- Declare a variable to hold names as an array.
  DECLARE top_names ARRAY<STRING>;
  -- Build an array of the top 100 names from the year 2017.
  SET top_names = (
  SELECT ARRAY_AGG(name ORDER BY number DESC LIMIT 100)
  FROM `bigquery-public-data.usa_names.usa_1910_current`
  WHERE year = 2017
  );
  -- Which names appear as words in Shakespeare's plays?
  SELECT
  name AS shakespeare_name
  FROM UNNEST(top_names) AS name
  WHERE name IN (
  SELECT word
  FROM `bigquery-public-data.samples.shakespeare`
  );
SQL

job = bigquery.query_job multi_statement_sql

job.wait_until_done!

child_jobs = bigquery.jobs parent_job: job

child_jobs.each do |child_job|
  script_statistics = child_job.script_statistics
  puts script_statistics.evaluation_kind
  script_statistics.stack_frames.each do |stack_frame|
    puts stack_frame.text
  end
end

Direct Known Subclasses

Updater

Defined Under Namespace

Classes: Stage, Step, Updater

Attributes

Instance Method Summary

Methods inherited from Job

#cancel, #configuration, #created_at, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #num_child_jobs, #parent_job_id, #pending?, #project_id, #reload!, #rerun!, #running?, #script_statistics, #started_at, #state, #statistics, #status, #user_email

Instance Method Details

#batch? ⇒ Boolean

Checks if the priority for the query is BATCH.

Returns:

  • (Boolean)

    true when the priority is BATCH, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 96

def batch?
  val = @gapi.configuration.query.priority
  val == "BATCH"
end
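
The priority is set when the job is created. A minimal sketch (assumes the public Shakespeare sample dataset is readable from your project):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Run the query at BATCH priority; BigQuery starts it when idle
# resources are available, at a lower cost of urgency.
job = bigquery.query_job "SELECT COUNT(word) as count FROM " \
                         "`bigquery-public-data.samples.shakespeare`",
                         priority: "BATCH"

job.batch? #=> true
```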

#bytes_processed ⇒ Integer?

The number of bytes processed by the query.

Returns:

  • (Integer, nil)

    Total bytes processed for the job.



# File 'lib/google/cloud/bigquery/query_job.rb', line 214

def bytes_processed
  Integer @gapi.statistics.query.total_bytes_processed
rescue StandardError
  nil
end

#cache? ⇒ Boolean

Checks if the query job looks for an existing result in the query cache. For more information, see Query Caching.

Returns:

  • (Boolean)

    true when the query cache will be used, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 134

def cache?
  val = @gapi.configuration.query.use_query_cache
  return false if val.nil?
  val
end
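
Cache use is controlled by the `cache` option when creating the job; a sketch:

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Disable the query cache so the results are always computed fresh.
job = bigquery.query_job "SELECT word FROM " \
                         "`bigquery-public-data.samples.shakespeare`",
                         cache: false

job.cache? #=> false
```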

#cache_hit? ⇒ Boolean

Checks if the query results are from the query cache.

Returns:

  • (Boolean)

    true when the job statistics indicate a cache hit, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 204

def cache_hit?
  return false unless @gapi.statistics.query
  @gapi.statistics.query.cache_hit
end
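
A sketch of observing a cache hit by running the same query twice (whether the second run is actually served from cache depends on cache validity at that moment):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sql = "SELECT word FROM `bigquery-public-data.samples.shakespeare`"

# Prime the cache, then run the identical query again.
bigquery.query_job(sql).wait_until_done!

job = bigquery.query_job sql
job.wait_until_done!

job.cache_hit? # true when the cached result was still valid
```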

#clustering? ⇒ Boolean?

Checks if the destination table will be clustered.

Returns:

  • (Boolean, nil)

    true when the table will be clustered, or false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 600

def clustering?
  !@gapi.configuration.query.clustering.nil?
end

#clustering_fields ⇒ Array<String>?

One or more fields on which the destination table should be clustered. Must be specified with time-based partitioning; data in the table will first be partitioned and subsequently clustered. The order of the returned fields determines the sort order of the data.

See Google::Cloud::Bigquery::QueryJob::Updater#clustering_fields=.

Returns:

  • (Array<String>, nil)

    The clustering fields, or nil if the destination table will not be clustered.



# File 'lib/google/cloud/bigquery/query_job.rb', line 624

def clustering_fields
  @gapi.configuration.query.clustering.fields if clustering?
end
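
Clustering is configured through the block form of Dataset#query_job via the Updater. A sketch under assumed names (`my_dataset`, `clustered_table`, and the field names are placeholders):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "clustered_table", skip_lookup: true

job = dataset.query_job "SELECT * FROM my_table" do |job|
  job.table = destination_table
  # Clustering requires time-based partitioning on the destination.
  job.time_partitioning_type  = "DAY"
  job.time_partitioning_field = "dob"
  job.clustering_fields = ["last_name", "first_name"]
end

job.wait_until_done!
job.clustering_fields #=> ["last_name", "first_name"]
```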

#data(token: nil, max: nil, start: nil) ⇒ Google::Cloud::Bigquery::Data Also known as: query_results

Retrieves the query results for the job.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sql = "SELECT word FROM `bigquery-public-data.samples.shakespeare`"
job = bigquery.query_job sql

job.wait_until_done!
data = job.data

# Iterate over the first page of results
data.each do |row|
  puts row[:word]
end
# Retrieve the next page of results
data = data.next if data.next?

Parameters:

  • token (String) (defaults to: nil)

    Page token, returned by a previous call, identifying the result set.

  • max (Integer) (defaults to: nil)

    Maximum number of results to return.

  • start (Integer) (defaults to: nil)

    Zero-based index of the starting row to read.

Returns:



# File 'lib/google/cloud/bigquery/query_job.rb', line 686

def data token: nil, max: nil, start: nil
  return nil unless done?
  return Data.from_gapi_json({ rows: [] }, nil, @gapi, service) if dryrun?
  if ddl? || dml?
    data_hash = { totalRows: nil, rows: [] }
    return Data.from_gapi_json data_hash, nil, @gapi, service
  end
  ensure_schema!

  options = { token: token, max: max, start: start }
  data_hash = service.list_tabledata destination_table_dataset_id, destination_table_table_id, options
  Data.from_gapi_json data_hash, destination_table_gapi, @gapi, service
end

#ddl? ⇒ Boolean

Whether the query is a DDL statement.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
query_job = bigquery.query_job "CREATE TABLE my_table (x INT64)"

query_job.statement_type #=> "CREATE_TABLE"
query_job.ddl? #=> true

Returns:

  • (Boolean)



# File 'lib/google/cloud/bigquery/query_job.rb', line 298

def ddl?
  ["CREATE_MODEL", "CREATE_TABLE", "CREATE_TABLE_AS_SELECT", "CREATE_VIEW", "DROP_MODEL", "DROP_TABLE",
   "DROP_VIEW"].include? statement_type
end

#ddl_operation_performed ⇒ String?

The DDL operation performed, possibly dependent on the pre-existence of the DDL target. (See #ddl_target_table.) Possible values (new values might be added in the future):

  • "CREATE": The query created the DDL target.
  • "SKIP": No-op. Example cases: the query is CREATE TABLE IF NOT EXISTS while the table already exists, or the query is DROP TABLE IF EXISTS while the table does not exist.
  • "REPLACE": The query replaced the DDL target. Example case: the query is CREATE OR REPLACE TABLE, and the table already exists.
  • "DROP": The query deleted the DDL target.

Returns:

  • (String, nil)

    The DDL operation performed.



# File 'lib/google/cloud/bigquery/query_job.rb', line 341

def ddl_operation_performed
  return nil unless @gapi.statistics.query
  @gapi.statistics.query.ddl_operation_performed
end

#ddl_target_routine ⇒ Google::Cloud::Bigquery::Routine?

The DDL target routine, in reference state. (See Routine#reference?.) Present only for CREATE/DROP FUNCTION/PROCEDURE queries. (See #statement_type.)

Returns:



# File 'lib/google/cloud/bigquery/query_job.rb', line 354

def ddl_target_routine
  return nil unless @gapi.statistics.query
  ensure_service!
  routine = @gapi.statistics.query.ddl_target_routine
  return nil unless routine
  Google::Cloud::Bigquery::Routine.new_reference_from_gapi routine, service
end

#ddl_target_table ⇒ Google::Cloud::Bigquery::Table?

The DDL target table, in reference state. (See Table#reference?.) Present only for CREATE/DROP TABLE/VIEW queries. (See #statement_type.)

Returns:



# File 'lib/google/cloud/bigquery/query_job.rb', line 370

def ddl_target_table
  return nil unless @gapi.statistics.query
  ensure_service!
  table = @gapi.statistics.query.ddl_target_table
  return nil unless table
  Google::Cloud::Bigquery::Table.new_reference_from_gapi table, service
end

#destination ⇒ Table

The table in which the query results are stored.

Returns:

  • (Table)

    A table instance.



# File 'lib/google/cloud/bigquery/query_job.rb', line 395

def destination
  table = @gapi.configuration.query.destination_table
  return nil unless table
  retrieve_table table.project_id,
                 table.dataset_id,
                 table.table_id
end
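
A sketch of reading the destination (for a plain SELECT without a configured destination, results land in an anonymous temporary table):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT word FROM " \
                         "`bigquery-public-data.samples.shakespeare`"
job.wait_until_done!

# The table holding the query results.
table = job.destination
puts table.table_id
```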

#dml? ⇒ Boolean

Whether the query is a DML statement.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
query_job = bigquery.query_job "UPDATE my_table " \
                               "SET x = x + 1 " \
                               "WHERE x IS NOT NULL"

query_job.statement_type #=> "UPDATE"
query_job.dml? #=> true

Returns:

  • (Boolean)



# File 'lib/google/cloud/bigquery/query_job.rb', line 322

def dml?
  ["INSERT", "UPDATE", "MERGE", "DELETE"].include? statement_type
end

#dryrun? ⇒ Boolean Also known as: dryrun, dry_run, dry_run?

If set, don't actually run this job. A valid query will return a mostly empty response with some processing statistics, while an invalid query will return the same error it would if it wasn't a dry run.

Returns:

  • (Boolean)

    true when the dry run flag is set for the query job, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 149

def dryrun?
  @gapi.configuration.dry_run
end
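
The flag is set with the `dryrun` option when creating the job; a sketch:

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Validate the query and estimate its cost without running it.
job = bigquery.query_job "SELECT word FROM " \
                         "`bigquery-public-data.samples.shakespeare`",
                         dryrun: true

job.dryrun?         #=> true
job.bytes_processed # estimated bytes the real run would scan
```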

#encryption ⇒ Google::Cloud::Bigquery::EncryptionConfiguration

The encryption configuration of the destination table.

Returns:

  • (Google::Cloud::Bigquery::EncryptionConfiguration)

    Custom encryption configuration (e.g., Cloud KMS keys).



# File 'lib/google/cloud/bigquery/query_job.rb', line 447

def encryption
  EncryptionConfiguration.from_gapi @gapi.configuration.query.destination_encryption_configuration
end

#flatten? ⇒ Boolean

Checks if the query job flattens nested and repeated fields in the query results. The default is true. If the value is false, #large_results? should return true.

Returns:

  • (Boolean)

    true when the job flattens results, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 164

def flatten?
  val = @gapi.configuration.query.flatten_results
  return true if val.nil?
  val
end

#interactive? ⇒ Boolean

Checks if the priority for the query is INTERACTIVE.

Returns:

  • (Boolean)

    true when the priority is INTERACTIVE, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 107

def interactive?
  val = @gapi.configuration.query.priority
  return true if val.nil?
  val == "INTERACTIVE"
end

#large_results? ⇒ Boolean

Checks if the query job allows arbitrarily large results at a slight cost to performance.

Returns:

  • (Boolean)

    true when large results are allowed, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 120

def large_results?
  val = @gapi.configuration.query.allow_large_results
  return false if val.nil?
  val
end

#legacy_sql? ⇒ Boolean

Checks if the query job is using legacy SQL.

Returns:

  • (Boolean)

    true when legacy SQL is used, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 408

def legacy_sql?
  val = @gapi.configuration.query.use_legacy_sql
  return true if val.nil?
  val
end

#maximum_billing_tier ⇒ Integer?

Limits the billing tier for this job. Queries that have resource usage beyond this tier will raise (without incurring a charge). If unspecified, this will be set to your project default. For more information, see High-Compute queries.

Returns:

  • (Integer, nil)

    The tier number, or nil for the project default.



# File 'lib/google/cloud/bigquery/query_job.rb', line 180

def maximum_billing_tier
  @gapi.configuration.query.maximum_billing_tier
end

#maximum_bytes_billed ⇒ Integer?

Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will raise (without incurring a charge). If nil, this will be set to your project default.

Returns:

  • (Integer, nil)

    The number of bytes, or nil for the project default.



# File 'lib/google/cloud/bigquery/query_job.rb', line 192

def maximum_bytes_billed
  Integer @gapi.configuration.query.maximum_bytes_billed
rescue StandardError
  nil
end

#num_dml_affected_rows ⇒ Integer?

The number of rows affected by a DML statement. Present only for DML statements INSERT, UPDATE or DELETE. (See #statement_type.)

Returns:

  • (Integer, nil)

    The number of rows affected by a DML statement, or nil if the query is not a DML statement.



# File 'lib/google/cloud/bigquery/query_job.rb', line 385

def num_dml_affected_rows
  return nil unless @gapi.statistics.query
  @gapi.statistics.query.num_dml_affected_rows
end
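
A sketch of reading the affected-row count after a DML statement (`my_dataset.my_table` is a placeholder for one of your own tables):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "UPDATE `my_dataset.my_table` " \
                         "SET x = x + 1 " \
                         "WHERE x IS NOT NULL"
job.wait_until_done!

# Present only for DML statements.
puts job.num_dml_affected_rows unless job.failed?
```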

#query_plan ⇒ Array&lt;Google::Cloud::Bigquery::QueryJob::Stage&gt;?

Describes the execution plan for the query.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sql = "SELECT word FROM `bigquery-public-data.samples.shakespeare`"
job = bigquery.query_job sql

job.wait_until_done!

stages = job.query_plan
stages.each do |stage|
  puts stage.name
  stage.steps.each do |step|
    puts step.kind
    puts step.substeps.inspect
  end
end

Returns:



# File 'lib/google/cloud/bigquery/query_job.rb', line 245

def query_plan
  return nil unless @gapi&.statistics&.query&.query_plan
  Array(@gapi.statistics.query.query_plan).map { |stage| Stage.from_gapi stage }
end

#range_partitioning? ⇒ Boolean

Checks if the destination table will be range partitioned. See Creating and using integer range partitioned tables.

Returns:

  • (Boolean)

    true when the table is range partitioned, or false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 459

def range_partitioning?
  !@gapi.configuration.query.range_partitioning.nil?
end

#range_partitioning_end ⇒ Integer?

The end of range partitioning, exclusive. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The end of range partitioning, exclusive, or nil if not range partitioned.



# File 'lib/google/cloud/bigquery/query_job.rb', line 510

def range_partitioning_end
  @gapi.configuration.query.range_partitioning.range.end if range_partitioning?
end

#range_partitioning_field ⇒ String?

The field on which the destination table will be range partitioned, if any. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64. See Creating and using integer range partitioned tables.

Returns:

  • (String, nil)

    The partition field, if a field was configured, or nil if not range partitioned.



# File 'lib/google/cloud/bigquery/query_job.rb', line 473

def range_partitioning_field
  @gapi.configuration.query.range_partitioning.field if range_partitioning?
end

#range_partitioning_interval ⇒ Integer?

The width of each interval. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The width of each interval, for data in range partitions, or nil if not range partitioned.



# File 'lib/google/cloud/bigquery/query_job.rb', line 498

def range_partitioning_interval
  @gapi.configuration.query.range_partitioning.range.interval if range_partitioning?
end

#range_partitioning_start ⇒ Integer?

The start of range partitioning, inclusive. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The start of range partitioning, inclusive, or nil if not range partitioned.



# File 'lib/google/cloud/bigquery/query_job.rb', line 485

def range_partitioning_start
  @gapi.configuration.query.range_partitioning.range.start if range_partitioning?
end
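
Range partitioning is configured through the Updater in the block form of query_job. A sketch under assumed names (`my_dataset` and `my_table` are placeholders):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_table", skip_lookup: true

job = dataset.query_job "SELECT num FROM " \
                        "UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  # Partition rows into ten ranges of width 10 over [0, 100).
  job.range_partitioning_field    = "num"
  job.range_partitioning_start    = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end      = 100
end

job.wait_until_done!
job.range_partitioning? #=> true
```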

#standard_sql? ⇒ Boolean

Checks if the query job is using standard SQL.

Returns:

  • (Boolean)

    true when standard SQL is used, false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 419

def standard_sql?
  !legacy_sql?
end
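
The dialect is chosen with the `legacy_sql` (or `standard_sql`) option when creating the job; a sketch:

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Legacy SQL uses [project:dataset.table] references rather than
# standard SQL backticks.
job = bigquery.query_job "SELECT word FROM " \
                         "[bigquery-public-data:samples.shakespeare]",
                         legacy_sql: true

job.legacy_sql?   #=> true
job.standard_sql? #=> false
```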

#statement_type ⇒ String?

The type of query statement, if valid. Possible values include the DDL types listed under #ddl?, the DML types listed under #dml?, and "SELECT" (new values might be added in the future).

Returns:

  • (String, nil)

    The type of query statement.



# File 'lib/google/cloud/bigquery/query_job.rb', line 276

def statement_type
  return nil unless @gapi.statistics.query
  @gapi.statistics.query.statement_type
end

#time_partitioning? ⇒ Boolean?

Checks if the destination table will be time-partitioned. See Partitioned Tables.

Returns:

  • (Boolean, nil)

    true when the table will be time-partitioned, or false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 523

def time_partitioning?
  !@gapi.configuration.query.time_partitioning.nil?
end

#time_partitioning_expiration ⇒ Integer?

The expiration for the destination table partitions, if any, in seconds. See Partitioned Tables.

Returns:

  • (Integer, nil)

    The expiration time, in seconds, for data in partitions, or nil if not present.



# File 'lib/google/cloud/bigquery/query_job.rb', line 567

def time_partitioning_expiration
  tp = @gapi.configuration.query.time_partitioning
  tp.expiration_ms / 1_000 if tp && !tp.expiration_ms.nil?
end

#time_partitioning_field ⇒ String?

The field on which the destination table will be partitioned, if any. If not set, the destination table will be partitioned by pseudo column _PARTITIONTIME; if set, the table will be partitioned by this field. See Partitioned Tables.

Returns:

  • (String, nil)

    The partition field, if a field was configured. nil if not partitioned or not set (partitioned by pseudo column '_PARTITIONTIME').



# File 'lib/google/cloud/bigquery/query_job.rb', line 552

def time_partitioning_field
  return nil unless time_partitioning?
  @gapi.configuration.query.time_partitioning.field
end

#time_partitioning_require_filter? ⇒ Boolean

If set to true, queries over the destination table will be required to specify a partition filter that can be used for partition elimination. See Partitioned Tables.

Returns:

  • (Boolean)

    true when a partition filter will be required, or false otherwise.



# File 'lib/google/cloud/bigquery/query_job.rb', line 583

def time_partitioning_require_filter?
  tp = @gapi.configuration.query.time_partitioning
  return false if tp.nil? || tp.require_partition_filter.nil?
  tp.require_partition_filter
end

#time_partitioning_type ⇒ String?

The period for which the destination table will be partitioned, if any. See Partitioned Tables.

Returns:

  • (String, nil)

    The partition type. Currently the only supported value is "DAY", or nil if not present.



# File 'lib/google/cloud/bigquery/query_job.rb', line 536

def time_partitioning_type
  @gapi.configuration.query.time_partitioning.type if time_partitioning?
end
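
Time partitioning is configured through the Updater in the block form of query_job. A sketch under assumed names (`my_dataset` and `my_table` are placeholders):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_table", skip_lookup: true

job = dataset.query_job "SELECT * FROM UNNEST(" \
                        "GENERATE_TIMESTAMP_ARRAY(" \
                        "'2018-10-01 00:00:00', '2018-10-10 00:00:00', " \
                        "INTERVAL 1 DAY)) AS dob" do |job|
  job.table = destination_table
  # Partition by day on the dob field; expire partitions after one day.
  job.time_partitioning_type       = "DAY"
  job.time_partitioning_field      = "dob"
  job.time_partitioning_expiration = 86_400
end

job.wait_until_done!
job.time_partitioning_type #=> "DAY"
```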

#udfs ⇒ Array&lt;String&gt;

The user-defined function resources used in the query. May be either a code resource to load from a Google Cloud Storage URI (gs://bucket/path), or an inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code. See User-Defined Functions.

Returns:

  • (Array<String>)

    An array containing Google Cloud Storage URIs and/or inline source code.



# File 'lib/google/cloud/bigquery/query_job.rb', line 434

def udfs
  udfs_gapi = @gapi.configuration.query.user_defined_function_resources
  return nil unless udfs_gapi
  Array(udfs_gapi).map { |udf| udf.inline_code || udf.resource_uri }
end
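
UDF resources are supplied with the `udfs` option when creating a legacy SQL query job. A sketch (the bucket, file, and table names are placeholders):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Reference a JavaScript UDF stored in Cloud Storage; inline code
# strings are also accepted.
job = bigquery.query_job "SELECT name FROM [my_dataset.my_table]",
                         legacy_sql: true,
                         udfs: ["gs://my-bucket/my-udf.js"]

job.udfs #=> ["gs://my-bucket/my-udf.js"]
```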

#wait_until_done! ⇒ Object

Refreshes the job until the job is DONE. The delay between refreshes will incrementally increase.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sql = "SELECT word FROM `bigquery-public-data.samples.shakespeare`"
job = bigquery.query_job sql

job.wait_until_done!
job.done? #=> true


# File 'lib/google/cloud/bigquery/query_job.rb', line 643

def wait_until_done!
  return if done?

  ensure_service!
  loop do
    query_results_gapi = service.job_query_results job_id, location: location, max: 0
    if query_results_gapi.job_complete
      @destination_schema_gapi = query_results_gapi.schema
      break
    end
  end
  reload!
end