Class: Google::Cloud::Bigquery::Table

Inherits:
Object
Defined in:
lib/google/cloud/bigquery/table.rb,
lib/google/cloud/bigquery/table/list.rb,
lib/google/cloud/bigquery/table/async_inserter.rb

Overview

Table

A named resource representing a BigQuery table that holds zero or more records. Every table is defined by a schema that may contain nested and repeated fields.

The Table class can also represent a view, which is a virtual table defined by a SQL query. BigQuery's views are logical views, not materialized views, which means that the query that defines the view is re-executed every time the view is queried. Queries are billed according to the total amount of data in all table fields referenced directly or indirectly by the top-level query. (See #view?, #query, #query=, and Dataset#create_view.)

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

table = dataset.create_table "my_table" do |schema|
  schema.string "first_name", mode: :required
  schema.record "cities_lived", mode: :repeated do |nested_schema|
    nested_schema.string "place", mode: :required
    nested_schema.integer "number_of_years", mode: :required
  end
end

row = {
  "first_name" => "Alice",
  "cities_lived" => [
    {
      "place" => "Seattle",
      "number_of_years" => 5
    },
    {
      "place" => "Stockholm",
      "number_of_years" => 6
    }
  ]
}
table.insert row

Creating a BigQuery view:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.create_view "my_view",
         "SELECT name, age FROM `my_project.my_dataset.my_table`"
view.view? # true


Direct Known Subclasses

Updater

Defined Under Namespace

Classes: AsyncInserter, List, Updater


Instance Method Details

#api_url ⇒ String?

A URL that can be used to access the table using the REST API.

Returns:

  • (String, nil)

    A REST URL for the resource, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 611

def api_url
  return nil if reference?
  ensure_full_data!
  @gapi.self_link
end

#buffer_bytes ⇒ Integer?

A lower-bound estimate of the number of bytes currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
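Examples:

A minimal sketch of checking the streaming buffer size (assumes rows have been streamed to the table; the buffer may be absent, so guard against nil):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

bytes = table.buffer_bytes
puts "streaming buffer: ~#{bytes} bytes" unless bytes.nil?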

Returns:

  • (Integer, nil)

    The estimated number of bytes in the buffer, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1098

def buffer_bytes
  return nil if reference?
  ensure_full_data!
  @gapi.streaming_buffer&.estimated_bytes
end

#buffer_oldest_at ⇒ Time?

The time of the oldest entry currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.

Returns:

  • (Time, nil)

    The oldest entry time, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1132

def buffer_oldest_at
  return nil if reference?
  ensure_full_data!
  return nil unless @gapi.streaming_buffer
  oldest_entry_time = @gapi.streaming_buffer.oldest_entry_time
  Convert.millis_to_time oldest_entry_time
end

#buffer_rows ⇒ Integer?

A lower-bound estimate of the number of rows currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.

Returns:

  • (Integer, nil)

    The estimated number of rows in the buffer, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1116

def buffer_rows
  return nil if reference?
  ensure_full_data!
  @gapi.streaming_buffer&.estimated_rows
end

#bytes_count ⇒ Integer?

The number of bytes in the table.

Returns:

  • (Integer, nil)

    The count of bytes in the table, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 656

def bytes_count
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.num_bytes
  rescue StandardError
    nil
  end
end

#clustering? ⇒ Boolean?

Checks if the table is clustered.

Returns:

  • (Boolean, nil)

    true when the table is clustered, or false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).




# File 'lib/google/cloud/bigquery/table.rb', line 473

def clustering?
  return nil if reference?
  !@gapi.clustering.nil?
end

#clustering_fields ⇒ Array<String>?

One or more fields on which data should be clustered. Must be specified with time partitioning; data in the table will first be partitioned and subsequently clustered. The order of the returned fields determines the sort order of the data.

See Google::Cloud::Bigquery::Table::Updater#clustering_fields=.
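
Examples:

A minimal sketch of reading the clustering configuration, guarding for tables that are not clustered:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

puts table.clustering_fields.join(", ") if table.clustering?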

Returns:

  • (Array<String>, nil)

    The clustering fields, or nil if the table is not clustered or if the table is a reference (see #reference?).




# File 'lib/google/cloud/bigquery/table.rb', line 499

def clustering_fields
  return nil if reference?
  ensure_full_data!
  @gapi.clustering.fields if clustering?
end

#copy(destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean

Copies the data from the table to another table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.

The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

table.copy destination_table

Passing a string identifier for the destination table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.copy "other-project:other_dataset.other_table"

Parameters:

  • destination_table (Table, String)

    The destination for the copied data. This can also be a string identifier as specified by the Standard SQL Query Reference (project-name.dataset_id.table_id) or the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets.

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the destination table. The default value is empty.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the destination table already contains data.

Yields:

  • (job)

    a job configuration object

Yield Parameters:

  • job (Google::Cloud::Bigquery::CopyJob::Updater)

    the job configuration object

Returns:

  • (Boolean)

    Returns true if the copy operation succeeded.



# File 'lib/google/cloud/bigquery/table.rb', line 1617

def copy destination_table, create: nil, write: nil, &block
  job = copy_job destination_table, create: create, write: write, &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end

#copy_job(destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob

Copies the data from the table to another table using an asynchronous method. In this method, a CopyJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #copy.

The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

copy_job = table.copy_job destination_table

Passing a string identifier for the destination table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

copy_job = table.copy_job "other-project:other_dataset.other_table"

copy_job.wait_until_done!
copy_job.done? #=> true

Parameters:

  • destination_table (Table, String)

    The destination for the copied data. This can also be a string identifier as specified by the Standard SQL Query Reference (project-name.dataset_id.table_id) or the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets.

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the destination table. The default value is empty.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the destination table already contains data.
  • job_id (String) (defaults to: nil)

    A user-defined ID for the copy job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters. If job_id is provided, then prefix will not be used.

    See Generating a job ID.

  • prefix (String) (defaults to: nil)

    A string, usually human-readable, that will be prepended to a generated value to produce a unique job ID. For example, the prefix daily_import_job_ can be given to generate a job ID such as daily_import_job_12vEDtMQ0mbp1Mo5Z7mzAFQJZazh. The prefix must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length of the entire ID is 1,024 characters. If job_id is provided, then prefix will not be used.

  • labels (Hash) (defaults to: nil)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.
  • dryrun (Boolean) (defaults to: nil)

    If set, don't actually run this job. However, behavior is undefined for non-query jobs and may result in an error. Deprecated.

Yields:

  • (job)

    a job configuration object

Yield Parameters:

  • job (Google::Cloud::Bigquery::CopyJob::Updater)

    the job configuration object

Returns:

  • (Google::Cloud::Bigquery::CopyJob)


# File 'lib/google/cloud/bigquery/table.rb', line 1535

def copy_job destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil
  ensure_service!
  options = { create: create, write: write, dryrun: dryrun, labels: labels, job_id: job_id, prefix: prefix }
  updater = CopyJob::Updater.from_options(
    service,
    table_ref,
    Service.get_table_ref(destination_table, default_ref: table_ref),
    options
  )
  updater.location = location if location # may be table reference

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.copy_table job_gapi
  Job.from_gapi gapi, service
end

#created_at ⇒ Time?

The time when this table was created.

Returns:

  • (Time, nil)

    The creation time, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 692

def created_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.creation_time
end

#data(token: nil, max: nil, start: nil) ⇒ Google::Cloud::Bigquery::Data

Retrieves data from the table.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the data retrieval.

Examples:

Paginate rows of data: (See Data#next)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

data = table.data

# Iterate over the first page of results
data.each do |row|
  puts row[:name]
end
# Retrieve the next page of results
data = data.next if data.next?

Retrieve all rows of data: (See Data#all)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

data = table.data

data.all do |row|
  puts row[:name]
end
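
Limiting and offsetting results (a minimal sketch using the max and start parameters documented below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

# Read up to 100 rows, starting at row 200.
data = table.data max: 100, start: 200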

Parameters:

  • token (String) (defaults to: nil)

    Page token, returned by a previous call, identifying the result set.

  • max (Integer) (defaults to: nil)

    Maximum number of results to return.

  • start (Integer) (defaults to: nil)

    Zero-based index of the starting row to read.

Returns:

  • (Google::Cloud::Bigquery::Data)


# File 'lib/google/cloud/bigquery/table.rb', line 1425

def data token: nil, max: nil, start: nil
  ensure_service!
  reload! unless resource_full?
  data_json = service.list_tabledata dataset_id, table_id, token: token, max: max, start: start
  Data.from_gapi_json data_json, gapi, nil, service
end

#dataset_id ⇒ String

The ID of the Dataset containing this table.

Returns:

  • (String)

    The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.



# File 'lib/google/cloud/bigquery/table.rb', line 131

def dataset_id
  return reference.dataset_id if reference?
  @gapi.table_reference.dataset_id
end

#delete ⇒ Boolean

Permanently deletes the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.delete

Returns:

  • (Boolean)

    Returns true if the table was deleted.



# File 'lib/google/cloud/bigquery/table.rb', line 2343

def delete
  ensure_service!
  service.delete_table dataset_id, table_id
  # Set flag for #exists?
  @exists = false
  true
end

#description ⇒ String?

A user-friendly description of the table.

Returns:

  • (String, nil)

    The description, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 625

def description
  return nil if reference?
  ensure_full_data!
  @gapi.description
end

#description=(new_description) ⇒ Object

Updates the user-friendly description of the table.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
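
Examples:

A minimal sketch:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.description = "Sales data for the current quarter"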

Parameters:

  • new_description (String)

    The new user-friendly description.



# File 'lib/google/cloud/bigquery/table.rb', line 642

def description= new_description
  reload! unless resource_full?
  @gapi.update! description: new_description
  patch_gapi! :description
end

#encryption ⇒ EncryptionConfiguration?

The EncryptionConfiguration object that represents the custom encryption method used to protect the table. If not set, Dataset#default_encryption is used.

Present only if the table is using custom encryption.

Returns:

  • (EncryptionConfiguration, nil)

    The encryption configuration, or nil if none is set or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1004

def encryption
  return nil if reference?
  ensure_full_data!
  return nil if @gapi.encryption_configuration.nil?
  EncryptionConfiguration.from_gapi(@gapi.encryption_configuration).freeze
end

#encryption=(value) ⇒ Object

Set the EncryptionConfiguration object that represents the custom encryption method used to protect the table. If not set, Dataset#default_encryption is used.

Present only if the table is using custom encryption.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
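
Examples:

A minimal sketch, assuming a Cloud KMS key name and using the top-level encryption helper (Project#encryption) to build the configuration:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d"
encrypt_config = bigquery.encryption kms_key: key_name
table.encryption = encrypt_config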

Parameters:

  • value (EncryptionConfiguration)

    The new encryption configuration.



# File 'lib/google/cloud/bigquery/table.rb', line 1029

def encryption= value
  reload! unless resource_full?
  @gapi.encryption_configuration = value.to_gapi
  patch_gapi! :encryption_configuration
end

#etag ⇒ String?

The ETag hash of the table.

Returns:

  • (String, nil)

    The ETag hash, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 597

def etag
  return nil if reference?
  ensure_full_data!
  @gapi.etag
end

#exists?(force: false) ⇒ Boolean

Determines whether the table exists in the BigQuery service. The result is cached locally. To refresh state, set force to true.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true
table.exists? # true
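
Re-checking the service with force (a minimal sketch; this bypasses the memoized value described below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.exists? force: true # always calls the service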

Parameters:

  • force (Boolean) (defaults to: false)

    Force the latest resource representation to be retrieved from the BigQuery service when true. Otherwise the return value of this method will be memoized to reduce the number of API calls made to the BigQuery service. The default is false.

Returns:

  • (Boolean)

    true when the table exists in the BigQuery service, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 2399

def exists? force: false
  return gapi_exists? if force
  # If we have a value, return it
  return @exists unless @exists.nil?
  # Always true if we have a gapi object
  return true if resource?
  gapi_exists?
end

#expires_at ⇒ Time?

The time when this table expires. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

Returns:

  • (Time, nil)

    The expiration time, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 708

def expires_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.expiration_time
end

#external ⇒ External::DataSource?

The External::DataSource (or subclass) object that represents the external data source backing the table. Data can be queried from the table, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.

Present only if the table represents an External Data Source. See #external? and External::DataSource.

Returns:

  • (External::DataSource, nil)

    The external data source, or nil if none is present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1052

def external
  return nil if reference?
  ensure_full_data!
  return nil if @gapi.external_data_configuration.nil?
  External.from_gapi(@gapi.external_data_configuration).freeze
end

#external=(external) ⇒ Object

Set the External::DataSource (or subclass) object that represents the external data source backing the table. Data can be queried from the table, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.

Use only if the table represents an External Data Source. See #external? and External::DataSource.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
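
Examples:

A minimal sketch, assuming a CSV file in Cloud Storage and using the top-level external helper (Project#external) to build the data source:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

csv_url = "gs://bucket/path/to/data.csv"
csv_source = bigquery.external csv_url do |csv|
  csv.autodetect = true
  csv.skip_leading_rows = 1
end

table.external = csv_source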

Parameters:

  • external (External::DataSource)

    The new external data source.



# File 'lib/google/cloud/bigquery/table.rb', line 1080

def external= external
  reload! unless resource_full?
  @gapi.external_data_configuration = external.to_gapi
  patch_gapi! :external_data_configuration
end

#external? ⇒ Boolean?

Checks if the table's type is "EXTERNAL", indicating that the table represents an External Data Source. See External::DataSource.

Returns:

  • (Boolean, nil)

    true when the type is EXTERNAL, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 768

def external?
  return nil if reference?
  @gapi.type == "EXTERNAL"
end

#extract(extract_url, format: nil, compression: nil, delimiter: nil, header: nil) {|job| ... } ⇒ Boolean

Extracts the data from the table to a Google Cloud Storage file using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #extract_job.

The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

Extract to a JSON file:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.extract "gs://my-bucket/file-name.json", format: "json"

Extract to a CSV file, attaching labels to the job:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.extract "gs://my-bucket/file-name.csv" do |extract|
  extract.labels = { "custom-label" => "custom-value" }
end

Parameters:

  • extract_url (Google::Cloud::Storage::File, String, Array<String>)

    The Google Storage file or file URI pattern(s) to which BigQuery should extract the table data.

  • format (String) (defaults to: nil)

    The exported file format. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro

  • compression (String) (defaults to: nil)

    The compression type to use for exported files. Possible values include GZIP and NONE. The default value is NONE.

  • delimiter (String) (defaults to: nil)

    Delimiter to use between fields in the exported data. Default is ,.

  • header (Boolean) (defaults to: nil)

    Whether to print out a header row in the results. Default is true.

Yields:

  • (job)

    a job configuration object

Yield Parameters:

  • job (Google::Cloud::Bigquery::ExtractJob::Updater)

    the job configuration object

Returns:

  • (Boolean)

    Returns true if the extract operation succeeded.




# File 'lib/google/cloud/bigquery/table.rb', line 1790

def extract extract_url, format: nil, compression: nil, delimiter: nil, header: nil, &block
  job = extract_job extract_url,
                    format:      format,
                    compression: compression,
                    delimiter:   delimiter,
                    header:      header,
                    &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end

#extract_job(extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob

Extracts the data from the table to a Google Cloud Storage file using an asynchronous method. In this method, an ExtractJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #extract.

The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will automatically be set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

extract_job = table.extract_job "gs://my-bucket/file-name.json",
                                format: "json"
extract_job.wait_until_done!
extract_job.done? #=> true

Parameters:

  • extract_url (Google::Cloud::Storage::File, String, Array<String>)

    The Google Storage file or file URI pattern(s) to which BigQuery should extract the table data.

  • format (String) (defaults to: nil)

    The exported file format. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro

  • compression (String) (defaults to: nil)

    The compression type to use for exported files. Possible values include GZIP and NONE. The default value is NONE.

  • delimiter (String) (defaults to: nil)

    Delimiter to use between fields in the exported data. Default is ,.

  • header (Boolean) (defaults to: nil)

    Whether to print out a header row in the results. Default is true.

  • job_id (String) (defaults to: nil)

    A user-defined ID for the extract job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters. If job_id is provided, then prefix will not be used.

    See Generating a job ID.

  • prefix (String) (defaults to: nil)

    A string, usually human-readable, that will be prepended to a generated value to produce a unique job ID. For example, the prefix daily_import_job_ can be given to generate a job ID such as daily_import_job_12vEDtMQ0mbp1Mo5Z7mzAFQJZazh. The prefix must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length of the entire ID is 1,024 characters. If job_id is provided, then prefix will not be used.

  • labels (Hash) (defaults to: nil)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.
  • dryrun (Boolean) (defaults to: nil)

    If set, don't actually run this job. However, behavior is undefined for non-query jobs and may result in an error. Deprecated.

Yields:

  • (job)

    a job configuration object

Yield Parameters:

  • job (Google::Cloud::Bigquery::ExtractJob::Updater)

    the job configuration object

Returns:

  • (Google::Cloud::Bigquery::ExtractJob)



# File 'lib/google/cloud/bigquery/table.rb', line 1714

def extract_job extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil,
                prefix: nil, labels: nil, dryrun: nil
  ensure_service!
  options = { format: format, compression: compression, delimiter: delimiter, header: header, dryrun: dryrun,
              job_id: job_id, prefix: prefix, labels: labels }
  updater = ExtractJob::Updater.from_options service, table_ref, extract_url, options
  updater.location = location if location # may be table reference

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.extract_table job_gapi
  Job.from_gapi gapi, service
end

#fields ⇒ Array<Schema::Field>?

The fields of the table, obtained from its schema.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.fields.each do |field|
  puts field.name
end

Returns:

  • (Array<Schema::Field>, nil)

    An array of schema fields, or nil if the object is a reference (see #reference?).


# File 'lib/google/cloud/bigquery/table.rb', line 942

def fields
  return nil if reference?
  schema.fields
end

#headers ⇒ Array<Symbol>?

The names of the columns in the table, obtained from its schema.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.headers.each do |header|
  puts header
end

Returns:

  • (Array<Symbol>, nil)

    An array of column names.



# File 'lib/google/cloud/bigquery/table.rb', line 965

def headers
  return nil if reference?
  schema.headers
end

#id ⇒ String?

The combined Project ID, Dataset ID, and Table ID for this table, in the format specified by the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets. To use this value in queries see #query_id.
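
Examples:

A minimal sketch (the project name shown is hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.id #=> "my-project:my_dataset.my_table"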

Returns:

  • (String, nil)

    The combined ID, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 518

def id
  return nil if reference?
  @gapi.id
end

#insert(rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil) ⇒ Google::Cloud::Bigquery::InsertResponse

Inserts data into the table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.

Because BigQuery's streaming API is designed for high insertion rates, modifications to the underlying table metadata are eventually consistent when interacting with the streaming system. In most cases metadata changes are propagated within minutes, but during this period API responses may reflect the inconsistent state of the table.

The value :skip can be provided to skip the generation of IDs for all rows, or to skip the generation of an ID for a specific row in the array.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
table.insert rows

Avoid retrieving the dataset and table with skip_lookup:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset", skip_lookup: true
table = dataset.table "my_table", skip_lookup: true

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
table.insert rows
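
Disabling best-effort de-duplication with insert_ids: :skip (a minimal sketch; see the insert_ids parameter below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
table.insert rows, insert_ids: :skip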

Parameters:

  • rows (Hash, Array<Hash>)

    A hash object or array of hash objects containing the data. Required.

  • insert_ids (Array<String|Symbol>, Symbol) (defaults to: nil)

    A unique ID for each row. BigQuery uses this property to detect duplicate insertion requests on a best-effort basis. For more information, see data consistency. Optional. If not provided, the client library will assign a UUID to each row before the request is sent.

  • skip_invalid (Boolean) (defaults to: nil)

    Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.

  • ignore_unknown (Boolean) (defaults to: nil)

    Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.

Returns:

  • (Google::Cloud::Bigquery::InsertResponse)

Raises:

  • (ArgumentError)




# File 'lib/google/cloud/bigquery/table.rb', line 2256

def insert rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil
  rows = [rows] if rows.is_a? Hash
  raise ArgumentError, "No rows provided" if rows.empty?

  insert_ids = Array.new(rows.count) { :skip } if insert_ids == :skip
  insert_ids = Array insert_ids
  if insert_ids.count.positive? && insert_ids.count != rows.count
    raise ArgumentError, "insert_ids must be the same size as rows"
  end

  ensure_service!
  options = { skip_invalid: skip_invalid, ignore_unknown: ignore_unknown, insert_ids: insert_ids }
  gapi = service.insert_tabledata dataset_id, table_id, rows, options
  InsertResponse.from_gapi rows, gapi
end

#insert_async(skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter

Create an asynchronous inserter object used to insert rows in batches.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
inserter = table.insert_async do |result|
  if result.error?
    log_error result.error
  else
    log_insert "inserted #{result.insert_count} rows " \
      "with #{result.error_count} errors"
  end
end

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
inserter.insert rows

inserter.stop.wait!

Parameters:

  • skip_invalid (Boolean) (defaults to: nil)

    Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.

  • ignore_unknown (Boolean) (defaults to: nil)

    Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.

  • max_bytes (Integer) (defaults to: 10_000_000)

    The maximum size of rows to be collected before the batch is published. Default is 10,000,000 (10MB).

  • max_rows (Integer) (defaults to: 500)

    The maximum number of rows to be collected before the batch is published. Default is 500.

  • interval (Numeric) (defaults to: 10)

    The number of seconds to collect rows before the batch is published. Default is 10.

  • threads (Integer) (defaults to: 4)

    The number of threads used to insert batches of rows. Default is 4.

Yields:

  • (response)

    the callback for when a batch of rows is inserted

Yield Parameters:

  • response (Table::AsyncInserter::Result)

    the result of the asynchronous insert

Returns:

  • (Table::AsyncInserter)


# File 'lib/google/cloud/bigquery/table.rb', line 2319

def insert_async skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10,
                 threads: 4, &block
  ensure_service!

  AsyncInserter.new self, skip_invalid: skip_invalid, ignore_unknown: ignore_unknown, max_bytes: max_bytes,
                          max_rows: max_rows, interval: interval, threads: threads, &block
end

#labels ⇒ Hash<String, String>?

A hash of user-provided labels associated with this table. Labels are used to organize and group tables. See Using Labels.

The returned hash is frozen and changes are not allowed. Use #labels= to replace the entire hash.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

labels = table.labels
labels["department"] #=> "shipping"

Returns:

  • (Hash<String, String>, nil)

    A hash containing key/value pairs.



# File 'lib/google/cloud/bigquery/table.rb', line 809

def labels
  return nil if reference?
  m = @gapi.labels
  m = m.to_h if m.respond_to? :to_h
  m.dup.freeze
end

#labels=(labels) ⇒ Object

Updates the hash of user-provided labels associated with this table. Labels are used to organize and group tables. See Using Labels.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.labels = { "department" => "shipping" }

Parameters:

  • labels (Hash<String, String>)

    A hash containing key/value pairs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.


# File 'lib/google/cloud/bigquery/table.rb', line 853

def labels= labels
  reload! unless resource_full?
  @gapi.labels = labels
  patch_gapi! :labels
end

#load(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, autodetect: nil, null_marker: nil) {|updater| ... } ⇒ Boolean

Loads data into the table. You can pass a google-cloud storage file path or a google-cloud storage file instance. Or, you can upload a file directly. See Loading Data with a POST Request.

The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

success = table.load "gs://my-bucket/file-name.csv"

Pass a google-cloud-storage File instance:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
success = table.load file

Pass a list of google-cloud-storage files:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
table.load [file, "gs://my-bucket/file-name2.csv"]

Upload a file directly:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

file = File.open "my_data.json"
success = table.load file do |j|
  j.format = "newline_delimited_json"
end
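
Replacing the table data (a minimal sketch using the write disposition documented below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

success = table.load "gs://my-bucket/file-name.csv", write: "truncate"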

Parameters:

  • files (File, Google::Cloud::Storage::File, String, URI, Array<Google::Cloud::Storage::File, String, URI>)

    A file or the URI of a Google Cloud Storage file, or an Array of those, containing data to load into the table.

  • format (String) (defaults to: nil)

    The format of the data to load. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro
    • orc - ORC
    • parquet - Parquet
    • datastore_backup - Cloud Datastore backup

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the table. The default value is append.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the table already contains data.
  • projection_fields (Array<String>) (defaults to: nil)

    If the format option is set to datastore_backup, indicates which entity properties to load from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If not set, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned.

  • jagged_rows (Boolean) (defaults to: nil)

    Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

  • quoted_newlines (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

  • autodetect (Boolean) (defaults to: nil)

    Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources. The default value is false.

  • encoding (String) (defaults to: nil)

    The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8.

  • delimiter (String) (defaults to: nil)

    Specifies the separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. Default is ,.

  • ignore_unknown (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

    The format property determines what BigQuery treats as an extra value:

    • CSV: Trailing columns
    • JSON: Named values that don't match any column names
  • max_bad_records (Integer) (defaults to: nil)

    The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

  • null_marker (String) (defaults to: nil)

    Specifies a string that represents a null value in a CSV file. For example, if you specify \N, BigQuery interprets \N as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

  • quote (String) (defaults to: nil)

    The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ". If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

  • skip_leading (Integer) (defaults to: nil)

    The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.

Yields:

  • (updater)

    A block for setting the schema of the destination table and other options for the load job. The schema can be omitted if the destination table already exists, or if you're loading data from a Google Cloud Datastore backup.

Yield Parameters:

  • updater (Google::Cloud::Bigquery::LoadJob::Updater)

    the load job object to be updated

Returns:

  • (Boolean)

    Returns true if the load job was successful.



# File 'lib/google/cloud/bigquery/table.rb', line 2179

def load files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil,
         quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil,
         quote: nil, skip_leading: nil, autodetect: nil, null_marker: nil, &block
  job = load_job files, format: format, create: create, write: write, projection_fields: projection_fields,
                        jagged_rows: jagged_rows, quoted_newlines: quoted_newlines, encoding: encoding,
                        delimiter: delimiter, ignore_unknown: ignore_unknown, max_bad_records: max_bad_records,
                        quote: quote, skip_leading: skip_leading, autodetect: autodetect,
                        null_marker: null_marker, &block

  job.wait_until_done!
  ensure_job_succeeded! job
  true
end

#load_job(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil) {|load_job| ... } ⇒ Google::Cloud::Bigquery::LoadJob

Loads data into the table. You can pass a google-cloud storage file path or a google-cloud storage file instance. Or, you can upload a file directly. See Loading Data with a POST Request.

The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

load_job = table.load_job "gs://my-bucket/file-name.csv"

Pass a google-cloud-storage File instance:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
load_job = table.load_job file

Pass a list of google-cloud-storage files:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
load_job = table.load_job [file, "gs://my-bucket/file-name2.csv"]

Upload a file directly:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

file = File.open "my_data.csv"
load_job = table.load_job file

Parameters:

  • files (File, Google::Cloud::Storage::File, String, URI, Array<Google::Cloud::Storage::File, String, URI>)

    A file or the URI of a Google Cloud Storage file, or an Array of those, containing data to load into the table.

  • format (String) (defaults to: nil)

    The format of the data to load. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro
    • orc - ORC
    • parquet - Parquet
    • datastore_backup - Cloud Datastore backup

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the table. The default value is append.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the table already contains data.
  • projection_fields (Array<String>) (defaults to: nil)

    If the format option is set to datastore_backup, indicates which entity properties to load from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If not set, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned.

  • jagged_rows (Boolean) (defaults to: nil)

    Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

  • quoted_newlines (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

  • autodetect (Boolean) (defaults to: nil)

    Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources. The default value is false.

  • encoding (String) (defaults to: nil)

    The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8.

  • delimiter (String) (defaults to: nil)

    Specifies the separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. Default is ,.

  • ignore_unknown (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

    The format property determines what BigQuery treats as an extra value:

    • CSV: Trailing columns
    • JSON: Named values that don't match any column names
  • max_bad_records (Integer) (defaults to: nil)

    The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

  • null_marker (String) (defaults to: nil)

    Specifies a string that represents a null value in a CSV file. For example, if you specify \N, BigQuery interprets \N as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

  • quote (String) (defaults to: nil)

    The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ". If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

  • skip_leading (Integer) (defaults to: nil)

    The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.

  • job_id (String) (defaults to: nil)

    A user-defined ID for the load job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters. If job_id is provided, then prefix will not be used.

    See Generating a job ID.

  • prefix (String) (defaults to: nil)

    A string, usually human-readable, that will be prepended to a generated value to produce a unique job ID. For example, the prefix daily_import_job_ can be given to generate a job ID such as daily_import_job_12vEDtMQ0mbp1Mo5Z7mzAFQJZazh. The prefix must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length of the entire ID is 1,024 characters. If job_id is provided, then prefix will not be used.

  • labels (Hash) (defaults to: nil)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.
  • dryrun (Boolean) (defaults to: nil)

    If set, don't actually run this job. However, behavior is undefined for non-query jobs and may result in an error. Deprecated.

Yields:

  • (load_job)

    a block for setting the load job

Yield Parameters:

  • load_job (LoadJob)

    the load job object to be updated

Returns:

  • (Google::Cloud::Bigquery::LoadJob)


# File 'lib/google/cloud/bigquery/table.rb', line 1994

def load_job files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil,
             quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil,
             quote: nil, skip_leading: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil,
             null_marker: nil, dryrun: nil
  ensure_service!

  updater = load_job_updater format: format, create: create, write: write, projection_fields: projection_fields,
                             jagged_rows: jagged_rows, quoted_newlines: quoted_newlines, encoding: encoding,
                             delimiter: delimiter, ignore_unknown: ignore_unknown,
                             max_bad_records: max_bad_records, quote: quote, skip_leading: skip_leading,
                             dryrun: dryrun, job_id: job_id, prefix: prefix, schema: schema, labels: labels,
                             autodetect: autodetect, null_marker: null_marker

  yield updater if block_given?

  job_gapi = updater.to_gapi

  return load_local files, job_gapi if local_file? files
  load_storage files, job_gapi
end

#location ⇒ String?

The geographic location where the table should reside. Possible values include EU and US. The default value is US.

Returns:

  • (String, nil)

    The location code.



# File 'lib/google/cloud/bigquery/table.rb', line 781

def location
  return nil if reference?
  ensure_full_data!
  @gapi.location
end

#modified_at ⇒ Time?

The time when this table was last modified.

Returns:

  • (Time, nil)

    The last modified time, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 722

def modified_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.last_modified_time
end

#name ⇒ String?

The name of the table.

Returns:

  • (String, nil)

    The friendly name, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 567

def name
  return nil if reference?
  @gapi.friendly_name
end

#name=(new_name) ⇒ Object

Updates the name of the table.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

Parameters:

  • new_name (String)

    The new friendly name.



# File 'lib/google/cloud/bigquery/table.rb', line 583

def name= new_name
  reload! unless resource_full?
  @gapi.update! friendly_name: new_name
  patch_gapi! :friendly_name
end
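
A short sketch of renaming an existing table (the names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.name = "My Table"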

#param_typesHash

The types of the fields in the table, obtained from its schema. Types use the same format as the optional query parameter types.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.param_types

Returns:

  • (Hash)

    A hash with field names as keys, and types as values.



# File 'lib/google/cloud/bigquery/table.rb', line 985

def param_types
  return nil if reference?
  schema.param_types
end

#policyPolicy

Gets the Cloud IAM access control policy for the table. The latest policy will be read from the service. See also #update_policy.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

policy = table.policy

policy.frozen? #=> true
binding_owner = policy.bindings.find { |b| b.role == "roles/owner" }
binding_owner.role #=> "roles/owner"
binding_owner.members #=> ["user:owner@example.com"]
binding_owner.frozen? #=> true
binding_owner.members.frozen? #=> true

Returns:

  • (Policy)

    The frozen policy for the table.

Raises:

  • (ArgumentError)

See Also:



# File 'lib/google/cloud/bigquery/table.rb', line 1303

def policy
  raise ArgumentError, "Block argument not supported: Use #update_policy instead." if block_given?
  ensure_service!
  gapi = service.get_table_policy dataset_id, table_id
  Policy.from_gapi(gapi).freeze
end

#project_idString

The ID of the Project containing this table.

Returns:

  • (String)

    The project ID.



# File 'lib/google/cloud/bigquery/table.rb', line 143

def project_id
  return reference.project_id if reference?
  @gapi.table_reference.project_id
end

#queryString

The query that executes each time the view is loaded.

Returns:

  • (String)

    The query that defines the view.



# File 'lib/google/cloud/bigquery/table.rb', line 1147

def query
  @gapi.view&.query
end

#query=(new_query) ⇒ Object

Updates the query that executes each time the view is loaded.

This sets the query using standard SQL. To specify legacy SQL, or to use user-defined function resources, use #set_query instead.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_view"

view.query = "SELECT first_name FROM " \
               "`my_project.my_dataset.my_table`"

Parameters:

  • new_query (String)

    The query that defines the view.

See Also:



# File 'lib/google/cloud/bigquery/table.rb', line 1174

def query= new_query
  set_query new_query
end

#query_id(standard_sql: nil, legacy_sql: nil) ⇒ String

The value returned by #id, wrapped in backticks (Standard SQL) or square brackets (Legacy SQL) to accommodate project IDs containing dashes. Useful in queries.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

data = bigquery.query "SELECT first_name FROM #{table.query_id}"

Parameters:

  • standard_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's standard SQL dialect. Optional. The default value is true.

  • legacy_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's legacy SQL dialect. Optional. The default value is false.

Returns:

  • (String)

    The appropriate table ID for use in queries, depending on SQL type.



# File 'lib/google/cloud/bigquery/table.rb', line 551

def query_id standard_sql: nil, legacy_sql: nil
  if Convert.resolve_legacy_sql standard_sql, legacy_sql
    "[#{project_id}:#{dataset_id}.#{table_id}]"
  else
    "`#{project_id}.#{dataset_id}.#{table_id}`"
  end
end
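
For comparison, a sketch of the two output styles (the project, dataset, and table IDs are placeholders):

table.query_id                   #=> "`my_project.my_dataset.my_table`"
table.query_id legacy_sql: true  #=> "[my_project:my_dataset.my_table]"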

#query_legacy_sql?Boolean

Checks whether the view's query uses legacy SQL.

Returns:

  • (Boolean)

    true when legacy SQL is used, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 1241

def query_legacy_sql?
  val = @gapi.view.use_legacy_sql
  return true if val.nil?
  val
end

#query_standard_sql?Boolean

Checks whether the view's query uses standard SQL.

Returns:

  • (Boolean)

    true when standard SQL is used, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 1254

def query_standard_sql?
  !query_legacy_sql?
end

#query_udfsArray<String>

The user-defined function resources used in the view's query. May be either a code resource to load from a Google Cloud Storage URI (gs://bucket/path), or an inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code. See User-Defined Functions.

Returns:

  • (Array<String>)

    An array containing Google Cloud Storage URIs and/or inline source code.



# File 'lib/google/cloud/bigquery/table.rb', line 1272

def query_udfs
  udfs_gapi = @gapi.view.user_defined_function_resources
  return [] if udfs_gapi.nil?
  Array(udfs_gapi).map { |udf| udf.inline_code || udf.resource_uri }
end
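
A reading sketch, assuming the view was configured with a UDF loaded from a (hypothetical) Cloud Storage URI:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_view"

view.query_udfs #=> ["gs://my-bucket/my-udf.js"]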

#range_partitioning?Boolean?

Checks if the table is range partitioned. See Creating and using integer range partitioned tables.

Returns:

  • (Boolean, nil)

    true when the table is range partitioned, or false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 168

def range_partitioning?
  return nil if reference?
  !@gapi.range_partitioning.nil?
end

#range_partitioning_endInteger?

The end of range partitioning, exclusive. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The end of range partitioning, exclusive, or nil if not range partitioned or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 229

def range_partitioning_end
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.range.end if range_partitioning?
end

#range_partitioning_fieldString?

The field on which the table is range partitioned, if any. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64. See Creating and using integer range partitioned tables.

Returns:

  • (String, nil)

    The range partition field, or nil if not range partitioned or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 183

def range_partitioning_field
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.field if range_partitioning?
end

#range_partitioning_intervalInteger?

The width of each interval. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The width of each interval, for data in range partitions, or nil if not range partitioned or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 213

def range_partitioning_interval
  return nil if reference?
  ensure_full_data!
  return nil unless range_partitioning?
  @gapi.range_partitioning.range.interval
end

#range_partitioning_startInteger?

The start of range partitioning, inclusive. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The start of range partitioning, inclusive, or nil if not range partitioned or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 198

def range_partitioning_start
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.range.start if range_partitioning?
end
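
The getters above are read-only; range partitioning can only be configured when the table is created. A sketch of creating a range partitioned table and reading the values back, assuming the range partitioning setters on the table updater (the schema, field, and bounds are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

table = dataset.create_table "my_table" do |t|
  t.schema do |schema|
    schema.integer "customer_id", mode: :required
  end
  t.range_partitioning_field    = "customer_id"
  t.range_partitioning_start    = 0
  t.range_partitioning_interval = 10
  t.range_partitioning_end      = 100
end

table.range_partitioning?      # true
table.range_partitioning_start # 0
table.range_partitioning_end   # 100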

#reference?Boolean

Whether the table was created without retrieving the resource representation from the BigQuery service.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true

table.reference? # true
table.reload!
table.reference? # false

Returns:

  • (Boolean)

    true when the table is just a local reference object, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 2427

def reference?
  @gapi.nil?
end

#reload!Google::Cloud::Bigquery::Table Also known as: refresh!

Reloads the table with current data from the BigQuery service.

Examples:

Skip retrieving the table from the service, then load it:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true

table.reload!

Returns:

  • (Google::Cloud::Bigquery::Table)

    The reloaded table.

# File 'lib/google/cloud/bigquery/table.rb', line 2369

def reload!
  ensure_service!
  @gapi = service.get_table dataset_id, table_id
  @reference = nil
  @exists = nil
  self
end

#require_partition_filterBoolean?

Whether queries over this table must specify a partition filter that can be used for partition elimination. See Partitioned Tables.

Returns:

  • (Boolean, nil)

    true when a partition filter will be required, false otherwise, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 426

def require_partition_filter
  return nil if reference?
  ensure_full_data!
  @gapi.require_partition_filter
end

#require_partition_filter=(new_require) ⇒ Object

Sets whether queries over this table require a partition filter. See Partitioned Tables.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.require_partition_filter = true
end

Parameters:

  • new_require (Boolean)

    Whether queries over this table require a partition filter.



# File 'lib/google/cloud/bigquery/table.rb', line 455

def require_partition_filter= new_require
  reload! unless resource_full?
  @gapi.require_partition_filter = new_require
  patch_gapi! :require_partition_filter
end

#resource?Boolean

Whether the table was created with a resource representation from the BigQuery service.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true

table.resource? # false
table.reload!
table.resource? # true

Returns:

  • (Boolean)

    true when the table was created with a resource representation, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 2450

def resource?
  !@gapi.nil?
end

#resource_full?Boolean

Whether the table was created with a full resource representation from the BigQuery service.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.resource_full? # true

Returns:

  • (Boolean)

    true when the table was created with a full resource representation, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 2499

def resource_full?
  @gapi.is_a? Google::Apis::BigqueryV2::Table
end

#resource_partial?Boolean

Whether the table was created with a partial resource representation from the BigQuery service by retrieval through Dataset#tables. See Tables: list response for the contents of the partial representation. Accessing any attribute outside of the partial representation will result in loading the full representation.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.tables.first

table.resource_partial? # true
table.description # Loads the full resource.
table.resource_partial? # false

Returns:

  • (Boolean)

    true when the table was created with a partial resource representation, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 2478

def resource_partial?
  @gapi.is_a? Google::Apis::BigqueryV2::TableList::Table
end

#rows_countInteger?

The number of rows in the table.

Returns:

  • (Integer, nil)

    The count of rows in the table, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 674

def rows_count
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.num_rows
  rescue StandardError
    nil
  end
end

#schema(replace: false) {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema?

Returns the table's schema. If the table is not a view (see #view?), this method can also be used to set, replace, or add to the schema by passing a block. See Schema for available methods.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table"

table.schema do |schema|
  schema.string "first_name", mode: :required
  schema.record "cities_lived", mode: :repeated do |nested_schema|
    nested_schema.string "place", mode: :required
    nested_schema.integer "number_of_years", mode: :required
  end
end

Load the schema from a file

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table"
table.schema do |schema|
  schema.load File.open("schema.json")
end

Parameters:

  • replace (Boolean) (defaults to: false)

    Whether to replace the existing schema with the new schema. If true, the fields will replace the existing schema. If false, the fields will be added to the existing schema. When a table already contains data, schema changes must be additive. Thus, the default value is false. When loading from a file this will always replace the schema, no matter what replace is set to. You can update the schema (for example, for a table that already contains data) by providing a schema file that includes the existing schema plus any new fields.

Yields:

  • (schema)

    a block for setting the schema

Yield Parameters:

  • schema (Schema)

    the object accepting the schema

Returns:

  • (Google::Cloud::Bigquery::Schema, nil)

    A frozen schema object, or nil if the object is a reference (see #reference?).

# File 'lib/google/cloud/bigquery/table.rb', line 909

def schema replace: false
  return nil if reference? && !block_given?
  reload! unless resource_full?
  schema_builder = Schema.from_gapi @gapi.schema
  if block_given?
    schema_builder = Schema.from_gapi if replace
    yield schema_builder
    if schema_builder.changed?
      @gapi.schema = schema_builder.to_gapi
      patch_gapi! :schema
    end
  end
  schema_builder.freeze
end

#set_query(query, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Object

Updates the query that executes each time the view is loaded. Allows setting of standard vs. legacy SQL and user-defined function resources.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_view"

view.set_query "SELECT first_name FROM " \
                 "`my_project.my_dataset.my_table`",
               standard_sql: true

Parameters:

  • query (String)

    The query that defines the view.

  • standard_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's standard SQL dialect. Optional. The default value is true.

  • legacy_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's legacy SQL dialect. Optional. The default value is false.

  • udfs (Array<String>, String) (defaults to: nil)

    User-defined function resources used in a legacy SQL query. May be either a code resource to load from a Google Cloud Storage URI (gs://bucket/path), or an inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

    This parameter is used for defining User Defined Function (UDF) resources only when using legacy SQL. Users of standard SQL should leverage either DDL (e.g. CREATE [TEMPORARY] FUNCTION ...) or the Routines API to define UDF resources.

    For additional information on migrating, see: Migrating to standard SQL - Differences in user-defined JavaScript functions

See Also:



# File 'lib/google/cloud/bigquery/table.rb', line 1224

def set_query query, standard_sql: nil, legacy_sql: nil, udfs: nil
  use_legacy_sql = Convert.resolve_legacy_sql standard_sql, legacy_sql
  @gapi.view = Google::Apis::BigqueryV2::ViewDefinition.new(
    query:                           query,
    use_legacy_sql:                  use_legacy_sql,
    user_defined_function_resources: udfs_gapi(udfs)
  )
  patch_gapi! :view
end
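
The existing example above covers standard SQL only. A sketch of configuring a legacy SQL view with UDF resources; the Cloud Storage URI and inline code are hypothetical:

view.set_query "SELECT name FROM [my_project:my_dataset.my_table]",
               legacy_sql: true,
               udfs: ["gs://my-bucket/my-udf.js",   # code resource in Cloud Storage
                      "var greeting = 'hello';"]    # inline code resource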

#table?Boolean?

Checks if the table's type is "TABLE".

Returns:

  • (Boolean, nil)

    true when the type is TABLE, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 737

def table?
  return nil if reference?
  @gapi.type == "TABLE"
end

#table_idString

A unique ID for this table.

Returns:

  • (String)

    The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.



# File 'lib/google/cloud/bigquery/table.rb', line 118

def table_id
  return reference.table_id if reference?
  @gapi.table_reference.table_id
end

#test_iam_permissions(*permissions) ⇒ Array<String>

Tests the specified permissions against the Cloud IAM access control policy.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

permissions = table.test_iam_permissions "bigquery.tables.get",
                                         "bigquery.tables.delete"
permissions.include? "bigquery.tables.get"    #=> true
permissions.include? "bigquery.tables.delete" #=> false

Parameters:

  • permissions (String, Array<String>)

    The set of permissions to check access for. Permissions with wildcards (such as * or bigquery.*) are not allowed.

Returns:

  • (Array<String>)

    The frozen array of permissions held by the caller.

See Also:



# File 'lib/google/cloud/bigquery/table.rb', line 1372

def test_iam_permissions *permissions
  permissions = Array(permissions).flatten
  ensure_service!
  gapi = service.test_table_permissions dataset_id, table_id, permissions
  gapi.permissions.freeze
end

#time_partitioning?Boolean?

Checks if the table is time partitioned. See Partitioned Tables.

Returns:

  • (Boolean, nil)

    true when the table is time partitioned, or false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 245

def time_partitioning?
  return nil if reference?
  !@gapi.time_partitioning.nil?
end

#time_partitioning_expirationInteger?

The expiration for the time partitions, if any, in seconds. See Partitioned Tables.

Returns:

  • (Integer, nil)

    The expiration time, in seconds, for data in time partitions, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 370

def time_partitioning_expiration
  return nil if reference?
  ensure_full_data!
  return nil unless time_partitioning?
  return nil if @gapi.time_partitioning.expiration_ms.nil?
  @gapi.time_partitioning.expiration_ms / 1_000
end
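
A reading sketch for the time partitioning getters, assuming a table configured as in the #time_partitioning_expiration= example that follows (the values shown are illustrative):

table = dataset.table "my_table"

table.time_partitioning?           # true
table.time_partitioning_type       # "DAY"
table.time_partitioning_field      # "dob"
table.time_partitioning_expiration # 86_400 (one day, in seconds)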

#time_partitioning_expiration=(expiration) ⇒ Object

Sets the time partition expiration for the table. See Partitioned Tables. The table must also be time partitioned.

See #time_partitioning_type=.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.schema do |schema|
    schema.timestamp "dob", mode: :required
  end
  t.time_partitioning_type = "DAY"
  t.time_partitioning_field = "dob"
  t.time_partitioning_expiration = 86_400
end

Parameters:

  • expiration (Integer)

    An expiration time, in seconds, for data in time partitions.



# File 'lib/google/cloud/bigquery/table.rb', line 408

def time_partitioning_expiration= expiration
  reload! unless resource_full?
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.expiration_ms = expiration * 1000
  patch_gapi! :time_partitioning
end

#time_partitioning_fieldString?

The field on which the table is time partitioned, if any. If not set, the destination table is time partitioned by pseudo column _PARTITIONTIME; if set, the table is time partitioned by this field. See Partitioned Tables.

Returns:

  • (String, nil)

    The time partition field, if a field was configured. nil if not time partitioned, not set (time partitioned by pseudo column '_PARTITIONTIME') or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 315

def time_partitioning_field
  return nil if reference?
  ensure_full_data!
  @gapi.time_partitioning.field if time_partitioning?
end

#time_partitioning_field=(field) ⇒ Object

Sets the field on which to time partition the table. If not set, the destination table is time partitioned by pseudo column _PARTITIONTIME; if set, the table is time partitioned by this field. See Partitioned Tables. The table must also be time partitioned.

See #time_partitioning_type=.

You can only set the time partitioning field while creating a table as in the example below. BigQuery does not allow you to change time partitioning on an existing table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.schema do |schema|
    schema.timestamp "dob", mode: :required
  end
  t.time_partitioning_type  = "DAY"
  t.time_partitioning_field = "dob"
end

Parameters:

  • field (String)

    The time partition field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.



# File 'lib/google/cloud/bigquery/table.rb', line 353

def time_partitioning_field= field
  reload! unless resource_full?
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.field = field
  patch_gapi! :time_partitioning
end

#time_partitioning_typeString?

The period for which the table is time partitioned, if any. See Partitioned Tables.

Returns:

  • (String, nil)

    The time partition type. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively; or nil if not set or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 261

def time_partitioning_type
  return nil if reference?
  ensure_full_data!
  @gapi.time_partitioning.type if time_partitioning?
end

#time_partitioning_type=(type) ⇒ Object

Sets the time partitioning type for the table. See Partitioned Tables. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.

You can only set time partitioning when creating a table as in the example below. BigQuery does not allow you to change time partitioning on an existing table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.schema do |schema|
    schema.timestamp "dob", mode: :required
  end
  t.time_partitioning_type  = "DAY"
  t.time_partitioning_field = "dob"
end

Parameters:

  • type (String)

    The time partition type. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.



# File 'lib/google/cloud/bigquery/table.rb', line 296

def time_partitioning_type= type
  reload! unless resource_full?
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.type = type
  patch_gapi! :time_partitioning
end

#update_policy {|policy| ... } ⇒ Policy

Updates the Cloud IAM access control policy for the table. The latest policy will be read from the service. See also #policy.

Examples:

Update the policy by passing a block.

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.update_policy do |p|
  p.grant role: "roles/viewer", members: "user:viewer@example.com"
  p.revoke role: "roles/editor", members: "user:editor@example.com"
  p.revoke role: "roles/owner"
end # 2 API calls

Yields:

  • (policy)

    A block for updating the policy. The latest policy will be read from the service and passed to the block. After the block completes, the modified policy will be written to the service.

Yield Parameters:

  • policy (Policy)

    The mutable Policy for the table.

Returns:

  • (Policy)

    The updated and frozen policy for the table.

Raises:

  • (ArgumentError)

See Also:



# File 'lib/google/cloud/bigquery/table.rb', line 1336

def update_policy
  raise ArgumentError, "A block updating the policy must be provided" unless block_given?
  ensure_service!
  gapi = service.get_table_policy dataset_id, table_id
  policy = Policy.from_gapi gapi
  yield policy
  # TODO: Check for changes before calling RPC
  gapi = service.set_table_policy dataset_id, table_id, policy.to_gapi
  Policy.from_gapi(gapi).freeze
end

#view?Boolean?

Checks if the table's type is "VIEW", indicating that the table represents a BigQuery view. See Dataset#create_view.

Returns:

  • (Boolean, nil)

    true when the type is VIEW, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 752

def view?
  return nil if reference?
  @gapi.type == "VIEW"
end
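
A short sketch distinguishing a table from a view (the dataset and table names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

dataset.table("my_table").view? # false
dataset.table("my_view").view?  # true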