Class: Google::Cloud::Bigquery::Table

Inherits:
Object
Defined in:
lib/google/cloud/bigquery/table.rb,
lib/google/cloud/bigquery/table/list.rb,
lib/google/cloud/bigquery/table/async_inserter.rb

Overview

Table

A named resource representing a BigQuery table that holds zero or more records. Every table is defined by a schema that may contain nested and repeated fields.

The Table class can also represent a logical view, which is a virtual table defined by a SQL query (see #view? and Dataset#create_view); or a materialized view, which is a precomputed view that periodically caches results of a query for increased performance and efficiency (see #materialized_view? and Dataset#create_materialized_view).

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

table = dataset.create_table "my_table" do |schema|
  schema.string "first_name", mode: :required
  schema.record "cities_lived", mode: :repeated do |nested_schema|
    nested_schema.string "place", mode: :required
    nested_schema.integer "number_of_years", mode: :required
  end
end

row = {
  "first_name" => "Alice",
  "cities_lived" => [
    {
      "place" => "Seattle",
      "number_of_years" => 5
    },
    {
      "place" => "Stockholm",
      "number_of_years" => 6
    }
  ]
}
table.insert row

Creating a logical view:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.create_view "my_view",
         "SELECT name, age FROM `my_project.my_dataset.my_table`"
view.view? # true

Creating a materialized view:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.create_materialized_view "my_materialized_view",
                                        "SELECT name, age FROM `my_project.my_dataset.my_table`"
view.materialized_view? # true

Direct Known Subclasses

Updater

Defined Under Namespace

Classes: AsyncInserter, List, Updater


Instance Method Details

#api_url ⇒ String?

A URL that can be used to access the table using the REST API.

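Examples:

A minimal sketch of retrieving the REST URL (assuming a "my_dataset" dataset and "my_table" table exist):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.api_url #=> REST URL for the table resource
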
Returns:

  • (String, nil)

    A REST URL for the resource, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 718

def api_url
  return nil if reference?
  ensure_full_data!
  @gapi.self_link
end

#buffer_bytes ⇒ Integer?

A lower-bound estimate of the number of bytes currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.

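Examples:

A sketch of checking the buffer estimate (assuming rows have recently been streamed to "my_table"):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

bytes = table.buffer_bytes
puts "#{bytes} bytes in streaming buffer" if bytes
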
Returns:

  • (Integer, nil)

    The estimated number of bytes in the buffer, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1259

def buffer_bytes
  return nil if reference?
  ensure_full_data!
  @gapi.streaming_buffer&.estimated_bytes
end

#buffer_oldest_at ⇒ Time?

The time of the oldest entry currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.

Returns:

  • (Time, nil)

    The oldest entry time, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1293

def buffer_oldest_at
  return nil if reference?
  ensure_full_data!
  return nil unless @gapi.streaming_buffer
  oldest_entry_time = @gapi.streaming_buffer.oldest_entry_time
  Convert.millis_to_time oldest_entry_time
end

#buffer_rows ⇒ Integer?

A lower-bound estimate of the number of rows currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.

Returns:

  • (Integer, nil)

    The estimated number of rows in the buffer, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1277

def buffer_rows
  return nil if reference?
  ensure_full_data!
  @gapi.streaming_buffer&.estimated_rows
end

#bytes_count ⇒ Integer?

The number of bytes in the table.

Returns:

  • (Integer, nil)

    The count of bytes in the table, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 763

def bytes_count
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.num_bytes
  rescue StandardError
    nil
  end
end

#clone(destination_table) {|job| ... } ⇒ Boolean

Clones the data from the table to another table using a synchronous method that blocks for a response. The source and destination tables have the same table type, but storage is billed only for the clone's unique data. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.

The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

table.clone destination_table

Passing a string identifier for the destination table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.clone "other-project:other_dataset.other_table"

Parameters:

  • destination_table (Table, String)

    The destination for the copied data. This can also be a string identifier as specified by the Standard SQL Query Reference (project-name.dataset_id.table_id) or the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets.

Yields:

  • (job)

    a job configuration object

Returns:

  • (Boolean)

    Returns true if the copy operation succeeded.



# File 'lib/google/cloud/bigquery/table.rb', line 1924

def clone destination_table, &block
  copy_job_with_operation_type destination_table,
                               operation_type: OperationType::CLONE,
                               &block
end

#clone? ⇒ Boolean?

Checks if the table's type is CLONE, indicating that the table represents a BigQuery table clone.

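Examples:

A sketch of checking a table created with #clone (the "my_clone_table" name is a placeholder):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_clone_table"

table.clone? #=> true
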
Returns:

  • (Boolean, nil)

    true when the type is CLONE, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).

# File 'lib/google/cloud/bigquery/table.rb', line 895

def clone?
  return nil if reference?
  !@gapi.clone_definition.nil?
end

#clone_definition ⇒ Google::Apis::BigqueryV2::CloneDefinition?

Information about the base table and the clone time of the table clone.

Returns:

  • (Google::Apis::BigqueryV2::CloneDefinition, nil)

    The clone definition of the table clone, or nil if the table is not a clone or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 195

def clone_definition
  return nil if reference?
  @gapi.clone_definition
end

#clustering? ⇒ Boolean?

Checks if the table is clustered. See #clustering_fields and #clustering_fields=.

Returns:

  • (Boolean, nil)

    true when the table is clustered, or false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).

# File 'lib/google/cloud/bigquery/table.rb', line 531

def clustering?
  return nil if reference?
  !@gapi.clustering.nil?
end

#clustering_fields ⇒ Array<String>?

One or more fields on which data should be clustered. When clustering fields are specified along with time partitioning, data in the table is first partitioned and subsequently clustered. The order of the returned fields determines the sort order of the data.

BigQuery supports clustering for both partitioned and non-partitioned tables.

See Google::Cloud::Bigquery::Table::Updater#clustering_fields=, #clustering_fields= and #clustering?.

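Examples:

A sketch of reading the clustering fields (assuming "my_table" is clustered by last_name and first_name):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.clustering_fields #=> ["last_name", "first_name"]
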
Returns:

  • (Array<String>, nil)

    The clustering fields, or nil if the table is not clustered or if the table is a reference (see #reference?).

# File 'lib/google/cloud/bigquery/table.rb', line 559

def clustering_fields
  return nil if reference?
  ensure_full_data!
  @gapi.clustering.fields if clustering?
end

#clustering_fields=(fields) ⇒ Object

Updates the list of fields on which data should be clustered.

Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.

BigQuery supports clustering for both partitioned and non-partitioned tables.

See Google::Cloud::Bigquery::Table::Updater#clustering_fields=, #clustering_fields and #clustering?.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.clustering_fields = ["last_name", "first_name"]

Parameters:

  • fields (Array<String>, nil)

    The clustering fields, or nil to remove the clustering configuration. Only top-level, non-repeated, simple-type fields are supported.

# File 'lib/google/cloud/bigquery/table.rb', line 601

def clustering_fields= fields
  reload! unless resource_full?
  if fields
    @gapi.clustering ||= Google::Apis::BigqueryV2::Clustering.new
    @gapi.clustering.fields = fields
  else
    @gapi.clustering = nil
  end
  patch_gapi! :clustering
end

#copy(destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean

Copies the data from the table to another table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.

The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

table.copy destination_table

Passing a string identifier for the destination table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.copy "other-project:other_dataset.other_table"

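Specifying create and write dispositions (a sketch; see the create and write parameters below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

table.copy destination_table, create: "needed", write: "truncate"
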
Parameters:

  • destination_table (Table, String)

    The destination for the copied data. This can also be a string identifier as specified by the Standard SQL Query Reference (project-name.dataset_id.table_id) or the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets.

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the destination table. The default value is empty.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the destination table already contains data.

Yields:

  • (job)

    a job configuration object

Returns:

  • (Boolean)

    Returns true if the copy operation succeeded.



# File 'lib/google/cloud/bigquery/table.rb', line 1866

def copy destination_table, create: nil, write: nil, &block
  copy_job_with_operation_type destination_table,
                               create: create,
                               write: write,
                               operation_type: OperationType::COPY,
                               &block
end

#copy_job(destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil, operation_type: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob

Copies the data from the table to another table using an asynchronous method. In this method, a CopyJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #copy.

The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

copy_job = table.copy_job destination_table

Passing a string identifier for the destination table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

copy_job = table.copy_job "other-project:other_dataset.other_table"

copy_job.wait_until_done!
copy_job.done? #=> true

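Setting the job location in a block (a sketch using CopyJob::Updater#location=):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

copy_job = table.copy_job destination_table do |job|
  job.location = "EU"
end
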
Parameters:

  • destination_table (Table, String)

    The destination for the copied data. This can also be a string identifier as specified by the Standard SQL Query Reference (project-name.dataset_id.table_id) or the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets.

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the destination table. The default value is empty.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the destination table already contains data.
  • job_id (String) (defaults to: nil)

    A user-defined ID for the copy job. The ID must contain only letters ([A-Za-z]), numbers ([0-9]), underscores (_), or dashes (-). The maximum length is 1,024 characters. If job_id is provided, then prefix will not be used.

    See Generating a job ID.

  • prefix (String) (defaults to: nil)

    A string, usually human-readable, that will be prepended to a generated value to produce a unique job ID. For example, the prefix daily_import_job_ can be given to generate a job ID such as daily_import_job_12vEDtMQ0mbp1Mo5Z7mzAFQJZazh. The prefix must contain only letters ([A-Za-z]), numbers ([0-9]), underscores (_), or dashes (-). The maximum length of the entire ID is 1,024 characters. If job_id is provided, then prefix will not be used.

  • labels (Hash) (defaults to: nil)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.
  • dryrun (Boolean) (defaults to: nil)

    If set, don't actually run this job. However, behavior is undefined for non-query jobs and may result in an error. Deprecated.

Yields:

  • (job)

    a job configuration object

Returns:

  • (Google::Cloud::Bigquery::CopyJob)

# File 'lib/google/cloud/bigquery/table.rb', line 1777

def copy_job destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil,
             operation_type: nil
  ensure_service!
  options = { create: create,
              write: write,
              dryrun: dryrun,
              labels: labels,
              job_id: job_id,
              prefix: prefix,
              operation_type: operation_type }
  updater = CopyJob::Updater.from_options(
    service,
    table_ref,
    Service.get_table_ref(destination_table, default_ref: table_ref),
    options
  )
  updater.location = location if location # may be table reference

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.copy_table job_gapi
  Job.from_gapi gapi, service
end

#created_at ⇒ Time?

The time when this table was created.

Returns:

  • (Time, nil)

    The creation time, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 799

def created_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.creation_time
end

#data(token: nil, max: nil, start: nil) ⇒ Google::Cloud::Bigquery::Data

Retrieves data from the table.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the data retrieval.

Examples:

Paginate rows of data: (See Data#next)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

data = table.data

# Iterate over the first page of results
data.each do |row|
  puts row[:name]
end
# Retrieve the next page of results
data = data.next if data.next?

Retrieve all rows of data: (See Data#all)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

data = table.data

data.all do |row|
  puts row[:name]
end

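Requesting a page size and starting row (a sketch; see the max and start parameters below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

data = table.data max: 100, start: 200
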
Parameters:

  • token (String) (defaults to: nil)

    Page token, returned by a previous call, identifying the result set.

  • max (Integer) (defaults to: nil)

    Maximum number of results to return.

  • start (Integer) (defaults to: nil)

    Zero-based index of the starting row to read.

Returns:

  • (Google::Cloud::Bigquery::Data)

# File 'lib/google/cloud/bigquery/table.rb', line 1667

def data token: nil, max: nil, start: nil
  ensure_service!
  reload! unless resource_full?
  data_json = service.list_tabledata dataset_id, table_id, token: token, max: max, start: start
  Data.from_gapi_json data_json, gapi, nil, service
end

#dataset_id ⇒ String

The ID of the Dataset containing this table.

Returns:

  • (String)

    The ID must contain only letters ([A-Za-z]), numbers ([0-9]), or underscores (_). The maximum length is 1,024 characters.



# File 'lib/google/cloud/bigquery/table.rb', line 144

def dataset_id
  return reference.dataset_id if reference?
  @gapi.table_reference.dataset_id
end

#delete ⇒ Boolean

Permanently deletes the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.delete

Returns:

  • (Boolean)

    Returns true if the table was deleted.



# File 'lib/google/cloud/bigquery/table.rb', line 2825

def delete
  ensure_service!
  service.delete_table dataset_id, table_id
  # Set flag for #exists?
  @exists = false
  true
end

#description ⇒ String?

A user-friendly description of the table.

Returns:

  • (String, nil)

    The description, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 732

def description
  return nil if reference?
  ensure_full_data!
  @gapi.description
end

#description=(new_description) ⇒ Object

Updates the user-friendly description of the table.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

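Examples:

A minimal sketch of updating the description:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.description = "Customer records, refreshed nightly"
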
Parameters:

  • new_description (String)

    The new user-friendly description.



# File 'lib/google/cloud/bigquery/table.rb', line 749

def description= new_description
  reload! unless resource_full?
  @gapi.update! description: new_description
  patch_gapi! :description
end

#enable_refresh=(new_enable_refresh) ⇒ Object

Sets whether automatic refresh of the materialized view is enabled. When true, the materialized view is updated when the base table is updated. See #materialized_view?.

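Examples:

A sketch of disabling automatic refresh (assuming "my_materialized_view" is a materialized view):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_materialized_view"

view.enable_refresh = false
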
Parameters:

  • new_enable_refresh (Boolean)

    true when automatic refresh is enabled, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 1473

def enable_refresh= new_enable_refresh
  @gapi.materialized_view = Google::Apis::BigqueryV2::MaterializedViewDefinition.new(
    enable_refresh: new_enable_refresh
  )
  patch_gapi! :materialized_view
end

#enable_refresh? ⇒ Boolean?

Whether automatic refresh of the materialized view is enabled. When true, the materialized view is updated when the base table is updated. The default value is true. See #materialized_view?.

Returns:

  • (Boolean, nil)

    true when automatic refresh is enabled, false otherwise; or nil if not a materialized view.



# File 'lib/google/cloud/bigquery/table.rb', line 1458

def enable_refresh?
  return nil unless @gapi.materialized_view
  val = @gapi.materialized_view.enable_refresh
  return true if val.nil?
  val
end

#encryption ⇒ EncryptionConfiguration?

The EncryptionConfiguration object that represents the custom encryption method used to protect the table. If not set, Dataset#default_encryption is used.

Present only if the table is using custom encryption.

Returns:

  • (EncryptionConfiguration, nil)

    The encryption configuration, or nil if none is set or the object is a reference (see #reference?).

# File 'lib/google/cloud/bigquery/table.rb', line 1165

def encryption
  return nil if reference?
  ensure_full_data!
  return nil if @gapi.encryption_configuration.nil?
  EncryptionConfiguration.from_gapi(@gapi.encryption_configuration).freeze
end

#encryption=(value) ⇒ Object

Set the EncryptionConfiguration object that represents the custom encryption method used to protect the table. If not set, Dataset#default_encryption is used.

Present only if the table is using custom encryption.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

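Examples:

A sketch using a Cloud KMS key (the key name is a placeholder; Project#encryption builds the configuration):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

key_name = "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
encrypt_config = bigquery.encryption kms_key: key_name
table.encryption = encrypt_config
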
Parameters:

  • value (EncryptionConfiguration)

    The new encryption configuration.

# File 'lib/google/cloud/bigquery/table.rb', line 1190

def encryption= value
  reload! unless resource_full?
  @gapi.encryption_configuration = value.to_gapi
  patch_gapi! :encryption_configuration
end

#etag ⇒ String?

The ETag hash of the table.

Returns:

  • (String, nil)

    The ETag hash, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 704

def etag
  return nil if reference?
  ensure_full_data!
  @gapi.etag
end

#exists?(force: false) ⇒ Boolean

Determines whether the table exists in the BigQuery service. The result is cached locally. To refresh state, set force to true.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true
table.exists? # true

Parameters:

  • force (Boolean) (defaults to: false)

    Force the latest resource representation to be retrieved from the BigQuery service when true. Otherwise the return value of this method will be memoized to reduce the number of API calls made to the BigQuery service. The default is false.

Returns:

  • (Boolean)

    true when the table exists in the BigQuery service, false otherwise.



# File 'lib/google/cloud/bigquery/table.rb', line 2881

def exists? force: false
  return gapi_exists? if force
  # If we have a value, return it
  return @exists unless @exists.nil?
  # Always true if we have a gapi object
  return true if resource?
  gapi_exists?
end

#expires_at ⇒ Time?

The time when this table expires. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

Returns:

  • (Time, nil)

    The expiration time, or nil if not present or the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 815

def expires_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.expiration_time
end

#external ⇒ External::DataSource?

The External::DataSource (or subclass) object that represents the external data source that the table represents. Data can be queried from the table, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.

Present only if the table represents an External Data Source. See #external? and External::DataSource.

Returns:

  • (External::DataSource, nil)

    The external data source, or nil if none is set or the object is a reference (see #reference?).

# File 'lib/google/cloud/bigquery/table.rb', line 1213

def external
  return nil if reference?
  ensure_full_data!
  return nil if @gapi.external_data_configuration.nil?
  External.from_gapi(@gapi.external_data_configuration).freeze
end

#external=(external) ⇒ Object

Set the External::DataSource (or subclass) object that represents the external data source that the table represents. Data can be queried from the table, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.

Use only if the table represents an External Data Source. See #external? and External::DataSource.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

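Examples:

A sketch referencing a CSV file in Cloud Storage (the bucket path is a placeholder; Project#external builds the data source):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_external_table"

csv_url = "gs://my-bucket/path/to/data.csv"
csv_source = bigquery.external csv_url do |csv|
  csv.autodetect = true
end
table.external = csv_source
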
Parameters:

  • external (External::DataSource)

    The new external data source.

# File 'lib/google/cloud/bigquery/table.rb', line 1241

def external= external
  reload! unless resource_full?
  @gapi.external_data_configuration = external.to_gapi
  patch_gapi! :external_data_configuration
end

#external? ⇒ Boolean?

Checks if the table's type is EXTERNAL, indicating that the table represents an External Data Source. See #external and External::DataSource.

Returns:

  • (Boolean, nil)

    true when the type is EXTERNAL, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 929

def external?
  return nil if reference?
  @gapi.type == "EXTERNAL"
end

#extract(extract_url, format: nil, compression: nil, delimiter: nil, header: nil) {|job| ... } ⇒ Boolean

Extracts the data from the table to a Google Cloud Storage file using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #extract_job.

The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

Extract to a JSON file:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.extract "gs://my-bucket/file-name.json", format: "json"

Extract to a CSV file, attaching labels to the job:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.extract "gs://my-bucket/file-name.csv" do |extract|
  extract.labels = { "custom-label" => "custom-value" }
end

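Extract to a compressed CSV file (a sketch; see the compression parameter below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.extract "gs://my-bucket/file-name.csv.gz", compression: "GZIP"
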
Parameters:

  • extract_url (Google::Cloud::Storage::File, String, Array<String>)

    The Google Storage file or file URI pattern(s) to which BigQuery should extract the table data.

  • format (String) (defaults to: nil)

    The exported file format. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro

  • compression (String) (defaults to: nil)

    The compression type to use for exported files. Possible values include GZIP and NONE. The default value is NONE.

  • delimiter (String) (defaults to: nil)

    Delimiter to use between fields in the exported data. Default is ,.

  • header (Boolean) (defaults to: nil)

    Whether to print out a header row in the results. Default is true.

Yields:

  • (job)

    a job configuration object

Returns:

  • (Boolean)

    Returns true if the extract operation succeeded.

# File 'lib/google/cloud/bigquery/table.rb', line 2224

def extract extract_url, format: nil, compression: nil, delimiter: nil, header: nil, &block
  job = extract_job extract_url,
                    format:      format,
                    compression: compression,
                    delimiter:   delimiter,
                    header:      header,
                    &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end

#extract_job(extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob

Extracts the data from the table to a Google Cloud Storage file using an asynchronous method. In this method, an ExtractJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #extract.

The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will automatically be set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

extract_job = table.extract_job "gs://my-bucket/file-name.json",
                                format: "json"
extract_job.wait_until_done!
extract_job.done? #=> true

Parameters:

  • extract_url (Google::Cloud::Storage::File, String, Array<String>)

    The Google Storage file or file URI pattern(s) to which BigQuery should extract the table data.

  • format (String) (defaults to: nil)

    The exported file format. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro

  • compression (String) (defaults to: nil)

    The compression type to use for exported files. Possible values include GZIP and NONE. The default value is NONE.

  • delimiter (String) (defaults to: nil)

    Delimiter to use between fields in the exported data. Default is ,.

  • header (Boolean) (defaults to: nil)

    Whether to print out a header row in the results. Default is true.

  • job_id (String) (defaults to: nil)

    A user-defined ID for the extract job. The ID must contain only letters ([A-Za-z]), numbers ([0-9]), underscores (_), or dashes (-). The maximum length is 1,024 characters. If job_id is provided, then prefix will not be used.

    See Generating a job ID.

  • prefix (String) (defaults to: nil)

    A string, usually human-readable, that will be prepended to a generated value to produce a unique job ID. For example, the prefix daily_import_job_ can be given to generate a job ID such as daily_import_job_12vEDtMQ0mbp1Mo5Z7mzAFQJZazh. The prefix must contain only letters ([A-Za-z]), numbers ([0-9]), underscores (_), or dashes (-). The maximum length of the entire ID is 1,024 characters. If job_id is provided, then prefix will not be used.

  • labels (Hash) (defaults to: nil)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.
  • dryrun (Boolean) (defaults to: nil)

    If set, don't actually run this job. However, behavior is undefined for non-query jobs and may result in an error. Deprecated.

Yields:

  • (job)

    a job configuration object

Returns:

  • (Google::Cloud::Bigquery::ExtractJob)



# File 'lib/google/cloud/bigquery/table.rb', line 2148

def extract_job extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil,
                prefix: nil, labels: nil, dryrun: nil
  ensure_service!
  options = { format: format, compression: compression, delimiter: delimiter, header: header, dryrun: dryrun,
              job_id: job_id, prefix: prefix, labels: labels }
  updater = ExtractJob::Updater.from_options service, table_ref, extract_url, options
  updater.location = location if location # may be table reference

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.extract_table job_gapi
  Job.from_gapi gapi, service
end

#fields ⇒ Array<Schema::Field>?

The fields of the table, obtained from its schema.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.fields.each do |field|
  puts field.name
end

Returns:

  • (Array<Schema::Field>, nil)

    An array of fields, or nil if the object is a reference (see #reference?).

# File 'lib/google/cloud/bigquery/table.rb', line 1103

def fields
  return nil if reference?
  schema.fields
end

#headers ⇒ Array<Symbol>?

The names of the columns in the table, obtained from its schema.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.headers.each do |header|
  puts header
end

Returns:

  • (Array<Symbol>, nil)

    An array of column names, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 1126

def headers
  return nil if reference?
  schema.headers
end

#id ⇒ String?

The combined Project ID, Dataset ID, and Table ID for this table, in the format specified by the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets. To use this value in queries see #query_id.

Returns:

  • (String, nil)

    The combined ID, or nil if the object is a reference (see #reference?).



# File 'lib/google/cloud/bigquery/table.rb', line 625

def id
  return nil if reference?
  @gapi.id
end

#insert(rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil) ⇒ Google::Cloud::Bigquery::InsertResponse

Inserts data into the table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.

Simple Ruby types are generally accepted per JSON rules, along with the following support for BigQuery's more complex types:

BigQuery   | Ruby                           | Notes
-----------|--------------------------------|---------------------------------------------------
NUMERIC    | BigDecimal                     | BigDecimal values will be rounded to scale 9.
BIGNUMERIC | String                         | Pass as String to avoid rounding to scale 9.
DATETIME   | DateTime                       | DATETIME does not support time zone.
DATE       | Date                           |
GEOGRAPHY  | String                         | Well-known text (WKT) or GeoJSON.
JSON       | String (stringified JSON)      | String, as JSON does not have a schema to verify.
TIMESTAMP  | Time                           |
TIME       | Google::Cloud::BigQuery::Time  |
BYTES      | File, IO, StringIO, or similar |
ARRAY      | Array                          | Nested arrays and nil values are not supported.
STRUCT     | Hash                           | Hash keys may be strings or symbols.

For GEOGRAPHY data, see Working with BigQuery GIS data.

Because BigQuery's streaming API is designed for high insertion rates, modifications to the underlying table metadata are eventually consistent when interacting with the streaming system. In most cases metadata changes are propagated within minutes, but during this period API responses may reflect the inconsistent state of the table.

For the insert_ids parameter below, the value :skip can be provided to skip the generation of IDs for all rows, or to skip the generation of an ID for a specific row in the array.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
table.insert rows

Avoid retrieving the dataset and table with skip_lookup:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset", skip_lookup: true
table = dataset.table "my_table", skip_lookup: true

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
table.insert rows

Pass BIGNUMERIC value as a string to avoid rounding to scale 9 in the conversion from BigDecimal:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

row = {
  "my_numeric" => BigDecimal("123456798.987654321"),
  "my_bignumeric" => "123456798.98765432100001" # BigDecimal would be rounded, use String instead!
}
table.insert row

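Providing insert IDs for best-effort deduplication (a sketch; the ID values are placeholders, see the insert_ids parameter below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
table.insert rows, insert_ids: ["alice-0001", "bob-0001"]
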
Parameters:

  • rows (Hash, Array<Hash>)

    A hash object or array of hash objects containing the data. Required. BigDecimal values will be rounded to scale 9 to conform with the BigQuery NUMERIC data type. To avoid rounding BIGNUMERIC type values with scale greater than 9, use String instead of BigDecimal.

  • insert_ids (Array<String|Symbol>, Symbol) (defaults to: nil)

    A unique ID for each row. BigQuery uses this property to detect duplicate insertion requests on a best-effort basis. For more information, see data consistency. Optional. If not provided, the client library will assign a UUID to each row before the request is sent.

  • skip_invalid (Boolean) (defaults to: nil)

    Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.

  • ignore_unknown (Boolean) (defaults to: nil)

    Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.

Returns:

  • (Google::Cloud::Bigquery::InsertResponse)

Raises:

  • (ArgumentError)



# File 'lib/google/cloud/bigquery/table.rb', line 2734

def insert rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil
  rows = [rows] if rows.is_a? Hash
  raise ArgumentError, "No rows provided" if rows.empty?

  insert_ids = Array.new(rows.count) { :skip } if insert_ids == :skip
  insert_ids = Array insert_ids
  if insert_ids.count.positive? && insert_ids.count != rows.count
    raise ArgumentError, "insert_ids must be the same size as rows"
  end

  ensure_service!
  gapi = service.insert_tabledata dataset_id,
                                  table_id,
                                  rows,
                                  skip_invalid: skip_invalid,
                                  ignore_unknown: ignore_unknown,
                                  insert_ids: insert_ids
  InsertResponse.from_gapi rows, gapi
end

#insert_async(skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter

Create an asynchronous inserter object used to insert rows in batches.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
inserter = table.insert_async do |result|
  if result.error?
    log_error result.error
  else
    log_insert "inserted #{result.insert_count} rows " \
      "with #{result.error_count} errors"
  end
end

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 22 }
]
inserter.insert rows

inserter.stop.wait!

Parameters:

  • skip_invalid (Boolean) (defaults to: nil)

    Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.

  • ignore_unknown (Boolean) (defaults to: nil)

    Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.

  • max_bytes (Integer) (defaults to: 10_000_000)

    The maximum size of rows to be collected before the batch is published. Default is 10,000,000 (10MB).

  • max_rows (Integer) (defaults to: 500)

    The maximum number of rows to be collected before the batch is published. Default is 500.

  • interval (Numeric) (defaults to: 10)

    The number of seconds to collect rows before the batch is published. Default is 10.

  • threads (Integer) (defaults to: 4)

    The number of threads used to insert batches of rows. Default is 4.

Yields:

  • (response)

    the callback for when a batch of rows is inserted

Returns:

  • (Table::AsyncInserter)


# File 'lib/google/cloud/bigquery/table.rb', line 2801

def insert_async skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10,
                 threads: 4, &block
  ensure_service!

  AsyncInserter.new self, skip_invalid: skip_invalid, ignore_unknown: ignore_unknown, max_bytes: max_bytes,
                          max_rows: max_rows, interval: interval, threads: threads, &block
end

#labels ⇒ Hash<String, String>?

A hash of user-provided labels associated with this table. Labels are used to organize and group tables. See Using Labels.

The returned hash is frozen and changes are not allowed. Use #labels= to replace the entire hash.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

labels = table.labels
labels["department"] #=> "shipping"

Returns:

  • (Hash<String, String>, nil)

    A hash containing key/value pairs.



# File 'lib/google/cloud/bigquery/table.rb', line 970

def labels
  return nil if reference?
  m = @gapi.labels
  m = m.to_h if m.respond_to? :to_h
  m.dup.freeze
end

#labels=(labels) ⇒ Object

Updates the hash of user-provided labels associated with this table. Labels are used to organize and group tables. See Using Labels.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.labels = { "department" => "shipping" }

Parameters:

  • labels (Hash<String, String>)

    A hash containing key/value pairs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.


# File 'lib/google/cloud/bigquery/table.rb', line 1014

def labels= labels
  reload! unless resource_full?
  @gapi.labels = labels
  patch_gapi! :labels
end

#last_refresh_time ⇒ Time?

The time when the materialized view was last modified. See #materialized_view?.

Returns:

  • (Time, nil)

    The time, or nil if not present or not a materialized view.



# File 'lib/google/cloud/bigquery/table.rb', line 1488

def last_refresh_time
  Convert.millis_to_time @gapi.materialized_view&.last_refresh_time
end

#load(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, autodetect: nil, null_marker: nil, session_id: nil, schema: self.schema) {|updater| ... } ⇒ Boolean

Loads data into the table. You can pass a google-cloud storage file path or a google-cloud storage file instance. Or, you can upload a file directly. See Loading Data with a POST Request.

The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

success = table.load "gs://my-bucket/file-name.csv"

Pass a google-cloud-storage File instance:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
success = table.load file

Pass a list of google-cloud-storage files:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
table.load [file, "gs://my-bucket/file-name2.csv"]

Upload a file directly:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

file = File.open "my_data.json"
success = table.load file do |j|
  j.format = "newline_delimited_json"
end

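Overwriting the table's data on load (a sketch; see the write parameter below):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

success = table.load "gs://my-bucket/file-name.csv", write: "truncate"
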
Parameters:

  • files (File, Google::Cloud::Storage::File, String, URI, Array<Google::Cloud::Storage::File, String, URI>)

    A file or the URI of a Google Cloud Storage file, or an Array of those, containing data to load into the table.

  • format (String) (defaults to: nil)

    The exported file format. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro
    • orc - ORC
    • parquet - Parquet
    • datastore_backup - Cloud Datastore backup

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the table. The default value is append.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the table already contains data.
  • projection_fields (Array<String>) (defaults to: nil)

    If the format option is set to datastore_backup, indicates which entity properties to load from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If not set, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned.

  • jagged_rows (Boolean) (defaults to: nil)

    Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

  • quoted_newlines (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

  • autodetect (Boolean) (defaults to: nil)

    Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources. The default value is false.

  • encoding (String) (defaults to: nil)

    The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8.

  • delimiter (String) (defaults to: nil)

    Specifies the separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. Default is ,.

  • ignore_unknown (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

    The format property determines what BigQuery treats as an extra value:

    • CSV: Trailing columns
    • JSON: Named values that don't match any column names
  • max_bad_records (Integer) (defaults to: nil)

    The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

  • null_marker (String) (defaults to: nil)

    Specifies a string that represents a null value in a CSV file. For example, if you specify \N, BigQuery interprets \N as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

  • quote (String) (defaults to: nil)

    The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ". If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

  • skip_leading (Integer) (defaults to: nil)

    The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.

  • session_id (string) (defaults to: nil)

    Session ID in which the load job must run.

Yields:

  • (updater)

    A block for setting the schema of the destination table and other options for the load job. The schema can be omitted if the destination table already exists, or if you're loading data from a Google Cloud Datastore backup.

Returns:

  • (Boolean)

    Returns true if the load job was successful.



# File 'lib/google/cloud/bigquery/table.rb', line 2621

def load files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil,
         quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil,
         quote: nil, skip_leading: nil, autodetect: nil, null_marker: nil, session_id: nil,
         schema: self.schema, &block
  job = load_job files, format: format, create: create, write: write, projection_fields: projection_fields,
                        jagged_rows: jagged_rows, quoted_newlines: quoted_newlines, encoding: encoding,
                        delimiter: delimiter, ignore_unknown: ignore_unknown, max_bad_records: max_bad_records,
                        quote: quote, skip_leading: skip_leading, autodetect: autodetect,
                        null_marker: null_marker, session_id: session_id, schema: schema, &block

  job.wait_until_done!
  ensure_job_succeeded! job
  true
end

#load_job(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil, session_id: nil, schema: self.schema) {|load_job| ... } ⇒ Google::Cloud::Bigquery::LoadJob

Loads data into the table. You can pass a google-cloud storage file path or a google-cloud storage file instance. Or, you can upload a file directly. See Loading Data with a POST Request.

The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

load_job = table.load_job "gs://my-bucket/file-name.csv"

Pass a google-cloud-storage File instance:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
load_job = table.load_job file

Pass a list of google-cloud-storage files:

require "google/cloud/bigquery"
require "google/cloud/storage"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"
file = bucket.file "file-name.csv"
load_job = table.load_job [file, "gs://my-bucket/file-name2.csv"]

Upload a file directly:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

file = File.open "my_data.csv"
load_job = table.load_job file

Parameters:

  • files (File, Google::Cloud::Storage::File, String, URI, Array<Google::Cloud::Storage::File, String, URI>)

    A file or the URI of a Google Cloud Storage file, or an Array of those, containing data to load into the table.

  • format (String) (defaults to: nil)

    The exported file format. The default value is csv.

    The following values are supported:

    • csv - CSV
    • json - Newline-delimited JSON
    • avro - Avro
    • orc - ORC
    • parquet - Parquet
    • datastore_backup - Cloud Datastore backup

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the table. The default value is append.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the table already contains data.
  • projection_fields (Array<String>) (defaults to: nil)

    If the format option is set to datastore_backup, indicates which entity properties to load from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If not set, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned.

  • jagged_rows (Boolean) (defaults to: nil)

    Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

  • quoted_newlines (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

  • autodetect (Boolean) (defaults to: nil)

    Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources. The default value is false.

  • encoding (String) (defaults to: nil)

    The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8.

  • delimiter (String) (defaults to: nil)

    Specifies the separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. Default is ,.

  • ignore_unknown (Boolean) (defaults to: nil)

    Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

    The format property determines what BigQuery treats as an extra value:

    • CSV: Trailing columns
    • JSON: Named values that don't match any column names
  • max_bad_records (Integer) (defaults to: nil)

    The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

  • null_marker (String) (defaults to: nil)

    Specifies a string that represents a null value in a CSV file. For example, if you specify \N, BigQuery interprets \N as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

  • quote (String) (defaults to: nil)

    The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ". If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

  • skip_leading (Integer) (defaults to: nil)

    The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.

  • job_id (String) (defaults to: nil)

    A user-defined ID for the load job. The ID must contain only letters ([A-Za-z]), numbers ([0-9]), underscores (_), or dashes (-). The maximum length is 1,024 characters. If job_id is provided, then prefix will not be used.

    See Generating a job ID.

  • prefix (String) (defaults to: nil)

    A string, usually human-readable, that will be prepended to a generated value to produce a unique job ID. For example, the prefix daily_import_job_ can be given to generate a job ID such as daily_import_job_12vEDtMQ0mbp1Mo5Z7mzAFQJZazh. The prefix must contain only letters ([A-Za-z]), numbers ([0-9]), underscores (_), or dashes (-). The maximum length of the entire ID is 1,024 characters. If job_id is provided, then prefix will not be used.

  • labels (Hash) (defaults to: nil)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.
  • dryrun (Boolean) (defaults to: nil)

    If set, don't actually run this job. However, behavior is undefined for non-query jobs and may result in an error. Deprecated.

  • create_session (Boolean) (defaults to: nil)

    If set to true, a new session is created and the load job runs in the table created within that session. Note: This works only for tables in the _SESSION dataset; otherwise the property is ignored by the backend.

  • session_id (String) (defaults to: nil)

    Session ID in which the load job must run.

Yields:

  • (load_job)

    a block for setting the load job

Yield Parameters:

  • load_job (LoadJob)

    the load job object to be updated

Returns:

  • (Google::Cloud::Bigquery::LoadJob)

    A new load job object.

2433
2434
2435
2436
2437
2438
2439
2440
2441
2442
2443
2444
2445
2446
2447
2448
2449
2450
2451
2452
2453
2454
# File 'lib/google/cloud/bigquery/table.rb', line 2433

def load_job files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil,
             quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil,
             quote: nil, skip_leading: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil,
             null_marker: nil, dryrun: nil, create_session: nil, session_id: nil, schema: self.schema
  ensure_service!

  updater = load_job_updater format: format, create: create, write: write, projection_fields: projection_fields,
                             jagged_rows: jagged_rows, quoted_newlines: quoted_newlines, encoding: encoding,
                             delimiter: delimiter, ignore_unknown: ignore_unknown,
                             max_bad_records: max_bad_records, quote: quote, skip_leading: skip_leading,
                             dryrun: dryrun, job_id: job_id, prefix: prefix, schema: schema, labels: labels,
                             autodetect: autodetect, null_marker: null_marker, create_session: create_session,
                             session_id: session_id


  yield updater if block_given?

  job_gapi = updater.to_gapi

  return load_local files, job_gapi if local_file? files
  load_storage files, job_gapi
end

#location ⇒ String?

The geographic location where the table should reside. Possible values include EU and US. The default value is US.
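
Examples:

A minimal sketch (assumes "my_dataset" and "my_table" exist; the returned value is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.location #=> "US"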

Returns:

  • (String, nil)

    The location code.



942
943
944
945
946
# File 'lib/google/cloud/bigquery/table.rb', line 942

def location
  return nil if reference?
  ensure_full_data!
  @gapi.location
end

#materialized_view? ⇒ Boolean?

Checks if the table's type is MATERIALIZED_VIEW, indicating that the table represents a BigQuery materialized view. See Dataset#create_materialized_view.

Returns:

  • (Boolean, nil)

    true when the type is MATERIALIZED_VIEW, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).

See Also:



913
914
915
916
# File 'lib/google/cloud/bigquery/table.rb', line 913

def materialized_view?
  return nil if reference?
  @gapi.type == "MATERIALIZED_VIEW"
end

#modified_at ⇒ Time?

The time when this table was last modified.
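
Examples:

A minimal sketch (assumes "my_dataset" and "my_table" exist; the returned value is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.modified_at #=> 2023-01-01 00:00:00 UTC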

Returns:

  • (Time, nil)

    The last modified time, or nil if not present or the object is a reference (see #reference?).



829
830
831
832
833
# File 'lib/google/cloud/bigquery/table.rb', line 829

def modified_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.last_modified_time
end

#name ⇒ String?

The name of the table.

Returns:

  • (String, nil)

    The friendly name, or nil if the object is a reference (see #reference?).



674
675
676
677
# File 'lib/google/cloud/bigquery/table.rb', line 674

def name
  return nil if reference?
  @gapi.friendly_name
end

#name=(new_name) ⇒ Object

Updates the name of the table.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
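
Examples:

A minimal sketch (assumes "my_dataset" and "my_table" exist; "My Table" is an illustrative friendly name):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.name = "My Table" # issues a patch request for the friendly name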

Parameters:

  • new_name (String)

    The new friendly name.



690
691
692
693
694
# File 'lib/google/cloud/bigquery/table.rb', line 690

def name= new_name
  reload! unless resource_full?
  @gapi.update! friendly_name: new_name
  patch_gapi! :friendly_name
end

#param_types ⇒ Hash

The types of the fields in the table, obtained from its schema. Types use the same format as the optional query parameter types.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.param_types

Returns:

  • (Hash)

    A hash with field names as keys, and types as values.



1146
1147
1148
1149
# File 'lib/google/cloud/bigquery/table.rb', line 1146

def param_types
  return nil if reference?
  schema.param_types
end

#policy ⇒ Policy

Gets the Cloud IAM access control policy for the table. The latest policy will be read from the service. See also #update_policy.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

policy = table.policy

policy.frozen? #=> true
binding_owner = policy.bindings.find { |b| b.role == "roles/owner" }
binding_owner.role #=> "roles/owner"
binding_owner.members #=> ["user:owner@example.com"]
binding_owner.frozen? #=> true
binding_owner.members.frozen? #=> true

Returns:

  • (Policy)

    The frozen policy for the table.

Raises:

  • (ArgumentError)

See Also:



1545
1546
1547
1548
1549
1550
# File 'lib/google/cloud/bigquery/table.rb', line 1545

def policy
  raise ArgumentError, "Block argument not supported: Use #update_policy instead." if block_given?
  ensure_service!
  gapi = service.get_table_policy dataset_id, table_id
  Policy.from_gapi(gapi).freeze
end

#project_id ⇒ String

The ID of the Project containing this table.

Returns:

  • (String)

    The project ID.



156
157
158
159
# File 'lib/google/cloud/bigquery/table.rb', line 156

def project_id
  return reference.project_id if reference?
  @gapi.table_reference.project_id
end

#query ⇒ String?

The query that defines the view or materialized view. See #view? and #materialized_view?.

Returns:

  • (String, nil)

    The query that defines the view or materialized_view; or nil if not a view or materialized view.



1310
1311
1312
# File 'lib/google/cloud/bigquery/table.rb', line 1310

def query
  view? ? @gapi.view&.query : @gapi.materialized_view&.query
end

#query=(new_query) ⇒ Object

Updates the query that defines the view. (See #view?.) Not supported for materialized views.

This method sets the query using standard SQL. To specify legacy SQL or to use user-defined function resources for a view, use #set_query instead.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_view"

view.query = "SELECT first_name FROM " \
             "`my_project.my_dataset.my_table`"

Parameters:

  • new_query (String)

    The query that defines the view.

See Also:



1338
1339
1340
# File 'lib/google/cloud/bigquery/table.rb', line 1338

def query= new_query
  set_query new_query
end

#query_id(standard_sql: nil, legacy_sql: nil) ⇒ String

The value returned by #id, wrapped in backticks (Standard SQL) or square brackets (Legacy SQL) to accommodate project IDs containing dashes. Useful in queries.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

data = bigquery.query "SELECT first_name FROM #{table.query_id}"

Parameters:

  • standard_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's standard SQL dialect. Optional. The default value is true.

  • legacy_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's legacy SQL dialect. Optional. The default value is false.

Returns:

  • (String)

    The appropriate table ID for use in queries, depending on SQL type.



658
659
660
661
662
663
664
# File 'lib/google/cloud/bigquery/table.rb', line 658

def query_id standard_sql: nil, legacy_sql: nil
  if Convert.resolve_legacy_sql standard_sql, legacy_sql
    "[#{project_id}:#{dataset_id}.#{table_id}]"
  else
    "`#{project_id}.#{dataset_id}.#{table_id}`"
  end
end

#query_legacy_sql? ⇒ Boolean

Checks if the view's query uses legacy SQL. See #view?.
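
Examples:

A minimal sketch (assumes "my_view" is a logical view created with standard SQL, the default):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_view"

view.query_legacy_sql? #=> false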

Returns:

  • (Boolean)

    true when legacy SQL is used, false otherwise; or nil if not a logical view.



1407
1408
1409
1410
1411
1412
# File 'lib/google/cloud/bigquery/table.rb', line 1407

def query_legacy_sql?
  return nil unless @gapi.view
  val = @gapi.view.use_legacy_sql
  return true if val.nil?
  val
end

#query_standard_sql? ⇒ Boolean

Checks if the view's query uses standard SQL. See #view?.

Returns:

  • (Boolean)

    true when standard SQL is used, false otherwise; or nil if not a logical view.



1421
1422
1423
1424
# File 'lib/google/cloud/bigquery/table.rb', line 1421

def query_standard_sql?
  return nil unless @gapi.view
  !query_legacy_sql?
end

#query_udfs ⇒ Array<String>?

The user-defined function resources used in the view's query. May be either a code resource to load from a Google Cloud Storage URI (gs://bucket/path), or an inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code. See User-Defined Functions. See #view?.

Returns:

  • (Array<String>, nil)

    An array containing Google Cloud Storage URIs and/or inline source code, or nil if not a logical view.



1441
1442
1443
1444
1445
1446
# File 'lib/google/cloud/bigquery/table.rb', line 1441

def query_udfs
  return nil unless @gapi.view
  udfs_gapi = @gapi.view.user_defined_function_resources
  return [] if udfs_gapi.nil?
  Array(udfs_gapi).map { |udf| udf.inline_code || udf.resource_uri }
end

#range_partitioning? ⇒ Boolean?

Checks if the table is range partitioned. See Creating and using integer range partitioned tables.
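
Examples:

A minimal sketch (assumes "my_table" was created with an integer range partition on an "id" column from 0 to 100 with interval 10; all values are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.range_partitioning?         #=> true
table.range_partitioning_field    #=> "id"
table.range_partitioning_start    #=> 0
table.range_partitioning_interval #=> 10
table.range_partitioning_end      #=> 100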

Returns:

  • (Boolean, nil)

    true when the table is range partitioned, or false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



220
221
222
223
# File 'lib/google/cloud/bigquery/table.rb', line 220

def range_partitioning?
  return nil if reference?
  !@gapi.range_partitioning.nil?
end

#range_partitioning_end ⇒ Integer?

The end of range partitioning, exclusive. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The end of range partitioning, exclusive, or nil if not range partitioned or the object is a reference (see #reference?).



281
282
283
284
285
# File 'lib/google/cloud/bigquery/table.rb', line 281

def range_partitioning_end
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.range.end if range_partitioning?
end

#range_partitioning_field ⇒ String?

The field on which the table is range partitioned, if any. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64. See Creating and using integer range partitioned tables.

Returns:

  • (String, nil)

    The range partition field, or nil if not range partitioned or the object is a reference (see #reference?).



235
236
237
238
239
# File 'lib/google/cloud/bigquery/table.rb', line 235

def range_partitioning_field
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.field if range_partitioning?
end

#range_partitioning_interval ⇒ Integer?

The width of each interval. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The width of each interval, for data in range partitions, or nil if not range partitioned or the object is a reference (see #reference?).



265
266
267
268
269
270
# File 'lib/google/cloud/bigquery/table.rb', line 265

def range_partitioning_interval
  return nil if reference?
  ensure_full_data!
  return nil unless range_partitioning?
  @gapi.range_partitioning.range.interval
end

#range_partitioning_start ⇒ Integer?

The start of range partitioning, inclusive. See Creating and using integer range partitioned tables.

Returns:

  • (Integer, nil)

    The start of range partitioning, inclusive, or nil if not range partitioned or the object is a reference (see #reference?).



250
251
252
253
254
# File 'lib/google/cloud/bigquery/table.rb', line 250

def range_partitioning_start
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.range.start if range_partitioning?
end

#reference? ⇒ Boolean

Whether the table was created without retrieving the resource representation from the BigQuery service.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true

table.reference? # true
table.reload!
table.reference? # false

Returns:

  • (Boolean)

    true when the table is just a local reference object, false otherwise.



2909
2910
2911
# File 'lib/google/cloud/bigquery/table.rb', line 2909

def reference?
  @gapi.nil?
end

#refresh_interval_ms ⇒ Integer?

The maximum frequency in milliseconds at which the materialized view will be refreshed. See #materialized_view?.

Returns:

  • (Integer, nil)

    The maximum frequency in milliseconds; or nil if not a materialized view.



1501
1502
1503
# File 'lib/google/cloud/bigquery/table.rb', line 1501

def refresh_interval_ms
  @gapi.materialized_view&.refresh_interval_ms
end

#refresh_interval_ms=(new_refresh_interval_ms) ⇒ Object

Sets the maximum frequency at which the materialized view will be refreshed. See #materialized_view?.
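
Examples:

A minimal sketch (assumes "my_materialized_view" is a materialized view; 3_600_000 ms, i.e. one hour, is an illustrative value):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_materialized_view"

view.refresh_interval_ms = 3_600_000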

Parameters:

  • new_refresh_interval_ms (Integer)

    The maximum frequency in milliseconds.



1513
1514
1515
1516
1517
1518
# File 'lib/google/cloud/bigquery/table.rb', line 1513

def refresh_interval_ms= new_refresh_interval_ms
  @gapi.materialized_view = Google::Apis::BigqueryV2::MaterializedViewDefinition.new(
    refresh_interval_ms: new_refresh_interval_ms
  )
  patch_gapi! :materialized_view
end

#reload! ⇒ Google::Cloud::Bigquery::Table Also known as: refresh!

Reloads the table with current data from the BigQuery service.

Examples:

Skip retrieving the table from the service, then load it:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true

table.reload!

Returns:

  • (Google::Cloud::Bigquery::Table)

    The reloaded table.

2851
2852
2853
2854
2855
2856
2857
# File 'lib/google/cloud/bigquery/table.rb', line 2851

def reload!
  ensure_service!
  @gapi = service.get_table dataset_id, table_id, metadata_view: metadata_view
  @reference = nil
  @exists = nil
  self
end

#require_partition_filter ⇒ Boolean?

Whether queries over this table must specify a partition filter that can be used for partition elimination. See Partitioned Tables.

Returns:

  • (Boolean, nil)

    true when a partition filter will be required, false otherwise, or nil if the object is a reference (see #reference?).



479
480
481
482
483
# File 'lib/google/cloud/bigquery/table.rb', line 479

def require_partition_filter
  return nil if reference?
  ensure_full_data!
  @gapi.require_partition_filter
end

#require_partition_filter=(new_require) ⇒ Object

Sets whether queries over this table require a partition filter. See Partitioned Tables.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.require_partition_filter = true
end

Parameters:

  • new_require (Boolean)

    Whether queries over this table require a partition filter.



508
509
510
511
512
# File 'lib/google/cloud/bigquery/table.rb', line 508

def require_partition_filter= new_require
  reload! unless resource_full?
  @gapi.require_partition_filter = new_require
  patch_gapi! :require_partition_filter
end

#resource? ⇒ Boolean

Whether the table was created with a resource representation from the BigQuery service.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table", skip_lookup: true

table.resource? # false
table.reload!
table.resource? # true

Returns:

  • (Boolean)

    true when the table was created with a resource representation, false otherwise.



2932
2933
2934
# File 'lib/google/cloud/bigquery/table.rb', line 2932

def resource?
  !@gapi.nil?
end

#resource_full? ⇒ Boolean

Whether the table was created with a full resource representation from the BigQuery service.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.resource_full? # true

Returns:

  • (Boolean)

    true when the table was created with a full resource representation, false otherwise.



2981
2982
2983
# File 'lib/google/cloud/bigquery/table.rb', line 2981

def resource_full?
  @gapi.is_a? Google::Apis::BigqueryV2::Table
end

#resource_partial? ⇒ Boolean

Whether the table was created with a partial resource representation from the BigQuery service by retrieval through Dataset#tables. See Tables: list response for the contents of the partial representation. Accessing any attribute outside of the partial representation will result in loading the full representation.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
table = dataset.tables.first

table.resource_partial? # true
table.description # Loads the full resource.
table.resource_partial? # false

Returns:

  • (Boolean)

    true when the table was created with a partial resource representation, false otherwise.



2960
2961
2962
# File 'lib/google/cloud/bigquery/table.rb', line 2960

def resource_partial?
  @gapi.is_a? Google::Apis::BigqueryV2::TableList::Table
end

#restore(destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean

Restores the data from the table to another table using a synchronous method that blocks for a response. The source table type is SNAPSHOT and the destination table type is TABLE. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.

The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

table.restore destination_table

Passing a string identifier for the destination table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.restore "other-project:other_dataset.other_table"

Parameters:

  • destination_table (Table, String)

    The destination for the copied data. This can also be a string identifier as specified by the Standard SQL Query Reference (project-name.dataset_id.table_id) or the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets.

  • create (String) (defaults to: nil)

    Specifies whether the job is allowed to create new tables. The default value is needed.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String) (defaults to: nil)

    Specifies how to handle data already present in the destination table. The default value is empty.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - An error will be returned if the destination table already contains data.

Yields:

  • (job)

    a job configuration object

Yield Parameters:

  • job (Google::Cloud::Bigquery::CopyJob::Updater)

    the job configuration object to be updated

Returns:

  • (Boolean)

    Returns true if the copy operation succeeded.



2050
2051
2052
2053
2054
2055
2056
# File 'lib/google/cloud/bigquery/table.rb', line 2050

def restore destination_table, create: nil, write: nil, &block
  copy_job_with_operation_type destination_table,
                               create: create,
                               write: write,
                               operation_type: OperationType::RESTORE,
                               &block
end

#rows_count ⇒ Integer?

The number of rows in the table.
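
Examples:

A minimal sketch (assumes "my_dataset" and "my_table" exist; the returned value is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.rows_count #=> 100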

Returns:

  • (Integer, nil)

    The count of rows in the table, or nil if the object is a reference (see #reference?).



781
782
783
784
785
786
787
788
789
# File 'lib/google/cloud/bigquery/table.rb', line 781

def rows_count
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.num_rows
  rescue StandardError
    nil
  end
end

#schema(replace: false) {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema?

Returns the table's schema. If the table is not a view (see #view?), this method can also be used to set, replace, or add to the schema by passing a block. See Schema for available methods.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table"

table.schema do |schema|
  schema.string "first_name", mode: :required
  schema.record "cities_lived", mode: :repeated do |nested_schema|
    nested_schema.string "place", mode: :required
    nested_schema.integer "number_of_years", mode: :required
  end
end

Load the schema from a file:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table"
table.schema do |schema|
  schema.load File.open("schema.json")
end

Parameters:

  • replace (Boolean) (defaults to: false)

    Whether to replace the existing schema with the new schema. If true, the fields will replace the existing schema. If false, the fields will be added to the existing schema. When a table already contains data, schema changes must be additive. Thus, the default value is false. When loading from a file this will always replace the schema, no matter what replace is set to. You can update the schema (for example, for a table that already contains data) by providing a schema file that includes the existing schema plus any new fields.

Yields:

  • (schema)

    a block for setting the schema

Yield Parameters:

  • schema (Schema)

    the object accepting the schema

Returns:

  • (Google::Cloud::Bigquery::Schema, nil)

    A frozen schema object.

1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
# File 'lib/google/cloud/bigquery/table.rb', line 1070

def schema replace: false
  return nil if reference? && !block_given?
  reload! unless resource_full?
  schema_builder = Schema.from_gapi @gapi.schema
  if block_given?
    schema_builder = Schema.from_gapi if replace
    yield schema_builder
    if schema_builder.changed?
      @gapi.schema = schema_builder.to_gapi
      patch_gapi! :schema
    end
  end
  schema_builder.freeze
end

#set_query(query, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Object

Updates the query that defines the view. (See #view?.) Not supported for materialized views.

Allows setting of standard vs. legacy SQL and user-defined function resources.

Examples:

Update a view:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
view = dataset.table "my_view"

view.set_query "SELECT first_name FROM " \
               "`my_project.my_dataset.my_table`",
               standard_sql: true

Parameters:

  • query (String)

    The query that defines the view.

  • standard_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's standard SQL dialect. Optional. The default value is true.

  • legacy_sql (Boolean) (defaults to: nil)

    Specifies whether to use BigQuery's legacy SQL dialect. Optional. The default value is false.

  • udfs (Array<String>, String) (defaults to: nil)

    User-defined function resources used in a legacy SQL query. Optional.

    May be either a code resource to load from a Google Cloud Storage URI (gs://bucket/path), or an inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

    This parameter is used for defining User Defined Function (UDF) resources only when using legacy SQL. Users of standard SQL should leverage either DDL (e.g. CREATE [TEMPORARY] FUNCTION ...) or the Routines API to define UDF resources.

    For additional information on migrating, see: Migrating to standard SQL - Differences in user-defined JavaScript functions

See Also:



1389
1390
1391
1392
1393
1394
1395
1396
1397
1398
# File 'lib/google/cloud/bigquery/table.rb', line 1389

def set_query query, standard_sql: nil, legacy_sql: nil, udfs: nil
  raise "Updating the query is not supported for Table type: #{@gapi.type}" unless view?
  use_legacy_sql = Convert.resolve_legacy_sql standard_sql, legacy_sql
  @gapi.view = Google::Apis::BigqueryV2::ViewDefinition.new(
    query:                           query,
    use_legacy_sql:                  use_legacy_sql,
    user_defined_function_resources: udfs_gapi(udfs)
  )
  patch_gapi! :view
end

#snapshot(destination_table) {|job| ... } ⇒ Boolean

Takes a snapshot of the data from the table to another table using a synchronous method that blocks for a response. The source table type is TABLE and the destination table type is SNAPSHOT. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.

The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
destination_table = dataset.table "my_destination_table"

table.snapshot destination_table

Passing a string identifier for the destination table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.snapshot "other-project:other_dataset.other_table"

Parameters:

  • destination_table (Table, String)

    The destination for the copied data. This can also be a string identifier as specified by the Standard SQL Query Reference (project-name.dataset_id.table_id) or the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets.

Yields:

  • (job)

    a job configuration object

Yield Parameters:

  • job (Google::Cloud::Bigquery::CopyJob::Updater)

    the job configuration object to be updated

Returns:

  • (Boolean)

    Returns true if the copy operation succeeded.



1979
1980
1981
1982
1983
# File 'lib/google/cloud/bigquery/table.rb', line 1979

def snapshot destination_table, &block
  copy_job_with_operation_type destination_table,
                               operation_type: OperationType::SNAPSHOT,
                               &block
end

#snapshot? ⇒ Boolean?

Checks if the table's type is SNAPSHOT, indicating that the table represents a BigQuery table snapshot.

Returns:

  • (Boolean, nil)

    true when the type is SNAPSHOT, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).

See Also:



878
879
880
881
# File 'lib/google/cloud/bigquery/table.rb', line 878

def snapshot?
  return nil if reference?
  @gapi.type == "SNAPSHOT"
end

#snapshot_definition ⇒ Google::Apis::BigqueryV2::SnapshotDefinition?

Information about the base table and the snapshot time of the table.

Returns:

  • (Google::Apis::BigqueryV2::SnapshotDefinition, nil)

    Snapshot definition of table snapshot, or nil if not snapshot or the object is a reference (see #reference?).



182
183
184
185
# File 'lib/google/cloud/bigquery/table.rb', line 182

def snapshot_definition
  return nil if reference?
  @gapi.snapshot_definition
end

#table? ⇒ Boolean?

Checks if the table's type is TABLE.

Returns:

  • (Boolean, nil)

    true when the type is TABLE, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



844
845
846
847
# File 'lib/google/cloud/bigquery/table.rb', line 844

def table?
  return nil if reference?
  @gapi.type == "TABLE"
end

#table_id ⇒ String

A unique ID for this table.

Returns:

  • (String)

    The ID must contain only letters ([A-Za-z]), numbers ([0-9]), or underscores (_). The maximum length is 1,024 characters.



131
132
133
134
# File 'lib/google/cloud/bigquery/table.rb', line 131

def table_id
  return reference.table_id if reference?
  @gapi.table_reference.table_id
end

#test_iam_permissions(*permissions) ⇒ Array<String>

Tests the specified permissions against the Cloud IAM access control policy.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

permissions = table.test_iam_permissions "bigquery.tables.get",
                                         "bigquery.tables.delete"
permissions.include? "bigquery.tables.get"    #=> true
permissions.include? "bigquery.tables.delete" #=> false

Parameters:

  • permissions (String, Array<String>)

    The set of permissions to check access for.

Returns:

  • (Array<String>)

    The frozen array of permissions held by the caller.

See Also:



1614
1615
1616
1617
1618
1619
# File 'lib/google/cloud/bigquery/table.rb', line 1614

def test_iam_permissions *permissions
  permissions = Array(permissions).flatten
  ensure_service!
  gapi = service.test_table_permissions dataset_id, table_id, permissions
  gapi.permissions.freeze
end

#time_partitioning? ⇒ Boolean?

Checks if the table is time partitioned. See Partitioned Tables.
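
Examples:

A minimal sketch (assumes "my_table" is time partitioned by day on a "dob" field, as in the setter examples below; values are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.time_partitioning?      #=> true
table.time_partitioning_type  #=> "DAY"
table.time_partitioning_field #=> "dob"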

Returns:

  • (Boolean, nil)

    true when the table is time partitioned, or false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).



297
298
299
300
# File 'lib/google/cloud/bigquery/table.rb', line 297

def time_partitioning?
  return nil if reference?
  !@gapi.time_partitioning.nil?
end

#time_partitioning_expiration ⇒ Integer?

The expiration for the time partitions, if any, in seconds. See Partitioned Tables.

Returns:

  • (Integer, nil)

    The expiration time, in seconds, for data in time partitions, or nil if not present or the object is a reference (see #reference?).



422
423
424
425
426
427
428
# File 'lib/google/cloud/bigquery/table.rb', line 422

def time_partitioning_expiration
  return nil if reference?
  ensure_full_data!
  return nil unless time_partitioning?
  return nil if @gapi.time_partitioning.expiration_ms.nil?
  @gapi.time_partitioning.expiration_ms / 1_000
end

#time_partitioning_expiration=(expiration) ⇒ Object

Sets the time partition expiration for the table. See Partitioned Tables. The table must also be time partitioned.

See #time_partitioning_type=.

If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.schema do |schema|
    schema.timestamp "dob", mode: :required
  end
  t.time_partitioning_type = "DAY"
  t.time_partitioning_field = "dob"
  t.time_partitioning_expiration = 86_400
end

Parameters:

  • expiration (Integer, nil)

    An expiration time, in seconds, for data in time partitions, or nil to indicate no expiration time.



460
461
462
463
464
465
466
# File 'lib/google/cloud/bigquery/table.rb', line 460

def time_partitioning_expiration= expiration
  reload! unless resource_full?
  expiration_ms = expiration * 1000 if expiration
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.expiration_ms = expiration_ms
  patch_gapi! :time_partitioning
end

#time_partitioning_field ⇒ String?

The field on which the table is time partitioned, if any. If not set, the table is time partitioned by the pseudo column _PARTITIONTIME; if set, the table is time partitioned by this field. See Partitioned Tables.

Returns:

  • (String, nil)

    The time partition field, if a field was configured; nil if not time partitioned, not set (time partitioned by the pseudo column _PARTITIONTIME), or the object is a reference (see #reference?).



367
368
369
370
371
# File 'lib/google/cloud/bigquery/table.rb', line 367

def time_partitioning_field
  return nil if reference?
  ensure_full_data!
  @gapi.time_partitioning.field if time_partitioning?
end

#time_partitioning_field=(field) ⇒ Object

Sets the field on which to time partition the table. If not set, the table is time partitioned by the pseudo column _PARTITIONTIME; if set, the table is time partitioned by this field. See Partitioned Tables. The table must also be time partitioned.

See #time_partitioning_type=.

You can only set the time partitioning field while creating a table as in the example below. BigQuery does not allow you to change time partitioning on an existing table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.schema do |schema|
    schema.timestamp "dob", mode: :required
  end
  t.time_partitioning_type  = "DAY"
  t.time_partitioning_field = "dob"
end

Parameters:

  • field (String)

    The time partition field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.



405
406
407
408
409
410
# File 'lib/google/cloud/bigquery/table.rb', line 405

def time_partitioning_field= field
  reload! unless resource_full?
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.field = field
  patch_gapi! :time_partitioning
end

#time_partitioning_type ⇒ String?

The period for which the table is time partitioned, if any. See Partitioned Tables.

Returns:

  • (String, nil)

    The time partition type. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively; or nil if not set or the object is a reference (see #reference?).



313
314
315
316
317
# File 'lib/google/cloud/bigquery/table.rb', line 313

def time_partitioning_type
  return nil if reference?
  ensure_full_data!
  @gapi.time_partitioning.type if time_partitioning?
end

#time_partitioning_type=(type) ⇒ Object

Sets the time partitioning type for the table. See Partitioned Tables. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.

You can only set time partitioning when creating a table as in the example below. BigQuery does not allow you to change time partitioning on an existing table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |t|
  t.schema do |schema|
    schema.timestamp "dob", mode: :required
  end
  t.time_partitioning_type  = "DAY"
  t.time_partitioning_field = "dob"
end

Parameters:

  • type (String)

    The time partition type. The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.



348
349
350
351
352
353
# File 'lib/google/cloud/bigquery/table.rb', line 348

def time_partitioning_type= type
  reload! unless resource_full?
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.type = type
  patch_gapi! :time_partitioning
end

#type ⇒ String?

The type of the table, such as TABLE, VIEW, or SNAPSHOT.
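
Examples:

A minimal sketch (assumes "my_dataset" and "my_table" exist; the returned value is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.type #=> "TABLE"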

Returns:

  • (String, nil)

    Type of the table, or nil if the object is a reference (see #reference?).



169
170
171
172
# File 'lib/google/cloud/bigquery/table.rb', line 169

def type
  return nil if reference?
  @gapi.type
end

#update_policy {|policy| ... } ⇒ Policy

Updates the Cloud IAM access control policy for the table. The latest policy will be read from the service. See also #policy.

Examples:

Update the policy by passing a block.

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

table.update_policy do |p|
  p.grant role: "roles/viewer", members: "user:viewer@example.com"
  p.revoke role: "roles/editor", members: "user:editor@example.com"
  p.revoke role: "roles/owner"
end # 2 API calls

Yields:

  • (policy)

    A block for updating the policy. The latest policy will be read from the service and passed to the block. After the block completes, the modified policy will be written to the service.

Yield Parameters:

  • policy (Policy)

    The mutable Policy for the table.

Returns:

  • (Policy)

    The updated and frozen policy for the table.

Raises:

  • (ArgumentError)

See Also:



1578
1579
1580
1581
1582
1583
1584
1585
1586
1587
# File 'lib/google/cloud/bigquery/table.rb', line 1578

def update_policy
  raise ArgumentError, "A block updating the policy must be provided" unless block_given?
  ensure_service!
  gapi = service.get_table_policy dataset_id, table_id
  policy = Policy.from_gapi gapi
  yield policy
  # TODO: Check for changes before calling RPC
  gapi = service.set_table_policy dataset_id, table_id, policy.to_gapi
  Policy.from_gapi(gapi).freeze
end

#view? ⇒ Boolean?

Checks if the table's type is VIEW, indicating that the table represents a BigQuery logical view. See Dataset#create_view.

Returns:

  • (Boolean, nil)

    true when the type is VIEW, false otherwise, if the object is a resource (see #resource?); nil if the object is a reference (see #reference?).

See Also:



861
862
863
864
# File 'lib/google/cloud/bigquery/table.rb', line 861

def view?
  return nil if reference?
  @gapi.type == "VIEW"
end