Class: Google::Cloud::Bigquery::Table
Inherits: Object
Defined in:
lib/google/cloud/bigquery/table.rb,
lib/google/cloud/bigquery/table/list.rb,
lib/google/cloud/bigquery/table/async_inserter.rb
Overview
Table
A named resource representing a BigQuery table that holds zero or more records. Every table is defined by a schema that may contain nested and repeated fields.
The Table class can also represent a logical view, which is a virtual table defined by a SQL query (see #view? and Dataset#create_view); or a materialized view, which is a precomputed view that periodically caches results of a query for increased performance and efficiency (see #materialized_view? and Dataset#create_materialized_view).
Direct Known Subclasses: Updater
Defined Under Namespace
Classes: AsyncInserter, List, Updater
Attributes
-
#api_url ⇒ String?
A URL that can be used to access the table using the REST API.
-
#buffer_bytes ⇒ Integer?
A lower-bound estimate of the number of bytes currently in this table's streaming buffer, if one is present.
-
#buffer_oldest_at ⇒ Time?
The time of the oldest entry currently in this table's streaming buffer, if one is present.
-
#buffer_rows ⇒ Integer?
A lower-bound estimate of the number of rows currently in this table's streaming buffer, if one is present.
-
#clone? ⇒ Boolean?
Checks if the table's type is CLONE, indicating that the table represents a BigQuery table clone.
-
#clone_definition ⇒ Google::Apis::BigqueryV2::CloneDefinition?
Information about the base table and the clone time of the table.
-
#clustering? ⇒ Boolean?
Checks if the table is clustered.
-
#clustering_fields ⇒ Array<String>?
One or more fields on which data should be clustered.
-
#clustering_fields=(fields) ⇒ Object
Updates the list of fields on which data should be clustered.
-
#created_at ⇒ Time?
The time when this table was created.
-
#dataset_id ⇒ String
The ID of the Dataset containing this table.
-
#description ⇒ String?
A user-friendly description of the table.
-
#description=(new_description) ⇒ Object
Updates the user-friendly description of the table.
-
#enable_refresh=(new_enable_refresh) ⇒ Object
Sets whether automatic refresh of the materialized view is enabled.
-
#enable_refresh? ⇒ Boolean?
Whether automatic refresh of the materialized view is enabled.
-
#encryption ⇒ EncryptionConfiguration?
The EncryptionConfiguration object that represents the custom encryption method used to protect the table.
-
#encryption=(value) ⇒ Object
Set the EncryptionConfiguration object that represents the custom encryption method used to protect the table.
-
#etag ⇒ String?
The ETag hash of the table.
-
#expires_at ⇒ Time?
The time when this table expires.
-
#external ⇒ External::DataSource?
The External::DataSource (or subclass) object that represents the external data source that the table represents.
-
#external=(external) ⇒ Object
Set the External::DataSource (or subclass) object that represents the external data source that the table represents.
-
#external? ⇒ Boolean?
Checks if the table's type is EXTERNAL, indicating that the table represents an External Data Source.
-
#fields ⇒ Array<Schema::Field>?
The fields of the table, obtained from its schema.
-
#headers ⇒ Array<Symbol>?
The names of the columns in the table, obtained from its schema.
-
#id ⇒ String?
The combined Project ID, Dataset ID, and Table ID for this table, in the format specified by the Legacy SQL Query Reference (project-name:dataset_id.table_id).
-
#labels ⇒ Hash<String, String>?
A hash of user-provided labels associated with this table.
-
#labels=(labels) ⇒ Object
Updates the hash of user-provided labels associated with this table.
-
#last_refresh_time ⇒ Time?
The time when the materialized view was last modified.
-
#location ⇒ String?
The geographic location where the table should reside.
-
#materialized_view? ⇒ Boolean?
Checks if the table's type is MATERIALIZED_VIEW, indicating that the table represents a BigQuery materialized view.
-
#modified_at ⇒ Time?
The date when this table was last modified.
-
#name ⇒ String?
The name of the table.
-
#name=(new_name) ⇒ Object
Updates the name of the table.
-
#param_types ⇒ Hash
The types of the fields in the table, obtained from its schema.
-
#policy ⇒ Policy
Gets the Cloud IAM access control policy for the table.
-
#project_id ⇒ String
The ID of the Project containing this table.
-
#query ⇒ String?
The query that defines the view or materialized view.
-
#query_id(standard_sql: nil, legacy_sql: nil) ⇒ String
The value returned by #id, wrapped in backticks (Standard SQL) or square brackets (Legacy SQL) to accommodate project IDs containing dashes.
-
#query_legacy_sql? ⇒ Boolean
Checks if the view's query is using legacy SQL.
-
#query_standard_sql? ⇒ Boolean
Checks if the view's query is using standard SQL.
-
#query_udfs ⇒ Array<String>?
The user-defined function resources used in the view's query.
-
#range_partitioning? ⇒ Boolean?
Checks if the table is range partitioned.
-
#range_partitioning_end ⇒ Integer?
The end of range partitioning, exclusive.
-
#range_partitioning_field ⇒ String?
The field on which the table is range partitioned, if any.
-
#range_partitioning_interval ⇒ Integer?
The width of each interval.
-
#range_partitioning_start ⇒ Integer?
The start of range partitioning, inclusive.
-
#refresh_interval_ms ⇒ Integer?
The maximum frequency in milliseconds at which the materialized view will be refreshed.
-
#refresh_interval_ms=(new_refresh_interval_ms) ⇒ Object
Sets the maximum frequency at which the materialized view will be refreshed.
-
#require_partition_filter ⇒ Boolean?
Whether queries over this table must specify a partition filter that can be used for partition elimination.
-
#require_partition_filter=(new_require) ⇒ Object
Sets whether queries over this table require a partition filter.
-
#schema(replace: false) {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema?
Returns the table's schema.
-
#snapshot? ⇒ Boolean?
Checks if the table's type is SNAPSHOT, indicating that the table represents a BigQuery table snapshot.
-
#snapshot_definition ⇒ Google::Apis::BigqueryV2::SnapshotDefinition?
Information about the base table and the snapshot time of the table.
-
#table? ⇒ Boolean?
Checks if the table's type is TABLE.
-
#table_id ⇒ String
A unique ID for this table.
-
#test_iam_permissions(*permissions) ⇒ Array<String>
Tests the specified permissions against the Cloud IAM access control policy.
-
#time_partitioning? ⇒ Boolean?
Checks if the table is time partitioned.
-
#time_partitioning_expiration ⇒ Integer?
The expiration for the time partitions, if any, in seconds.
-
#time_partitioning_expiration=(expiration) ⇒ Object
Sets the time partition expiration for the table.
-
#time_partitioning_field ⇒ String?
The field on which the table is time partitioned, if any.
-
#time_partitioning_field=(field) ⇒ Object
Sets the field on which to time partition the table.
-
#time_partitioning_type ⇒ String?
The period for which the table is time partitioned, if any.
-
#time_partitioning_type=(type) ⇒ Object
Sets the time partitioning type for the table.
-
#type ⇒ String?
The type of the table: TABLE, VIEW, SNAPSHOT, etc.
-
#update_policy {|policy| ... } ⇒ Policy
Updates the Cloud IAM access control policy for the table.
-
#view? ⇒ Boolean?
Checks if the table's type is VIEW, indicating that the table represents a BigQuery logical view.
Data
-
#bytes_count ⇒ Integer?
The number of bytes in the table.
-
#clone(destination_table) {|job| ... } ⇒ Boolean
Clones the data from the table to another table using a synchronous method that blocks for a response.
-
#copy(destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean
Copies the data from the table to another table using a synchronous method that blocks for a response.
-
#copy_job(destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil, operation_type: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob
Copies the data from the table to another table using an asynchronous method.
-
#data(token: nil, max: nil, start: nil) ⇒ Google::Cloud::Bigquery::Data
Retrieves data from the table.
-
#extract(extract_url, format: nil, compression: nil, delimiter: nil, header: nil) {|job| ... } ⇒ Boolean
Extracts the data from the table to a Google Cloud Storage file using a synchronous method that blocks for a response.
-
#extract_job(extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob
Extracts the data from the table to a Google Cloud Storage file using an asynchronous method.
-
#insert(rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil) ⇒ Google::Cloud::Bigquery::InsertResponse
Inserts data into the table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
-
#insert_async(skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter
Create an asynchronous inserter object used to insert rows in batches.
-
#load(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, autodetect: nil, null_marker: nil, session_id: nil, schema: self.schema) {|updater| ... } ⇒ Boolean
Loads data into the table.
-
#load_job(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil, session_id: nil, schema: self.schema) {|load_job| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the table.
-
#restore(destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean
Restores the data from the table to another table using a synchronous method that blocks for a response.
-
#rows_count ⇒ Integer?
The number of rows in the table.
-
#snapshot(destination_table) {|job| ... } ⇒ Boolean
Takes a snapshot of the data from the table to another table using a synchronous method that blocks for a response.
Lifecycle
-
#delete ⇒ Boolean
Permanently deletes the table.
-
#exists?(force: false) ⇒ Boolean
Determines whether the table exists in the BigQuery service.
-
#query=(new_query) ⇒ Object
Updates the query that defines the view.
-
#reference? ⇒ Boolean
Whether the table was created without retrieving the resource representation from the BigQuery service.
-
#reload! ⇒ Google::Cloud::Bigquery::Table
(also: #refresh!)
Reloads the table with current data from the BigQuery service.
-
#resource? ⇒ Boolean
Whether the table was created with a resource representation from the BigQuery service.
-
#resource_full? ⇒ Boolean
Whether the table was created with a full resource representation from the BigQuery service.
-
#resource_partial? ⇒ Boolean
Whether the table was created with a partial resource representation from the BigQuery service by retrieval through Dataset#tables.
-
#set_query(query, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Object
Updates the query that defines the view.
Instance Method Details
#api_url ⇒ String?
A URL that can be used to access the table using the REST API.
# File 'lib/google/cloud/bigquery/table.rb', line 718

def api_url
  return nil if reference?
  ensure_full_data!
  @gapi.self_link
end
#buffer_bytes ⇒ Integer?
A lower-bound estimate of the number of bytes currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
# File 'lib/google/cloud/bigquery/table.rb', line 1259

def buffer_bytes
  return nil if reference?
  ensure_full_data!
  @gapi.streaming_buffer&.estimated_bytes
end
#buffer_oldest_at ⇒ Time?
The time of the oldest entry currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
# File 'lib/google/cloud/bigquery/table.rb', line 1293

def buffer_oldest_at
  return nil if reference?
  ensure_full_data!
  return nil unless @gapi.streaming_buffer
  oldest_entry_time = @gapi.streaming_buffer.oldest_entry_time
  Convert.millis_to_time oldest_entry_time
end
#buffer_rows ⇒ Integer?
A lower-bound estimate of the number of rows currently in this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
# File 'lib/google/cloud/bigquery/table.rb', line 1277

def buffer_rows
  return nil if reference?
  ensure_full_data!
  @gapi.streaming_buffer&.estimated_rows
end
#bytes_count ⇒ Integer?
The number of bytes in the table.
# File 'lib/google/cloud/bigquery/table.rb', line 763

def bytes_count
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.num_bytes
  rescue StandardError
    nil
  end
end
#clone(destination_table) {|job| ... } ⇒ Boolean
Clones the data from the table to another table using a synchronous method that blocks for a response. The source and destination table have the same table type, but only bill for unique data. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 1924

def clone destination_table, &block
  copy_job_with_operation_type destination_table,
                               operation_type: OperationType::CLONE,
                               &block
end
#clone? ⇒ Boolean?
Checks if the table's type is CLONE, indicating that the table represents a BigQuery table clone.

# File 'lib/google/cloud/bigquery/table.rb', line 895

def clone?
  return nil if reference?
  !@gapi.clone_definition.nil?
end
#clone_definition ⇒ Google::Apis::BigqueryV2::CloneDefinition?
Information about the base table and the clone time of the table.

# File 'lib/google/cloud/bigquery/table.rb', line 195

def clone_definition
  return nil if reference?
  @gapi.clone_definition
end
#clustering? ⇒ Boolean?
Checks if the table is clustered.
See Google::Cloud::Bigquery::Table::Updater#clustering_fields=, #clustering_fields and #clustering_fields=.
# File 'lib/google/cloud/bigquery/table.rb', line 531

def clustering?
  return nil if reference?
  !@gapi.clustering.nil?
end
#clustering_fields ⇒ Array<String>?
One or more fields on which data should be clustered. When specified together with time partitioning, data in the table is first partitioned and subsequently clustered. The order of the returned fields determines the sort order of the data.
BigQuery supports clustering for both partitioned and non-partitioned tables.
See Google::Cloud::Bigquery::Table::Updater#clustering_fields=, #clustering_fields= and #clustering?.
# File 'lib/google/cloud/bigquery/table.rb', line 559

def clustering_fields
  return nil if reference?
  ensure_full_data!
  @gapi.clustering.fields if clustering?
end
#clustering_fields=(fields) ⇒ Object
Updates the list of fields on which data should be clustered.
Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
BigQuery supports clustering for both partitioned and non-partitioned tables.
See Google::Cloud::Bigquery::Table::Updater#clustering_fields=, #clustering_fields and #clustering?.
# File 'lib/google/cloud/bigquery/table.rb', line 601

def clustering_fields= fields
  reload! unless resource_full?
  if fields
    @gapi.clustering ||= Google::Apis::BigqueryV2::Clustering.new
    @gapi.clustering.fields = fields
  else
    @gapi.clustering = nil
  end
  patch_gapi! :clustering
end
#copy(destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean
Copies the data from the table to another table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 1866

def copy destination_table, create: nil, write: nil, &block
  copy_job_with_operation_type destination_table,
                               create: create,
                               write: write,
                               operation_type: OperationType::COPY,
                               &block
end
#copy_job(destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil, operation_type: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob
Copies the data from the table to another table using an asynchronous method. In this method, a CopyJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #copy.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 1777

def copy_job destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil,
             operation_type: nil
  ensure_service!
  options = { create: create, write: write, dryrun: dryrun, labels: labels, job_id: job_id, prefix: prefix,
              operation_type: operation_type }
  updater = CopyJob::Updater.from_options(
    service,
    table_ref,
    Service.get_table_ref(destination_table, default_ref: table_ref),
    options
  )
  updater.location = location if location # may be table reference

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.copy_table job_gapi
  Job.from_gapi gapi, service
end
#created_at ⇒ Time?
The time when this table was created.
# File 'lib/google/cloud/bigquery/table.rb', line 799

def created_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.creation_time
end
#data(token: nil, max: nil, start: nil) ⇒ Google::Cloud::Bigquery::Data
Retrieves data from the table.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the data retrieval.
# File 'lib/google/cloud/bigquery/table.rb', line 1667

def data token: nil, max: nil, start: nil
  ensure_service!
  reload! unless resource_full?
  data_json = service.list_tabledata dataset_id, table_id, token: token, max: max, start: start
  Data.from_gapi_json data_json, gapi, nil, service
end
#dataset_id ⇒ String
The ID of the Dataset containing this table.

# File 'lib/google/cloud/bigquery/table.rb', line 144

def dataset_id
  return reference.dataset_id if reference?
  @gapi.table_reference.dataset_id
end
#delete ⇒ Boolean
Permanently deletes the table.
# File 'lib/google/cloud/bigquery/table.rb', line 2825

def delete
  ensure_service!
  service.delete_table dataset_id, table_id
  # Set flag for #exists?
  @exists = false
  true
end
#description ⇒ String?
A user-friendly description of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 732

def description
  return nil if reference?
  ensure_full_data!
  @gapi.description
end
#description=(new_description) ⇒ Object
Updates the user-friendly description of the table.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/table.rb', line 749

def description= new_description
  reload! unless resource_full?
  @gapi.update! description: new_description
  patch_gapi! :description
end
#enable_refresh=(new_enable_refresh) ⇒ Object
Sets whether automatic refresh of the materialized view is enabled. When true, the materialized view is updated when the base table is updated. See #materialized_view?.
# File 'lib/google/cloud/bigquery/table.rb', line 1473

def enable_refresh= new_enable_refresh
  @gapi.materialized_view = Google::Apis::BigqueryV2::MaterializedViewDefinition.new(
    enable_refresh: new_enable_refresh
  )
  patch_gapi! :materialized_view
end
#enable_refresh? ⇒ Boolean?
Whether automatic refresh of the materialized view is enabled. When true, the materialized view is updated when the base table is updated. The default value is true. See #materialized_view?.
# File 'lib/google/cloud/bigquery/table.rb', line 1458

def enable_refresh?
  return nil unless @gapi.materialized_view
  val = @gapi.materialized_view.enable_refresh
  return true if val.nil?
  val
end
#encryption ⇒ EncryptionConfiguration?
The EncryptionConfiguration object that represents the custom encryption method used to protect the table. If not set, Dataset#default_encryption is used.
Present only if the table is using custom encryption.
# File 'lib/google/cloud/bigquery/table.rb', line 1165

def encryption
  return nil if reference?
  ensure_full_data!
  return nil if @gapi.encryption_configuration.nil?
  EncryptionConfiguration.from_gapi(@gapi.encryption_configuration).freeze
end
#encryption=(value) ⇒ Object
Set the EncryptionConfiguration object that represents the custom encryption method used to protect the table. If not set, Dataset#default_encryption is used.
Present only if the table is using custom encryption.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/table.rb', line 1190

def encryption= value
  reload! unless resource_full?
  @gapi.encryption_configuration = value.to_gapi
  patch_gapi! :encryption_configuration
end
#etag ⇒ String?
The ETag hash of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 704

def etag
  return nil if reference?
  ensure_full_data!
  @gapi.etag
end
#exists?(force: false) ⇒ Boolean
Determines whether the table exists in the BigQuery service. The result is cached locally. To refresh state, set force to true.

# File 'lib/google/cloud/bigquery/table.rb', line 2881

def exists? force: false
  return gapi_exists? if force
  # If we have a value, return it
  return @exists unless @exists.nil?
  # Always true if we have a gapi object
  return true if resource?
  gapi_exists?
end
#expires_at ⇒ Time?
The time when this table expires. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
# File 'lib/google/cloud/bigquery/table.rb', line 815

def expires_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.expiration_time
end
#external ⇒ External::DataSource?
The External::DataSource (or subclass) object that represents the external data source that the table represents. Data can be queried from the table, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
Present only if the table represents an External Data Source. See #external? and External::DataSource.
# File 'lib/google/cloud/bigquery/table.rb', line 1213

def external
  return nil if reference?
  ensure_full_data!
  return nil if @gapi.external_data_configuration.nil?
  External.from_gapi(@gapi.external_data_configuration).freeze
end
#external=(external) ⇒ Object
Set the External::DataSource (or subclass) object that represents the external data source that the table represents. Data can be queried from the table, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
Use only if the table represents an External Data Source. See #external? and External::DataSource.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/table.rb', line 1241

def external= external
  reload! unless resource_full?
  @gapi.external_data_configuration = external.to_gapi
  patch_gapi! :external_data_configuration
end
#external? ⇒ Boolean?
Checks if the table's type is EXTERNAL, indicating that the table represents an External Data Source. See #external? and External::DataSource.

# File 'lib/google/cloud/bigquery/table.rb', line 929

def external?
  return nil if reference?
  @gapi.type == "EXTERNAL"
end
#extract(extract_url, format: nil, compression: nil, delimiter: nil, header: nil) {|job| ... } ⇒ Boolean
Extracts the data from the table to a Google Cloud Storage file using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #extract_job.
The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 2224

def extract extract_url, format: nil, compression: nil, delimiter: nil, header: nil, &block
  job = extract_job extract_url,
                    format: format,
                    compression: compression,
                    delimiter: delimiter,
                    header: header,
                    &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
#extract_job(extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil, dryrun: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob
Extracts the data from the table to a Google Cloud Storage file using an asynchronous method. In this method, an ExtractJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #extract.
The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will automatically be set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 2148

def extract_job extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil,
                prefix: nil, labels: nil, dryrun: nil
  ensure_service!
  options = { format: format, compression: compression, delimiter: delimiter, header: header, dryrun: dryrun,
              job_id: job_id, prefix: prefix, labels: labels }
  updater = ExtractJob::Updater.from_options service, table_ref, extract_url, options
  updater.location = location if location # may be table reference

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.extract_table job_gapi
  Job.from_gapi gapi, service
end
#fields ⇒ Array<Schema::Field>?
The fields of the table, obtained from its schema.
# File 'lib/google/cloud/bigquery/table.rb', line 1103

def fields
  return nil if reference?
  schema.fields
end
#headers ⇒ Array<Symbol>?
The names of the columns in the table, obtained from its schema.
# File 'lib/google/cloud/bigquery/table.rb', line 1126

def headers
  return nil if reference?
  schema.headers
end
#id ⇒ String?
The combined Project ID, Dataset ID, and Table ID for this table, in the format specified by the Legacy SQL Query Reference (project-name:dataset_id.table_id). This is useful for referencing tables in other projects and datasets. To use this value in queries see #query_id.

# File 'lib/google/cloud/bigquery/table.rb', line 625

def id
  return nil if reference?
  @gapi.id
end
#insert(rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil) ⇒ Google::Cloud::Bigquery::InsertResponse
Inserts data into the table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
Simple Ruby types are generally accepted per JSON rules, along with the following support for BigQuery's more complex types:
BigQuery   | Ruby                           | Notes
-----------|--------------------------------|---------------------------------------------------
NUMERIC    | BigDecimal                     | BigDecimal values will be rounded to scale 9.
BIGNUMERIC | String                         | Pass as String to avoid rounding to scale 9.
DATETIME   | DateTime                       | DATETIME does not support time zone.
DATE       | Date                           |
GEOGRAPHY  | String                         | Well-known text (WKT) or GeoJSON.
JSON       | String (Stringified JSON)      | String, as JSON does not have a schema to verify.
TIMESTAMP  | Time                           |
TIME       | Google::Cloud::BigQuery::Time  |
BYTES      | File, IO, StringIO, or similar |
ARRAY      | Array                          | Nested arrays, nil values are not supported.
STRUCT     | Hash                           | Hash keys may be strings or symbols.
For GEOGRAPHY data, see Working with BigQuery GIS data.
Because BigQuery's streaming API is designed for high insertion rates, modifications to the underlying table metadata are eventually consistent when interacting with the streaming system. In most cases metadata changes are propagated within minutes, but during this period API responses may reflect the inconsistent state of the table.
The value :skip can be provided to skip the generation of IDs for all rows, or to skip the generation of an ID for a specific row in the array.
# File 'lib/google/cloud/bigquery/table.rb', line 2734

def insert rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil
  rows = [rows] if rows.is_a? Hash
  raise ArgumentError, "No rows provided" if rows.empty?

  insert_ids = Array.new(rows.count) { :skip } if insert_ids == :skip
  insert_ids = Array insert_ids
  if insert_ids.count.positive? && insert_ids.count != rows.count
    raise ArgumentError, "insert_ids must be the same size as rows"
  end

  ensure_service!
  gapi = service.insert_tabledata dataset_id, table_id, rows,
                                  skip_invalid: skip_invalid,
                                  ignore_unknown: ignore_unknown,
                                  insert_ids: insert_ids
  InsertResponse.from_gapi rows, gapi
end
#insert_async(skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter
Create an asynchronous inserter object used to insert rows in batches.
# File 'lib/google/cloud/bigquery/table.rb', line 2801

def insert_async skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10,
                 threads: 4, &block
  ensure_service!

  AsyncInserter.new self, skip_invalid: skip_invalid, ignore_unknown: ignore_unknown, max_bytes: max_bytes,
                          max_rows: max_rows, interval: interval, threads: threads, &block
end
#labels ⇒ Hash<String, String>?
A hash of user-provided labels associated with this table. Labels are used to organize and group tables. See Using Labels.
The returned hash is frozen and changes are not allowed. Use #labels= to replace the entire hash.
# File 'lib/google/cloud/bigquery/table.rb', line 970

def labels
  return nil if reference?
  m = @gapi.labels
  m = m.to_h if m.respond_to? :to_h
  m.dup.freeze
end
#labels=(labels) ⇒ Object
Updates the hash of user-provided labels associated with this table. Labels are used to organize and group tables. See Using Labels.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/table.rb', line 1014

def labels= labels
  reload! unless resource_full?
  @gapi.labels = labels
  patch_gapi! :labels
end
#last_refresh_time ⇒ Time?
The time when the materialized view was last modified. See #materialized_view?.
# File 'lib/google/cloud/bigquery/table.rb', line 1488

def last_refresh_time
  Convert.millis_to_time @gapi.materialized_view&.last_refresh_time
end
#load(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, autodetect: nil, null_marker: nil, session_id: nil, schema: self.schema) {|updater| ... } ⇒ Boolean
Loads data into the table. You can pass a google-cloud storage file path or a google-cloud storage file instance. Or, you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 2621

def load files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil,
         quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil,
         quote: nil, skip_leading: nil, autodetect: nil, null_marker: nil, session_id: nil,
         schema: self.schema, &block
  job = load_job files, format: format, create: create, write: write, projection_fields: projection_fields,
                        jagged_rows: jagged_rows, quoted_newlines: quoted_newlines, encoding: encoding,
                        delimiter: delimiter, ignore_unknown: ignore_unknown,
                        max_bad_records: max_bad_records, quote: quote, skip_leading: skip_leading,
                        autodetect: autodetect, null_marker: null_marker, session_id: session_id,
                        schema: schema, &block

  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
#load_job(files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil, session_id: nil, schema: self.schema) {|load_job| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the table. You can pass a google-cloud storage file path or a google-cloud storage file instance. Or, you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 2433

def load_job files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil,
             quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil,
             quote: nil, skip_leading: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil,
             null_marker: nil, dryrun: nil, create_session: nil, session_id: nil, schema: self.schema
  ensure_service!

  updater = load_job_updater format: format, create: create, write: write,
                             projection_fields: projection_fields, jagged_rows: jagged_rows,
                             quoted_newlines: quoted_newlines, encoding: encoding, delimiter: delimiter,
                             ignore_unknown: ignore_unknown, max_bad_records: max_bad_records, quote: quote,
                             skip_leading: skip_leading, dryrun: dryrun, job_id: job_id, prefix: prefix,
                             schema: schema, labels: labels, autodetect: autodetect,
                             null_marker: null_marker, create_session: create_session, session_id: session_id

  yield updater if block_given?

  job_gapi = updater.to_gapi

  return load_local files, job_gapi if local_file? files
  load_storage files, job_gapi
end
#location ⇒ String?
The geographic location where the table should reside. Possible values include EU and US. The default value is US.

# File 'lib/google/cloud/bigquery/table.rb', line 942

def location
  return nil if reference?
  ensure_full_data!
  @gapi.location
end
#materialized_view? ⇒ Boolean?
Checks if the table's type is MATERIALIZED_VIEW, indicating that the table represents a BigQuery materialized view. See Dataset#create_materialized_view.

# File 'lib/google/cloud/bigquery/table.rb', line 913

def materialized_view?
  return nil if reference?
  @gapi.type == "MATERIALIZED_VIEW"
end
#modified_at ⇒ Time?
The date when this table was last modified.
# File 'lib/google/cloud/bigquery/table.rb', line 829

def modified_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.last_modified_time
end
#name ⇒ String?
The name of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 674

def name
  return nil if reference?
  @gapi.friendly_name
end
#name=(new_name) ⇒ Object
Updates the name of the table.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/table.rb', line 690

def name= new_name
  reload! unless resource_full?
  @gapi.update! friendly_name: new_name
  patch_gapi! :friendly_name
end
#param_types ⇒ Hash
The types of the fields in the table, obtained from its schema. Types use the same format as the optional query parameter types.
# File 'lib/google/cloud/bigquery/table.rb', line 1146

def param_types
  return nil if reference?
  schema.param_types
end
#policy ⇒ Policy
Gets the Cloud IAM access control policy for the table. The latest policy will be read from the service. See also #update_policy.
# File 'lib/google/cloud/bigquery/table.rb', line 1545

def policy
  raise ArgumentError, "Block argument not supported: Use #update_policy instead." if block_given?
  ensure_service!
  gapi = service.get_table_policy dataset_id, table_id
  Policy.from_gapi(gapi).freeze
end
#project_id ⇒ String
The ID of the Project containing this table.

# File 'lib/google/cloud/bigquery/table.rb', line 156

def project_id
  return reference.project_id if reference?
  @gapi.table_reference.project_id
end
#query ⇒ String?
The query that defines the view or materialized view. See #view? and #materialized_view?.
# File 'lib/google/cloud/bigquery/table.rb', line 1310

def query
  view? ? @gapi.view&.query : @gapi.materialized_view&.query
end
#query=(new_query) ⇒ Object
Updates the query that defines the view. (See #view?.) Not supported for materialized views.
This method sets the query using standard SQL. To specify legacy SQL or to use user-defined function resources for a view, use #set_query instead.
# File 'lib/google/cloud/bigquery/table.rb', line 1338

def query= new_query
  set_query new_query
end
#query_id(standard_sql: nil, legacy_sql: nil) ⇒ String
The value returned by #id, wrapped in backticks (Standard SQL) or square brackets (Legacy SQL) to accommodate project IDs containing dashes. Useful in queries.

# File 'lib/google/cloud/bigquery/table.rb', line 658

def query_id standard_sql: nil, legacy_sql: nil
  if Convert.resolve_legacy_sql standard_sql, legacy_sql
    "[#{project_id}:#{dataset_id}.#{table_id}]"
  else
    "`#{project_id}.#{dataset_id}.#{table_id}`"
  end
end
#query_legacy_sql? ⇒ Boolean
Checks if the view's query is using legacy SQL. See #view?.

# File 'lib/google/cloud/bigquery/table.rb', line 1407

def query_legacy_sql?
  return nil unless @gapi.view
  val = @gapi.view.use_legacy_sql
  return true if val.nil?
  val
end
#query_standard_sql? ⇒ Boolean
Checks if the view's query is using standard SQL. See #view?.

# File 'lib/google/cloud/bigquery/table.rb', line 1421

def query_standard_sql?
  return nil unless @gapi.view
  !query_legacy_sql?
end
#query_udfs ⇒ Array<String>?
The user-defined function resources used in the view's query. May be either a code resource to load from a Google Cloud Storage URI (gs://bucket/path), or an inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code. See User-Defined Functions.
See #view?.

# File 'lib/google/cloud/bigquery/table.rb', line 1441

def query_udfs
  return nil unless @gapi.view
  udfs_gapi = @gapi.view.user_defined_function_resources
  return [] if udfs_gapi.nil?
  Array(udfs_gapi).map { |udf| udf.inline_code || udf.resource_uri }
end
#range_partitioning? ⇒ Boolean?
Checks if the table is range partitioned. See Creating and using integer range partitioned tables.
# File 'lib/google/cloud/bigquery/table.rb', line 220

def range_partitioning?
  return nil if reference?
  !@gapi.range_partitioning.nil?
end
#range_partitioning_end ⇒ Integer?
The end of range partitioning, exclusive. See Creating and using integer range partitioned tables.
# File 'lib/google/cloud/bigquery/table.rb', line 281

def range_partitioning_end
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.range.end if range_partitioning?
end
#range_partitioning_field ⇒ String?
The field on which the table is range partitioned, if any. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64. See Creating and using integer range partitioned tables.

# File 'lib/google/cloud/bigquery/table.rb', line 235

def range_partitioning_field
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.field if range_partitioning?
end
#range_partitioning_interval ⇒ Integer?
The width of each interval. See Creating and using integer range partitioned tables.
# File 'lib/google/cloud/bigquery/table.rb', line 265

def range_partitioning_interval
  return nil if reference?
  ensure_full_data!
  return nil unless range_partitioning?
  @gapi.range_partitioning.range.interval
end
#range_partitioning_start ⇒ Integer?
The start of range partitioning, inclusive. See Creating and using integer range partitioned tables.
# File 'lib/google/cloud/bigquery/table.rb', line 250

def range_partitioning_start
  return nil if reference?
  ensure_full_data!
  @gapi.range_partitioning.range.start if range_partitioning?
end
#reference? ⇒ Boolean
Whether the table was created without retrieving the resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/table.rb', line 2909

def reference?
  @gapi.nil?
end
#refresh_interval_ms ⇒ Integer?
The maximum frequency in milliseconds at which the materialized view will be refreshed. See #materialized_view?.
# File 'lib/google/cloud/bigquery/table.rb', line 1501

def refresh_interval_ms
  @gapi.materialized_view&.refresh_interval_ms
end
#refresh_interval_ms=(new_refresh_interval_ms) ⇒ Object
Sets the maximum frequency at which the materialized view will be refreshed. See #materialized_view?.
# File 'lib/google/cloud/bigquery/table.rb', line 1513

def refresh_interval_ms= new_refresh_interval_ms
  @gapi.materialized_view = Google::Apis::BigqueryV2::MaterializedViewDefinition.new(
    refresh_interval_ms: new_refresh_interval_ms
  )
  patch_gapi! :materialized_view
end
#reload! ⇒ Google::Cloud::Bigquery::Table Also known as: refresh!
Reloads the table with current data from the BigQuery service.
# File 'lib/google/cloud/bigquery/table.rb', line 2851

def reload!
  ensure_service!
  @gapi = service.get_table dataset_id, table_id, metadata_view: metadata_view
  @reference = nil
  @exists = nil
  self
end
#require_partition_filter ⇒ Boolean?
Whether queries over this table must specify a partition filter that can be used for partition elimination. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/table.rb', line 479

def require_partition_filter
  return nil if reference?
  ensure_full_data!
  @gapi.require_partition_filter
end
#require_partition_filter=(new_require) ⇒ Object
Sets whether queries over this table require a partition filter. See Partitioned Tables.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/table.rb', line 508

def require_partition_filter= new_require
  reload! unless resource_full?
  @gapi.require_partition_filter = new_require
  patch_gapi! :require_partition_filter
end
#resource? ⇒ Boolean
Whether the table was created with a resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/table.rb', line 2932

def resource?
  !@gapi.nil?
end
#resource_full? ⇒ Boolean
Whether the table was created with a full resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/table.rb', line 2981

def resource_full?
  @gapi.is_a? Google::Apis::BigqueryV2::Table
end
#resource_partial? ⇒ Boolean
Whether the table was created with a partial resource representation from the BigQuery service by retrieval through Dataset#tables. See Tables: list response for the contents of the partial representation. Accessing any attribute outside of the partial representation will result in loading the full representation.
# File 'lib/google/cloud/bigquery/table.rb', line 2960

def resource_partial?
  @gapi.is_a? Google::Apis::BigqueryV2::TableList::Table
end
#restore(destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean
Restores the data from the table to another table using a synchronous method that blocks for a response. The source table type is SNAPSHOT and the destination table type is TABLE. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 2050

def restore destination_table, create: nil, write: nil, &block
  copy_job_with_operation_type destination_table,
                               create: create,
                               write: write,
                               operation_type: OperationType::RESTORE,
                               &block
end
#rows_count ⇒ Integer?
The number of rows in the table.
# File 'lib/google/cloud/bigquery/table.rb', line 781

def rows_count
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.num_rows
  rescue StandardError
    nil
  end
end
#schema(replace: false) {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema?
Returns the table's schema. If the table is not a view (see #view?), this method can also be used to set, replace, or add to the schema by passing a block. See Schema for available methods.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved.
# File 'lib/google/cloud/bigquery/table.rb', line 1070

def schema replace: false
  return nil if reference? && !block_given?
  reload! unless resource_full?
  schema_builder = Schema.from_gapi @gapi.schema
  if block_given?
    schema_builder = Schema.from_gapi if replace
    yield schema_builder
    if schema_builder.changed?
      @gapi.schema = schema_builder.to_gapi
      patch_gapi! :schema
    end
  end
  schema_builder.freeze
end
#set_query(query, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Object
Updates the query that defines the view. (See #view?.) Not supported for materialized views.
Allows setting of standard vs. legacy SQL and user-defined function resources.
# File 'lib/google/cloud/bigquery/table.rb', line 1389

def set_query query, standard_sql: nil, legacy_sql: nil, udfs: nil
  raise "Updating the query is not supported for Table type: #{@gapi.type}" unless view?
  use_legacy_sql = Convert.resolve_legacy_sql standard_sql, legacy_sql
  @gapi.view = Google::Apis::BigqueryV2::ViewDefinition.new(
    query: query,
    use_legacy_sql: use_legacy_sql,
    user_defined_function_resources: udfs_gapi(udfs)
  )
  patch_gapi! :view
end
#snapshot(destination_table) {|job| ... } ⇒ Boolean
Takes a snapshot of the data from the table to another table using a synchronous method that blocks for a response. The source table type is TABLE and the destination table type is SNAPSHOT. Timeouts and transient errors are generally handled as needed to complete the job. See also #copy_job.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method. If the table is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the table.
# File 'lib/google/cloud/bigquery/table.rb', line 1979

def snapshot destination_table, &block
  copy_job_with_operation_type destination_table,
                               operation_type: OperationType::SNAPSHOT,
                               &block
end
#snapshot? ⇒ Boolean?
Checks if the table's type is SNAPSHOT, indicating that the table represents a BigQuery table snapshot.

# File 'lib/google/cloud/bigquery/table.rb', line 878

def snapshot?
  return nil if reference?
  @gapi.type == "SNAPSHOT"
end
#snapshot_definition ⇒ Google::Apis::BigqueryV2::SnapshotDefinition?
Information about the base table and the snapshot time of the table.

# File 'lib/google/cloud/bigquery/table.rb', line 182

def snapshot_definition
  return nil if reference?
  @gapi.snapshot_definition
end
#table? ⇒ Boolean?
Checks if the table's type is TABLE.

# File 'lib/google/cloud/bigquery/table.rb', line 844

def table?
  return nil if reference?
  @gapi.type == "TABLE"
end
#table_id ⇒ String
A unique ID for this table.
# File 'lib/google/cloud/bigquery/table.rb', line 131

def table_id
  return reference.table_id if reference?
  @gapi.table_reference.table_id
end
#test_iam_permissions(*permissions) ⇒ Array<String>
Tests the specified permissions against the Cloud IAM access control policy.
# File 'lib/google/cloud/bigquery/table.rb', line 1614

def test_iam_permissions *permissions
  permissions = Array(permissions).flatten
  ensure_service!
  gapi = service.test_table_permissions dataset_id, table_id, permissions
  gapi.permissions.freeze
end
#time_partitioning? ⇒ Boolean?
Checks if the table is time partitioned. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/table.rb', line 297

def time_partitioning?
  return nil if reference?
  !@gapi.time_partitioning.nil?
end
#time_partitioning_expiration ⇒ Integer?
The expiration for the time partitions, if any, in seconds. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/table.rb', line 422

def time_partitioning_expiration
  return nil if reference?
  ensure_full_data!
  return nil unless time_partitioning?
  return nil if @gapi.time_partitioning.expiration_ms.nil?
  @gapi.time_partitioning.expiration_ms / 1_000
end
#time_partitioning_expiration=(expiration) ⇒ Object
Sets the time partition expiration for the table. See Partitioned Tables. The table must also be time partitioned.
If the table is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/table.rb', line 460

def time_partitioning_expiration= expiration
  reload! unless resource_full?
  expiration_ms = expiration * 1000 if expiration
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.expiration_ms = expiration_ms
  patch_gapi! :time_partitioning
end
#time_partitioning_field ⇒ String?
The field on which the table is time partitioned, if any. If not set, the destination table is time partitioned by pseudo column _PARTITIONTIME; if set, the table is time partitioned by this field. See Partitioned Tables.

# File 'lib/google/cloud/bigquery/table.rb', line 367

def time_partitioning_field
  return nil if reference?
  ensure_full_data!
  @gapi.time_partitioning.field if time_partitioning?
end
#time_partitioning_field=(field) ⇒ Object
Sets the field on which to time partition the table. If not set, the destination table is time partitioned by pseudo column _PARTITIONTIME; if set, the table is time partitioned by this field. See Partitioned Tables. The table must also be time partitioned.
You can only set the time partitioning field while creating a table. BigQuery does not allow you to change time partitioning on an existing table.

# File 'lib/google/cloud/bigquery/table.rb', line 405

def time_partitioning_field= field
  reload! unless resource_full?
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.field = field
  patch_gapi! :time_partitioning
end
#time_partitioning_type ⇒ String?
The period for which the table is time partitioned, if any. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/table.rb', line 313

def time_partitioning_type
  return nil if reference?
  ensure_full_data!
  @gapi.time_partitioning.type if time_partitioning?
end
#time_partitioning_type=(type) ⇒ Object
Sets the time partitioning type for the table. See Partitioned Tables.
The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.
You can only set time partitioning when creating a table as in the example below. BigQuery does not allow you to change time partitioning on an existing table.
# File 'lib/google/cloud/bigquery/table.rb', line 348

def time_partitioning_type= type
  reload! unless resource_full?
  @gapi.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.time_partitioning.type = type
  patch_gapi! :time_partitioning
end
#type ⇒ String?
The type of the table: TABLE, VIEW, SNAPSHOT, etc.

# File 'lib/google/cloud/bigquery/table.rb', line 169

def type
  return nil if reference?
  @gapi.type
end
#update_policy {|policy| ... } ⇒ Policy
Updates the Cloud IAM access control policy for the table. The latest policy will be read from the service. See also #policy.
# File 'lib/google/cloud/bigquery/table.rb', line 1578

def update_policy
  raise ArgumentError, "A block updating the policy must be provided" unless block_given?
  ensure_service!
  gapi = service.get_table_policy dataset_id, table_id
  policy = Policy.from_gapi gapi
  yield policy
  # TODO: Check for changes before calling RPC
  gapi = service.set_table_policy dataset_id, table_id, policy.to_gapi
  Policy.from_gapi(gapi).freeze
end
#view? ⇒ Boolean?
Checks if the table's type is VIEW, indicating that the table represents a BigQuery logical view. See Dataset#create_view.

# File 'lib/google/cloud/bigquery/table.rb', line 861

def view?
  return nil if reference?
  @gapi.type == "VIEW"
end