Class: Google::Cloud::Bigquery::Dataset
- Inherits: Object
- Defined in:
- lib/google/cloud/bigquery/dataset.rb,
lib/google/cloud/bigquery/dataset/tag.rb,
lib/google/cloud/bigquery/dataset/list.rb,
lib/google/cloud/bigquery/dataset/access.rb
Overview
Dataset
Represents a Dataset. A dataset is a grouping mechanism that holds zero or more tables. Datasets are the lowest level unit of access control; you cannot control access at the table level. A dataset is contained within a specific project.
Direct Known Subclasses: Updater
Defined Under Namespace
Classes: Access, List, Tag, Updater
Attributes
-
#access {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset::Access
Retrieves the access rules for a Dataset.
-
#api_url ⇒ String?
A URL that can be used to access the dataset using the REST API.
-
#created_at ⇒ Time?
The time when this dataset was created.
-
#dataset_id ⇒ String
A unique ID for this dataset, without the project name.
-
#default_encryption ⇒ EncryptionConfiguration?
The EncryptionConfiguration object that represents the default encryption method for all tables and models in the dataset.
-
#default_encryption=(value) ⇒ Object
Set the EncryptionConfiguration object that represents the default encryption method for all tables and models in the dataset.
-
#default_expiration ⇒ Integer?
The default lifetime of all tables in the dataset, in milliseconds.
-
#default_expiration=(new_default_expiration) ⇒ Object
Updates the default lifetime of all tables in the dataset, in milliseconds.
-
#description ⇒ String?
A user-friendly description of the dataset.
-
#description=(new_description) ⇒ Object
Updates the user-friendly description of the dataset.
-
#etag ⇒ String?
The ETag hash of the dataset.
-
#labels ⇒ Hash<String, String>?
A hash of user-provided labels associated with this dataset.
-
#labels=(labels) ⇒ Object
Updates the hash of user-provided labels associated with this dataset.
-
#location ⇒ String?
The geographic location where the dataset should reside.
-
#modified_at ⇒ Time?
The date when this dataset or any of its tables was last modified.
-
#name ⇒ String?
A descriptive name for the dataset.
-
#name=(new_name) ⇒ Object
Updates the descriptive name for the dataset.
-
#project_id ⇒ String
The ID of the project containing this dataset.
-
#storage_billing_model ⇒ String?
Gets the Storage Billing Model for the dataset.
-
#storage_billing_model=(value) ⇒ Object
Sets the Storage Billing Model for the dataset.
-
#tags ⇒ Google::Cloud::Bigquery::Dataset::Tag
Retrieves the tags associated with this dataset.
Lifecycle
-
#delete(force: nil) ⇒ Boolean
Permanently deletes the dataset.
Table
-
#create_materialized_view(table_id, query, name: nil, description: nil, enable_refresh: nil, refresh_interval_ms: nil) ⇒ Google::Cloud::Bigquery::Table
Creates a new materialized view.
-
#create_table(table_id, name: nil, description: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::Table
Creates a new table.
-
#create_view(table_id, query, name: nil, description: nil, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::Table
Creates a new view, which is a virtual table defined by the given SQL query.
-
#table(table_id, skip_lookup: nil, view: nil) ⇒ Google::Cloud::Bigquery::Table?
Retrieves an existing table by ID.
-
#tables(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Table>
Retrieves the list of tables belonging to the dataset.
Model
-
#model(model_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Model?
Retrieves an existing model by ID.
-
#models(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Model>
Retrieves the list of models belonging to the dataset.
Routine
-
#create_routine(routine_id) {|routine| ... } ⇒ Google::Cloud::Bigquery::Routine
Creates a new routine.
-
#routine(routine_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Routine?
Retrieves an existing routine by ID.
-
#routines(token: nil, max: nil, filter: nil) ⇒ Array<Google::Cloud::Bigquery::Routine>
Retrieves the list of routines belonging to the dataset.
Data
-
#build_access_entry(target_types: nil) ⇒ Google::Apis::BigqueryV2::DatasetAccessEntry
Builds a Google::Apis::BigqueryV2::DatasetAccessEntry object from this dataset.
-
#exists?(force: false) ⇒ Boolean
Determines whether the dataset exists in the BigQuery service.
-
#external(url, format: nil) {|ext| ... } ⇒ External::DataSource
Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery.
-
#insert(table_id, rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil, autocreate: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::InsertResponse
Inserts data into the given table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
-
#insert_async(table_id, skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10, threads: 4, view: nil) {|response| ... } ⇒ Table::AsyncInserter
Create an asynchronous inserter object used to insert rows in batches.
-
#load(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil, session_id: nil) {|updater| ... } ⇒ Boolean
Loads data into the provided destination table using a synchronous method that blocks for a response.
-
#load_job(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil, session_id: nil) {|updater| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the provided destination table using an asynchronous method.
-
#query(query, params: nil, types: nil, external: nil, max: nil, cache: true, standard_sql: nil, legacy_sql: nil, session_id: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
Queries data and waits for the results.
-
#query_job(query, params: nil, types: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, dryrun: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil, create_session: nil, session_id: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
Queries data by creating a query job.
-
#reference? ⇒ Boolean
Whether the dataset was created without retrieving the resource representation from the BigQuery service.
-
#reload! ⇒ Google::Cloud::Bigquery::Dataset
(also: #refresh!)
Reloads the dataset with current data from the BigQuery service.
-
#resource? ⇒ Boolean
Whether the dataset was created with a resource representation from the BigQuery service.
-
#resource_full? ⇒ Boolean
Whether the dataset was created with a full resource representation from the BigQuery service.
-
#resource_partial? ⇒ Boolean
Whether the dataset was created with a partial resource representation from the BigQuery service by retrieval through Project#datasets.
Instance Method Details
#access {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset::Access
Retrieves the access rules for a Dataset. The rules can be updated by passing a block; see Access for all the methods available.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 505

def access
  ensure_full_data!
  reload! unless resource_full?
  access_builder = Access.from_gapi @gapi
  if block_given?
    yield access_builder
    if access_builder.changed?
      @gapi.update! access: access_builder.to_gapi
      patch_gapi! :access
    end
  end
  access_builder.freeze
end
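Example (a minimal sketch; the dataset ID and email address are hypothetical, and Access#add_reader_user is part of the documented Access API):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

# Update rules by passing a block; changes are saved when the block returns.
dataset.access do |acl|
  acl.add_reader_user "reader@example.com"
end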
#api_url ⇒ String?
A URL that can be used to access the dataset using the REST API.
# File 'lib/google/cloud/bigquery/dataset.rb', line 158

def api_url
  return nil if reference?
  ensure_full_data!
  @gapi.self_link
end
#build_access_entry(target_types: nil) ⇒ Google::Apis::BigqueryV2::DatasetAccessEntry
Builds a Google::Apis::BigqueryV2::DatasetAccessEntry object from this dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2837

def build_access_entry target_types: nil
  params = {
    dataset: dataset_ref,
    target_types: target_types
  }.compact
  Google::Apis::BigqueryV2::DatasetAccessEntry.new(**params)
end
#create_materialized_view(table_id, query, name: nil, description: nil, enable_refresh: nil, refresh_interval_ms: nil) ⇒ Google::Cloud::Bigquery::Table
Creates a new materialized view.
Materialized views are precomputed views that periodically cache results of a query for increased performance and efficiency. BigQuery leverages precomputed results from materialized views and whenever possible reads only delta changes from the base table to compute up-to-date results.
Queries that use materialized views are generally faster and consume fewer resources than queries that retrieve the same data only from the base table. Materialized views can significantly boost the performance of workloads characterized by common, repeated queries.
For logical views, see #create_view.
# File 'lib/google/cloud/bigquery/dataset.rb', line 827

def create_materialized_view table_id, query, name: nil, description: nil, enable_refresh: nil,
                             refresh_interval_ms: nil
  new_view_opts = {
    table_reference: Google::Apis::BigqueryV2::TableReference.new(
      project_id: project_id,
      dataset_id: dataset_id,
      table_id: table_id
    ),
    friendly_name: name,
    description: description,
    materialized_view: Google::Apis::BigqueryV2::MaterializedViewDefinition.new(
      enable_refresh: enable_refresh,
      query: query,
      refresh_interval_ms: refresh_interval_ms
    )
  }.compact

  new_view = Google::Apis::BigqueryV2::Table.new(**new_view_opts)
  gapi = service.insert_table dataset_id, new_view
  Table.from_gapi gapi, service
end
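Example (a minimal sketch; the project, dataset, and view names are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

materialized_view = dataset.create_materialized_view "my_materialized_view",
                                                     "SELECT name, state " \
                                                     "FROM `my_project.my_dataset.my_table`"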
#create_routine(routine_id) {|routine| ... } ⇒ Google::Cloud::Bigquery::Routine
Creates a new routine. The following attributes may be set in the yielded block: Routine::Updater#routine_type=, Routine::Updater#language=, Routine::Updater#arguments=, Routine::Updater#return_type=, Routine::Updater#imported_libraries=, Routine::Updater#body=, and Routine::Updater#description=.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1099

def create_routine routine_id
  ensure_service!
  new_tb = Google::Apis::BigqueryV2::Routine.new(
    routine_reference: Google::Apis::BigqueryV2::RoutineReference.new(
      project_id: project_id, dataset_id: dataset_id, routine_id: routine_id
    )
  )
  updater = Routine::Updater.new new_tb

  yield updater if block_given?

  gapi = service.insert_routine dataset_id, updater.to_gapi
  Routine.from_gapi gapi, service
end
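Example of creating a SQL scalar function (a minimal sketch; the routine ID, argument, and body are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

routine = dataset.create_routine "my_routine" do |r|
  r.routine_type = "SCALAR_FUNCTION"
  r.language = "SQL"
  r.arguments = [
    Google::Cloud::Bigquery::Argument.new(name: "x", data_type: "INT64")
  ]
  r.body = "x * 3"
  r.description = "My routine description"
end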
#create_table(table_id, name: nil, description: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::Table
Creates a new table. If you are adapting existing code that was written for the REST API, you can pass the table's schema as a hash (see the example).
# File 'lib/google/cloud/bigquery/dataset.rb', line 666

def create_table table_id, name: nil, description: nil
  ensure_service!
  new_tb = Google::Apis::BigqueryV2::Table.new(
    table_reference: Google::Apis::BigqueryV2::TableReference.new(
      project_id: project_id, dataset_id: dataset_id, table_id: table_id
    )
  )
  updater = Table::Updater.new(new_tb).tap do |tb|
    tb.name = name unless name.nil?
    tb.description = description unless description.nil?
  end

  yield updater if block_given?

  gapi = service.insert_table dataset_id, updater.to_gapi
  Table.from_gapi gapi, service
end
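Example that also defines a schema in the yielded updater (a minimal sketch; the table ID and field names are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

table = dataset.create_table "my_table" do |t|
  t.name = "My Table"
  t.description = "A description of my table."
  t.schema do |schema|
    schema.string "first_name", mode: :required
    schema.integer "age", mode: :required
  end
end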
#create_view(table_id, query, name: nil, description: nil, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::Table
Creates a new view, which is a virtual table defined by the given SQL query.
With BigQuery's logical views, the query that defines the view is re-executed every time the view is queried. Queries are billed according to the total amount of data in all table fields referenced directly or indirectly by the top-level query. (See Table#view? and Table#query.)
For materialized views, see #create_materialized_view.
# File 'lib/google/cloud/bigquery/dataset.rb', line 751

def create_view table_id, query, name: nil, description: nil, standard_sql: nil, legacy_sql: nil,
                udfs: nil
  use_legacy_sql = Convert.resolve_legacy_sql standard_sql, legacy_sql
  new_view_opts = {
    table_reference: Google::Apis::BigqueryV2::TableReference.new(
      project_id: project_id,
      dataset_id: dataset_id,
      table_id: table_id
    ),
    friendly_name: name,
    description: description,
    view: Google::Apis::BigqueryV2::ViewDefinition.new(
      query: query,
      use_legacy_sql: use_legacy_sql,
      user_defined_function_resources: udfs_gapi(udfs)
    )
  }.compact

  new_view = Google::Apis::BigqueryV2::Table.new(**new_view_opts)
  gapi = service.insert_table dataset_id, new_view
  Table.from_gapi gapi, service
end
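Example (a minimal sketch; the project, dataset, view, and table names are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

view = dataset.create_view "my_view",
                           "SELECT name, age FROM `my_project.my_dataset.my_table`"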
#created_at ⇒ Time?
The time when this dataset was created.
# File 'lib/google/cloud/bigquery/dataset.rb', line 241

def created_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.creation_time
end
#dataset_id ⇒ String
A unique ID for this dataset, without the project name.
# File 'lib/google/cloud/bigquery/dataset.rb', line 78

def dataset_id
  return reference.dataset_id if reference?
  @gapi.dataset_reference.dataset_id
end
#default_encryption ⇒ EncryptionConfiguration?
The EncryptionConfiguration object that represents the default encryption method for all tables and models in the dataset. Once this property is set, all newly-created partitioned tables and models in the dataset will have their encryption set to this value, unless the table creation request (or query) overrides it.
Present only if this dataset is using custom default encryption.
# File 'lib/google/cloud/bigquery/dataset.rb', line 373

def default_encryption
  return nil if reference?
  ensure_full_data!
  return nil if @gapi.default_encryption_configuration.nil?
  EncryptionConfiguration.from_gapi(@gapi.default_encryption_configuration).freeze
end
#default_encryption=(value) ⇒ Object
Sets the EncryptionConfiguration object that represents the default encryption method for all tables and models in the dataset. Once this property is set, all newly-created partitioned tables and models in the dataset will have their encryption set to this value, unless the table creation request (or query) overrides it.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 409

def default_encryption= value
  ensure_full_data!
  @gapi.default_encryption_configuration = value.to_gapi
  patch_gapi! :default_encryption_configuration
end
#default_expiration ⇒ Integer?
The default lifetime of all tables in the dataset, in milliseconds.
# File 'lib/google/cloud/bigquery/dataset.rb', line 204

def default_expiration
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.default_table_expiration_ms
  rescue StandardError
    nil
  end
end
#default_expiration=(new_default_expiration) ⇒ Object
Updates the default lifetime of all tables in the dataset, in milliseconds.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 227

def default_expiration= new_default_expiration
  reload! unless resource_full?
  @gapi.update! default_table_expiration_ms: new_default_expiration
  patch_gapi! :default_table_expiration_ms
end
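For example, to default new tables to expire after one hour (a sketch; note the value is in milliseconds):

dataset.default_expiration = 3_600_000 # 1 hour in milliseconds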
#delete(force: nil) ⇒ Boolean
Permanently deletes the dataset. The dataset must be empty before it can be deleted unless the force option is set to true.
# File 'lib/google/cloud/bigquery/dataset.rb', line 554

def delete force: nil
  ensure_service!
  service.delete_dataset dataset_id, force
  # Set flag for #exists?
  @exists = false
  true
end
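For example (a sketch; with force: true, any tables the dataset contains are also deleted):

dataset.delete force: true # returns true on success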
#description ⇒ String?
A user-friendly description of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 172

def description
  return nil if reference?
  ensure_full_data!
  @gapi.description
end
#description=(new_description) ⇒ Object
Updates the user-friendly description of the dataset.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 189

def description= new_description
  reload! unless resource_full?
  @gapi.update! description: new_description
  patch_gapi! :description
end
#etag ⇒ String?
The ETag hash of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 144

def etag
  return nil if reference?
  ensure_full_data!
  @gapi.etag
end
#exists?(force: false) ⇒ Boolean
Determines whether the dataset exists in the BigQuery service. The result is cached locally. To refresh state, set force to true.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2474

def exists? force: false
  return gapi_exists? if force
  # If we have a memoized value, return it
  return @exists unless @exists.nil?
  # Always true if we have a gapi object
  return true if resource?
  gapi_exists?
end
#external(url, format: nil) {|ext| ... } ⇒ External::DataSource
Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1930

def external url, format: nil
  ext = External.from_urls url, format
  yield ext if block_given?
  ext
end
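Example that queries a CSV file in Cloud Storage without loading it (a minimal sketch; the bucket path and the my_ext_table alias are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

csv_url = "gs://bucket/path/to/data.csv"
csv_table = dataset.external csv_url do |csv|
  csv.autodetect = true
  csv.skip_leading_rows = 1
end

data = dataset.query "SELECT * FROM my_ext_table",
                     external: { my_ext_table: csv_table }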
#insert(table_id, rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil, autocreate: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::InsertResponse
Inserts data into the given table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
Simple Ruby types are generally accepted per JSON rules, along with the following support for BigQuery's more complex types:
| BigQuery | Ruby | Notes |
|---|---|---|
| NUMERIC | BigDecimal | BigDecimal values will be rounded to scale 9. |
| BIGNUMERIC | String | Pass as String to avoid rounding to scale 9. |
| DATETIME | DateTime | DATETIME does not support time zone. |
| DATE | Date | |
| GEOGRAPHY | String | |
| JSON | String (stringified JSON) | String, as JSON does not have a schema to verify. |
| TIMESTAMP | Time | |
| TIME | Google::Cloud::BigQuery::Time | |
| BYTES | File, IO, StringIO, or similar | |
| ARRAY | Array | Nested arrays and nil values are not supported. |
| STRUCT | Hash | Hash keys may be strings or symbols. |
Because BigQuery's streaming API is designed for high insertion rates, modifications to the underlying table metadata are eventually consistent when interacting with the streaming system. In most cases metadata changes are propagated within minutes, but during this period API responses may reflect the inconsistent state of the table.
The value :skip can be provided to skip the generation of IDs for all rows, or to skip the generation of an ID for a specific row in the array.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2715

def insert table_id, rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil,
           autocreate: nil, &block
  rows = [rows] if rows.is_a? Hash
  raise ArgumentError, "No rows provided" if rows.empty?

  insert_ids = Array.new(rows.count) { :skip } if insert_ids == :skip
  insert_ids = Array insert_ids
  if insert_ids.count.positive? && insert_ids.count != rows.count
    raise ArgumentError, "insert_ids must be the same size as rows"
  end

  if autocreate
    insert_data_with_autocreate table_id, rows, skip_invalid: skip_invalid,
                                                ignore_unknown: ignore_unknown,
                                                insert_ids: insert_ids, &block
  else
    insert_data table_id, rows, skip_invalid: skip_invalid, ignore_unknown: ignore_unknown,
                                insert_ids: insert_ids
  end
end
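Example (a minimal sketch; the table ID and row contents are hypothetical, and the hash keys must match the table schema):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob", "age" => 32 }
]
response = dataset.insert "my_table", rows
puts "inserted" if response.success?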
#insert_async(table_id, skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000, max_rows: 500, interval: 10, threads: 4, view: nil) {|response| ... } ⇒ Table::AsyncInserter
Create an asynchronous inserter object used to insert rows in batches.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2808

def insert_async table_id, skip_invalid: nil, ignore_unknown: nil, max_bytes: 10_000_000,
                 max_rows: 500, interval: 10, threads: 4, view: nil, &block
  ensure_service!

  # Get table, don't use Dataset#table which handles NotFoundError
  gapi = service.get_table dataset_id, table_id, metadata_view: view
  table = Table.from_gapi gapi, service, metadata_view: view
  # Get the AsyncInserter from the table
  table.insert_async skip_invalid: skip_invalid, ignore_unknown: ignore_unknown,
                     max_bytes: max_bytes, max_rows: max_rows, interval: interval,
                     threads: threads, &block
end
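Example (a minimal sketch; the block receives a Table::AsyncInserter::Result for each batch):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

inserter = dataset.insert_async "my_table" do |result|
  if result.error?
    puts result.error
  else
    puts "inserted #{result.insert_count} rows with #{result.error_count} errors"
  end
end

inserter.insert [{ "first_name" => "Alice", "age" => 21 }]

# Flush pending rows and shut down the inserter's background threads.
inserter.stop.wait!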
#labels ⇒ Hash<String, String>?
A hash of user-provided labels associated with this dataset. Labels are used to organize and group datasets. See Using Labels.
The returned hash is frozen and changes are not allowed. Use #labels= to replace the entire hash.
# File 'lib/google/cloud/bigquery/dataset.rb', line 297

def labels
  return nil if reference?
  m = @gapi.labels
  m = m.to_h if m.respond_to? :to_h
  m.dup.freeze
end
#labels=(labels) ⇒ Object
Updates the hash of user-provided labels associated with this dataset. Labels are used to organize and group datasets. See Using Labels.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 340

def labels= labels
  reload! unless resource_full?
  @gapi.labels = labels
  patch_gapi! :labels
end
#load(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil, session_id: nil) {|updater| ... } ⇒ Boolean
Loads data into the provided destination table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #load_job.
For the source of the data, you can pass a google-cloud storage file path or a google-cloud-storage File instance. Or, you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2416

def load table_id, files, format: nil, create: nil, write: nil, projection_fields: nil,
         jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil,
         ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil,
         autodetect: nil, null_marker: nil, session_id: nil, &block
  job = load_job table_id, files,
                 format: format, create: create, write: write,
                 projection_fields: projection_fields, jagged_rows: jagged_rows,
                 quoted_newlines: quoted_newlines, encoding: encoding, delimiter: delimiter,
                 ignore_unknown: ignore_unknown, max_bad_records: max_bad_records, quote: quote,
                 skip_leading: skip_leading, schema: schema, autodetect: autodetect,
                 null_marker: null_marker, session_id: session_id, &block

  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
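Example loading from a Cloud Storage URI (a minimal sketch; the bucket, table, and fields are hypothetical, and the yielded updater accepts schema definitions):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

dataset.load "my_new_table", "gs://my-bucket/file-name.csv" do |schema|
  schema.string "first_name", mode: :required
  schema.integer "age", mode: :required
end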
#load_job(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil, session_id: nil) {|updater| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the provided destination table using an asynchronous method. In this method, a LoadJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #load.
For the source of the data, you can pass a google-cloud storage file path or a google-cloud-storage File instance. Or, you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2185

def load_job table_id, files, format: nil, create: nil, write: nil, projection_fields: nil,
             jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil,
             ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil,
             schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil,
             null_marker: nil, dryrun: nil, create_session: nil, session_id: nil
  ensure_service!

  updater = load_job_updater table_id,
                             format: format, create: create, write: write,
                             projection_fields: projection_fields, jagged_rows: jagged_rows,
                             quoted_newlines: quoted_newlines, encoding: encoding,
                             delimiter: delimiter, ignore_unknown: ignore_unknown,
                             max_bad_records: max_bad_records, quote: quote,
                             skip_leading: skip_leading, dryrun: dryrun, schema: schema,
                             job_id: job_id, prefix: prefix, labels: labels,
                             autodetect: autodetect, null_marker: null_marker,
                             create_session: create_session, session_id: session_id

  yield updater if block_given?

  load_local_or_uri files, updater
end
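Example (a minimal sketch; the bucket, table, and fields are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

load_job = dataset.load_job "my_new_table", "gs://my-bucket/file-name.csv" do |schema|
  schema.string "first_name", mode: :required
  schema.integer "age", mode: :required
end

load_job.wait_until_done!
load_job.done? # => true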
#location ⇒ String?
The geographic location where the dataset should reside. Possible values include EU and US. The default value is US.
# File 'lib/google/cloud/bigquery/dataset.rb', line 270

def location
  return nil if reference?
  @gapi.location
end
#model(model_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Model?
Retrieves an existing model by ID.
# File 'lib/google/cloud/bigquery/dataset.rb', line 980

def model model_id, skip_lookup: nil
  ensure_service!
  return Model.new_reference project_id, dataset_id, model_id, service if skip_lookup
  gapi = service.get_model dataset_id, model_id
  Model.from_gapi_json gapi, service
rescue Google::Cloud::NotFoundError
  nil
end
#models(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Model>
Retrieves the list of models belonging to the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1023

def models token: nil, max: nil
  ensure_service!
  gapi = service.list_models dataset_id, token: token, max: max
  Model::List.from_gapi gapi, service, dataset_id, max
end
#modified_at ⇒ Time?
The date when this dataset or any of its tables was last modified.
# File 'lib/google/cloud/bigquery/dataset.rb', line 255

def modified_at
  return nil if reference?
  ensure_full_data!
  Convert.millis_to_time @gapi.last_modified_time
end
#name ⇒ String?
A descriptive name for the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 113

def name
  return nil if reference?
  @gapi.friendly_name
end
#name=(new_name) ⇒ Object
Updates the descriptive name for the dataset.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 130

def name= new_name
  reload! unless resource_full?
  @gapi.update! friendly_name: new_name
  patch_gapi! :friendly_name
end
#project_id ⇒ String
The ID of the project containing this dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 90

def project_id
  return reference.project_id if reference?
  @gapi.dataset_reference.project_id
end
#query(query, params: nil, types: nil, external: nil, max: nil, cache: true, standard_sql: nil, legacy_sql: nil, session_id: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
Queries data and waits for the results. In this method, a QueryJob is created and its results are saved to a temporary table, then read from the table. Timeouts and transient errors are generally handled as needed to complete the query. When used for executing DDL/DML statements, this method does not return row data.
Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1858

def query query, params: nil, types: nil, external: nil, max: nil, cache: true,
          standard_sql: nil, legacy_sql: nil, session_id: nil, &block
  job = query_job query, params: params, types: types, external: external, cache: cache,
                  standard_sql: standard_sql, legacy_sql: legacy_sql, session_id: session_id,
                  &block
  job.wait_until_done!
  ensure_job_succeeded! job

  job.data max: max
end
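Example using a named query parameter (a minimal sketch; the table and column names are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

# The dataset is used as the default dataset, so the table
# name does not need to be qualified.
data = dataset.query "SELECT name FROM my_table WHERE id = @id",
                     params: { id: 1 }

data.each do |row|
  puts row[:name]
end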
#query_job(query, params: nil, types: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, dryrun: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil, create_session: nil, session_id: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
Queries data by creating a query job.
Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1542

def query_job query, params: nil, types: nil, external: nil, priority: "INTERACTIVE",
              cache: true, table: nil, create: nil, write: nil, dryrun: nil, standard_sql: nil,
              legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil,
              maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil,
              create_session: nil, session_id: nil
  ensure_service!
  options = {
    params: params, types: types, external: external, priority: priority, cache: cache,
    table: table, create: create, write: write, dryrun: dryrun, standard_sql: standard_sql,
    legacy_sql: legacy_sql, large_results: large_results, flatten: flatten,
    maximum_billing_tier: maximum_billing_tier, maximum_bytes_billed: maximum_bytes_billed,
    job_id: job_id, prefix: prefix, labels: labels, udfs: udfs,
    create_session: create_session, session_id: session_id
  }

  updater = QueryJob::Updater.from_options service, query, options
  updater.dataset = self
  updater.location = location if location # may be dataset reference

  yield updater if block_given?

  gapi = service.query_job updater.to_gapi
  Job.from_gapi gapi, service
end
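Example (a minimal sketch; the table and column names are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = dataset.query_job "SELECT name FROM my_table"

job.wait_until_done!
if job.failed?
  puts job.error
else
  job.data.each { |row| puts row[:name] }
end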
#reference? ⇒ Boolean
Whether the dataset was created without retrieving the resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2501

def reference?
  @gapi.nil?
end
#reload! ⇒ Google::Cloud::Bigquery::Dataset Also known as: refresh!
Reloads the dataset with current data from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2445

def reload!
  ensure_service!
  @gapi = service.get_dataset dataset_id
  @reference = nil
  @exists = nil
  self
end
#resource? ⇒ Boolean
Whether the dataset was created with a resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2523

def resource?
  !@gapi.nil?
end
#resource_full? ⇒ Boolean
Whether the dataset was created with a full resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2570

def resource_full?
  @gapi.is_a? Google::Apis::BigqueryV2::Dataset
end
#resource_partial? ⇒ Boolean
Whether the dataset was created with a partial resource representation from the BigQuery service by retrieval through Project#datasets. See Datasets: list response for the contents of the partial representation. Accessing any attribute outside of the partial representation will result in loading the full representation.
# File 'lib/google/cloud/bigquery/dataset.rb', line 2550

def resource_partial?
  @gapi.is_a? Google::Apis::BigqueryV2::DatasetList::Dataset
end
#routine(routine_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Routine?
Retrieves an existing routine by ID.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1146

def routine routine_id, skip_lookup: nil
  ensure_service!
  return Routine.new_reference project_id, dataset_id, routine_id, service if skip_lookup
  gapi = service.get_routine dataset_id, routine_id
  Routine.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end
#routines(token: nil, max: nil, filter: nil) ⇒ Array<Google::Cloud::Bigquery::Routine>
Retrieves the list of routines belonging to the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1191

def routines token: nil, max: nil, filter: nil
  ensure_service!
  gapi = service.list_routines dataset_id, token: token, max: max, filter: filter
  Routine::List.from_gapi gapi, service, dataset_id, max, filter: filter
end
#storage_billing_model ⇒ String?
Gets the Storage Billing Model for the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 436

def storage_billing_model
  return nil if reference?
  ensure_full_data!
  @gapi.storage_billing_model
end
#storage_billing_model=(value) ⇒ Object
Sets the Storage Billing Model for the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 459

def storage_billing_model= value
  ensure_full_data!
  @gapi.storage_billing_model = value
  patch_gapi! :storage_billing_model
end
#table(table_id, skip_lookup: nil, view: nil) ⇒ Google::Cloud::Bigquery::Table?
Retrieves an existing table by ID.
# File 'lib/google/cloud/bigquery/dataset.rb', line 899

def table table_id, skip_lookup: nil, view: nil
  ensure_service!
  return Table.new_reference project_id, dataset_id, table_id, service if skip_lookup
  gapi = service.get_table dataset_id, table_id, metadata_view: view
  Table.from_gapi gapi, service, metadata_view: view
rescue Google::Cloud::NotFoundError
  nil
end
#tables(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Table>
Retrieves the list of tables belonging to the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 942

def tables token: nil, max: nil
  ensure_service!
  gapi = service.list_tables dataset_id, token: token, max: max
  Table::List.from_gapi gapi, service, dataset_id, max
end
#tags ⇒ Google::Cloud::Bigquery::Dataset::Tag
Retrieves the tags associated with this dataset. Tag keys are globally unique and managed via the Resource Manager API; see the Resource Manager tags documentation for more information.
# File 'lib/google/cloud/bigquery/dataset.rb', line 528

def tags
  ensure_full_data!
  return nil if @gapi.tags.nil?
  @gapi.tags.map { |gapi| Tag.from_gapi gapi }
end