Class: Google::Cloud::Bigquery::External::DataSource
Inherits: Object
Defined in: lib/google/cloud/bigquery/external.rb
Overview
External::DataSource and its subclasses represent an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
The AVRO and Datastore Backup formats use DataSource. See CsvSource, JsonSource, SheetsSource, BigtableSource for the other formats.
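A minimal usage sketch, assuming a hypothetical Cloud Storage bucket and external table name: the data source is created with Project#external and passed to a query, so the data is read in place rather than loaded.

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Hypothetical location; AVRO needs no format-specific options,
# so Project#external returns an External::DataSource.
avro_url = "gs://my-bucket/path/to/*.avro"
avro_table = bigquery.external avro_url do |avro|
  avro.autodetect = true
end

data = bigquery.query "SELECT * FROM my_ext_table",
                      external: { my_ext_table: avro_table }
data.each { |row| puts row }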
Direct Known Subclasses: BigtableSource, CsvSource, JsonSource, SheetsSource
Instance Method Summary
- #autodetect ⇒ Boolean: Indicates if the schema and format options are detected automatically.
- #autodetect=(new_autodetect) ⇒ Object: Set whether to detect schema and format options automatically.
- #avro? ⇒ Boolean: Whether the data format is "AVRO".
- #backup? ⇒ Boolean: Whether the data format is "DATASTORE_BACKUP".
- #bigtable? ⇒ Boolean: Whether the data format is "BIGTABLE".
- #compression ⇒ String: The compression type of the data source.
- #compression=(new_compression) ⇒ Object: Set the compression type of the data source.
- #csv? ⇒ Boolean: Whether the data format is "CSV".
- #format ⇒ String: The data format.
- #hive_partitioning? ⇒ Boolean: Checks if hive partitioning options are set.
- #hive_partitioning_mode ⇒ String?: The mode of hive partitioning to use when reading data.
- #hive_partitioning_mode=(mode) ⇒ Object: Sets the mode of hive partitioning to use when reading data.
- #hive_partitioning_require_partition_filter=(require_partition_filter) ⇒ Object: Sets whether queries over the table using this external data source require that a partition filter usable for partition elimination be specified.
- #hive_partitioning_require_partition_filter? ⇒ Boolean: Whether queries over the table using this external data source require that a partition filter usable for partition elimination be specified.
- #hive_partitioning_source_uri_prefix ⇒ String?: The common prefix for all source URIs when hive partition detection is requested.
- #hive_partitioning_source_uri_prefix=(source_uri_prefix) ⇒ Object: Sets the common prefix for all source URIs when hive partition detection is requested.
- #ignore_unknown ⇒ Boolean: Indicates if BigQuery should allow extra values that are not represented in the table schema.
- #ignore_unknown=(new_ignore_unknown) ⇒ Object: Set whether BigQuery should allow extra values that are not represented in the table schema.
- #json? ⇒ Boolean: Whether the data format is "NEWLINE_DELIMITED_JSON".
- #max_bad_records ⇒ Integer: The maximum number of bad records that BigQuery can ignore when reading data.
- #max_bad_records=(new_max_bad_records) ⇒ Object: Set the maximum number of bad records that BigQuery can ignore when reading data.
- #orc? ⇒ Boolean: Whether the data format is "ORC".
- #parquet? ⇒ Boolean: Whether the data format is "PARQUET".
- #sheets? ⇒ Boolean: Whether the data format is "GOOGLE_SHEETS".
- #urls ⇒ Array<String>: The fully-qualified URIs that point to your data in Google Cloud.
Instance Method Details
#autodetect ⇒ Boolean
Indicates if the schema and format options are detected automatically.
# File 'lib/google/cloud/bigquery/external.rb', line 435
def autodetect
  @gapi.autodetect
end
#autodetect=(new_autodetect) ⇒ Object
Set whether to detect schema and format options automatically. Any option specified explicitly will be honored.
# File 'lib/google/cloud/bigquery/external.rb', line 457
def autodetect= new_autodetect
  frozen_check!
  @gapi.autodetect = new_autodetect
end
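As a sketch (the bucket path is hypothetical), the setter is normally called inside the configuration block yielded by Project#external, before the object is frozen:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

avro_url = "gs://my-bucket/path/to/*.avro" # hypothetical bucket
avro_table = bigquery.external avro_url do |avro|
  avro.autodetect = true # infer schema and format options from the data
end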
#avro? ⇒ Boolean
Whether the data format is "AVRO".
# File 'lib/google/cloud/bigquery/external.rb', line 300
def avro?
  @gapi.source_format == "AVRO"
end
#backup? ⇒ Boolean
Whether the data format is "DATASTORE_BACKUP".
# File 'lib/google/cloud/bigquery/external.rb', line 320
def backup?
  @gapi.source_format == "DATASTORE_BACKUP"
end
#bigtable? ⇒ Boolean
Whether the data format is "BIGTABLE".
# File 'lib/google/cloud/bigquery/external.rb', line 340
def bigtable?
  @gapi.source_format == "BIGTABLE"
end
#compression ⇒ String
The compression type of the data source. Possible values include "GZIP" and nil. The default value is nil. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.
# File 'lib/google/cloud/bigquery/external.rb', line 481
def compression
  @gapi.compression
end
#compression=(new_compression) ⇒ Object
Set the compression type of the data source. Possible values include "GZIP" and nil. The default value is nil. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.
# File 'lib/google/cloud/bigquery/external.rb', line 505
def compression= new_compression
  frozen_check!
  @gapi.compression = new_compression
end
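A configuration sketch, assuming hypothetically gzip-compressed CSV files in Cloud Storage:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://my-bucket/path/to/data.csv.gz" # hypothetical bucket
csv_table = bigquery.external csv_url, format: :csv do |csv|
  csv.compression = "GZIP" # the source files are gzip-compressed
end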
#csv? ⇒ Boolean
Whether the data format is "CSV".
# File 'lib/google/cloud/bigquery/external.rb', line 240
def csv?
  @gapi.source_format == "CSV"
end
#format ⇒ String
The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
# File 'lib/google/cloud/bigquery/external.rb', line 220
def format
  @gapi.source_format
end
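A short sketch of inspecting the format (the backup path is hypothetical, and the format is assumed here to be inferred from the .backup_info extension):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

backup_url = "gs://my-bucket/path/to/data.backup_info" # hypothetical
backup_table = bigquery.external backup_url

backup_table.format  #=> "DATASTORE_BACKUP"
backup_table.backup? #=> true
backup_table.csv?    #=> false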
#hive_partitioning? ⇒ Boolean
Checks if hive partitioning options are set.
Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: avro, csv, json, orc and parquet.
If your data is stored in ORC or Parquet on Cloud Storage, see Querying columnar formats on Cloud Storage.
# File 'lib/google/cloud/bigquery/external.rb', line 652
def hive_partitioning?
  !@gapi.hive_partitioning_options.nil?
end
#hive_partitioning_mode ⇒ String?
The mode of hive partitioning to use when reading data. The following modes are supported:
- AUTO: automatically infer partition key name(s) and type(s).
- STRINGS: automatically infer partition key name(s). All types are interpreted as strings.
- CUSTOM: partition key schema is encoded in the source URI prefix.
# File 'lib/google/cloud/bigquery/external.rb', line 683
def hive_partitioning_mode
  @gapi.hive_partitioning_options.mode if hive_partitioning?
end
#hive_partitioning_mode=(mode) ⇒ Object
Sets the mode of hive partitioning to use when reading data. The following modes are supported:
- auto: automatically infer partition key name(s) and type(s).
- strings: automatically infer partition key name(s). All types are interpreted as strings.
- custom: partition key schema is encoded in the source URI prefix.
Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: avro, csv, json, orc and parquet.
If your data is stored in ORC or Parquet on Cloud Storage, see Querying columnar formats on Cloud Storage.
See #format, #hive_partitioning_require_partition_filter= and #hive_partitioning_source_uri_prefix=.
# File 'lib/google/cloud/bigquery/external.rb', line 721
def hive_partitioning_mode= mode
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.mode = mode.to_s.upcase
end
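A combined configuration sketch, assuming a hypothetical bucket whose objects follow a hive-style key=value layout; it sets the mode along with the two related options referenced above:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://my-bucket/hive_data/*"          # hypothetical layout
source_uri_prefix = "gs://my-bucket/hive_data/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end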
#hive_partitioning_require_partition_filter=(require_partition_filter) ⇒ Object
Sets whether queries over the table using this external data source require that a partition filter usable for partition elimination be specified.
See #format, #hive_partitioning_mode= and #hive_partitioning_source_uri_prefix=.
# File 'lib/google/cloud/bigquery/external.rb', line 782
def hive_partitioning_require_partition_filter= require_partition_filter
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.require_partition_filter = require_partition_filter
end
#hive_partitioning_require_partition_filter? ⇒ Boolean
Whether queries over the table using this external data source require that a partition filter usable for partition elimination be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table.
# File 'lib/google/cloud/bigquery/external.rb', line 751
def hive_partitioning_require_partition_filter?
  return false unless hive_partitioning?
  !@gapi.hive_partitioning_options.require_partition_filter.nil?
end
#hive_partitioning_source_uri_prefix ⇒ String?
The common prefix for all source URIs when hive partition detection is requested. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:
gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro
When hive partitioning is requested with either AUTO or STRINGS mode, the common prefix can be either gs://bucket/path_to_table or gs://bucket/path_to_table/ (the trailing slash does not matter).
# File 'lib/google/cloud/bigquery/external.rb', line 820
def hive_partitioning_source_uri_prefix
  @gapi.hive_partitioning_options.source_uri_prefix if hive_partitioning?
end
#hive_partitioning_source_uri_prefix=(source_uri_prefix) ⇒ Object
Sets the common prefix for all source URIs when hive partition detection is requested. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:
gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro
When hive partitioning is requested with either AUTO or STRINGS mode, the common prefix can be either gs://bucket/path_to_table or gs://bucket/path_to_table/ (the trailing slash does not matter).
See #format, #hive_partitioning_mode= and #hive_partitioning_require_partition_filter=.
# File 'lib/google/cloud/bigquery/external.rb', line 859
def hive_partitioning_source_uri_prefix= source_uri_prefix
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.source_uri_prefix = source_uri_prefix
end
#ignore_unknown ⇒ Boolean
Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
In CSV, BigQuery treats trailing columns as extra values; in JSON, named values that don't match any column names. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.
# File 'lib/google/cloud/bigquery/external.rb', line 536
def ignore_unknown
  @gapi.ignore_unknown_values
end
#ignore_unknown=(new_ignore_unknown) ⇒ Object
Set whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
In CSV, BigQuery treats trailing columns as extra values; in JSON, named values that don't match any column names. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.
# File 'lib/google/cloud/bigquery/external.rb', line 566
def ignore_unknown= new_ignore_unknown
  frozen_check!
  @gapi.ignore_unknown_values = new_ignore_unknown
end
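A sketch with a hypothetical newline-delimited JSON source, tolerating fields that are absent from the table schema:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

json_url = "gs://my-bucket/path/to/data.json" # hypothetical bucket
json_table = bigquery.external json_url do |json|
  json.ignore_unknown = true # drop values with no matching schema column
end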
#json? ⇒ Boolean
Whether the data format is "NEWLINE_DELIMITED_JSON".
# File 'lib/google/cloud/bigquery/external.rb', line 260
def json?
  @gapi.source_format == "NEWLINE_DELIMITED_JSON"
end
#max_bad_records ⇒ Integer
The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
# File 'lib/google/cloud/bigquery/external.rb', line 593
def max_bad_records
  @gapi.max_bad_records
end
#max_bad_records=(new_max_bad_records) ⇒ Object
Set the maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
# File 'lib/google/cloud/bigquery/external.rb', line 619
def max_bad_records= new_max_bad_records
  frozen_check!
  @gapi.max_bad_records = new_max_bad_records
end
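A sketch with a hypothetical CSV source that tolerates a small number of unparsable rows:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://my-bucket/path/to/data.csv" # hypothetical bucket
csv_table = bigquery.external csv_url do |csv|
  csv.max_bad_records = 10 # fail the query only after ten bad records
end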
#orc? ⇒ Boolean
Whether the data format is "ORC".
# File 'lib/google/cloud/bigquery/external.rb', line 363
def orc?
  @gapi.source_format == "ORC"
end
#parquet? ⇒ Boolean
Whether the data format is "PARQUET".
# File 'lib/google/cloud/bigquery/external.rb', line 386
def parquet?
  @gapi.source_format == "PARQUET"
end
#sheets? ⇒ Boolean
Whether the data format is "GOOGLE_SHEETS".
# File 'lib/google/cloud/bigquery/external.rb', line 280
def sheets?
  @gapi.source_format == "GOOGLE_SHEETS"
end
#urls ⇒ Array<String>
The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: each URI can contain one '*' wildcard character, and it must come after the bucket name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: exactly one URI can be specified, and it must end with '.backup_info'. Also, the '*' wildcard character is not allowed.
# File 'lib/google/cloud/bigquery/external.rb', line 413
def urls
  @gapi.source_uris
end
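A reading sketch (hypothetical bucket): #urls simply echoes back the source URIs the object was created with:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_table = bigquery.external "gs://my-bucket/path/to/*.csv" # hypothetical
csv_table.urls #=> ["gs://my-bucket/path/to/*.csv"]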