Class: Google::Cloud::Bigquery::External::DataSource

Inherits:
Object
Defined in:
lib/google/cloud/bigquery/external.rb

Overview

DataSource

External::DataSource and its subclasses represent an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.

The AVRO, ORC, Parquet, and Datastore Backup formats use DataSource directly. See CsvSource, JsonSource, SheetsSource, and BigtableSource for the other formats.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

avro_url = "gs://bucket/path/to/data.avro"
avro_table = bigquery.external avro_url do |avro|
  avro.autodetect = true
end

data = bigquery.query "SELECT * FROM my_ext_table",
                      external: { my_ext_table: avro_table }

# Iterate over the first page of results
data.each do |row|
  puts row[:name]
end
# Retrieve the next page of results
data = data.next if data.next?
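
Querying a Datastore backup uses the same pattern (a sketch; the backup path is hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Hypothetical backup_info file exported from Cloud Datastore.
backup_url = "gs://bucket/path/to/data.backup_info"
backup_table = bigquery.external backup_url

backup_table.format #=> "DATASTORE_BACKUP"

data = bigquery.query "SELECT * FROM my_ext_table",
                      external: { my_ext_table: backup_table }

data.each do |row|
  puts row[:name]
end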

Hive partitioning options:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Direct Known Subclasses

BigtableSource, CsvSource, JsonSource, SheetsSource

Instance Method Summary

Instance Method Details

#autodetect ⇒ Boolean

Indicates if the schema and format options are detected automatically.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.autodetect = true
end

csv_table.autodetect #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 435

def autodetect
  @gapi.autodetect
end

#autodetect=(new_autodetect) ⇒ Object

Set whether to detect schema and format options automatically. Any option specified explicitly will be honored.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.autodetect = true
end

csv_table.autodetect #=> true
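
Combining autodetect with an explicit option (a sketch; it assumes the block yields a CsvSource, whose skip_leading_rows= setting is honored even with autodetect enabled):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.autodetect = true
  # Explicitly specified options are honored alongside autodetection.
  csv.skip_leading_rows = 1
end

csv_table.autodetect #=> true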

Parameters:

  • new_autodetect (Boolean)

    New autodetect value



# File 'lib/google/cloud/bigquery/external.rb', line 457

def autodetect= new_autodetect
  frozen_check!
  @gapi.autodetect = new_autodetect
end

#avro? ⇒ Boolean

Whether the data format is "AVRO".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

avro_url = "gs://bucket/path/to/data.avro"
avro_table = bigquery.external avro_url

avro_table.format #=> "AVRO"
avro_table.avro? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 300

def avro?
  @gapi.source_format == "AVRO"
end

#backup? ⇒ Boolean

Whether the data format is "DATASTORE_BACKUP".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

backup_url = "gs://bucket/path/to/data.backup_info"
backup_table = bigquery.external backup_url

backup_table.format #=> "DATASTORE_BACKUP"
backup_table.backup? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 320

def backup?
  @gapi.source_format == "DATASTORE_BACKUP"
end

#bigtable? ⇒ Boolean

Whether the data format is "BIGTABLE".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

bigtable_url = "https://googleapis.com/bigtable/projects/..."
bigtable_table = bigquery.external bigtable_url

bigtable_table.format #=> "BIGTABLE"
bigtable_table.bigtable? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 340

def bigtable?
  @gapi.source_format == "BIGTABLE"
end

#compression ⇒ String

The compression type of the data source. Possible values include "GZIP" and nil. The default value is nil. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.compression = "GZIP"
end

csv_table.compression #=> "GZIP"
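
Reading gzip-compressed CSV (a sketch; the .csv.gz path is hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gzip_url = "gs://bucket/path/to/data.csv.gz"
gzip_table = bigquery.external gzip_url, format: :csv do |csv|
  # The files are gzip-compressed, so declare the compression type.
  csv.compression = "GZIP"
end

gzip_table.compression #=> "GZIP"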

Returns:

  • (String)


# File 'lib/google/cloud/bigquery/external.rb', line 481

def compression
  @gapi.compression
end

#compression=(new_compression) ⇒ Object

Set the compression type of the data source. Possible values include "GZIP" and nil. The default value is nil. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.compression = "GZIP"
end

csv_table.compression #=> "GZIP"

Parameters:

  • new_compression (String)

    New compression value



# File 'lib/google/cloud/bigquery/external.rb', line 505

def compression= new_compression
  frozen_check!
  @gapi.compression = new_compression
end

#csv? ⇒ Boolean

Whether the data format is "CSV".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url

csv_table.format #=> "CSV"
csv_table.csv? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 240

def csv?
  @gapi.source_format == "CSV"
end

#format ⇒ String

The data format. For CSV files, specify "CSV". For Google sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url

csv_table.format #=> "CSV"
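
When the format cannot be inferred from the URL, pass it explicitly (a sketch; the extension-less path is hypothetical, and it assumes format: :json maps to "NEWLINE_DELIMITED_JSON"):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

json_url = "gs://bucket/path/to/json_exports/*"
json_table = bigquery.external json_url, format: :json

json_table.format #=> "NEWLINE_DELIMITED_JSON"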

Returns:

  • (String)


# File 'lib/google/cloud/bigquery/external.rb', line 220

def format
  @gapi.source_format
end

#hive_partitioning? ⇒ Boolean

Checks if hive partitioning options are set.

Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: avro, csv, json, orc and parquet. If your data is stored in ORC or Parquet on Cloud Storage, see Querying columnar formats on Cloud Storage.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Returns:

  • (Boolean)

    true when hive partitioning options are set, or false otherwise.



# File 'lib/google/cloud/bigquery/external.rb', line 652

def hive_partitioning?
  !@gapi.hive_partitioning_options.nil?
end

#hive_partitioning_mode ⇒ String?

The mode of hive partitioning to use when reading data. The following modes are supported:

  1. AUTO: automatically infer partition key name(s) and type(s).
  2. STRINGS: automatically infer partition key name(s). All types are interpreted as strings.
  3. CUSTOM: partition key schema is encoded in the source URI prefix.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Returns:

  • (String, nil)

    The mode of hive partitioning, or nil if not set.



# File 'lib/google/cloud/bigquery/external.rb', line 683

def hive_partitioning_mode
  @gapi.hive_partitioning_options.mode if hive_partitioning?
end

#hive_partitioning_mode=(mode) ⇒ Object

Sets the mode of hive partitioning to use when reading data. The following modes are supported:

  1. auto: automatically infer partition key name(s) and type(s).
  2. strings: automatically infer partition key name(s). All types are interpreted as strings.
  3. custom: partition key schema is encoded in the source URI prefix.

Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: avro, csv, json, orc and parquet. If your data is stored in ORC or Parquet on Cloud Storage, see Querying columnar formats on Cloud Storage.

See #format, #hive_partitioning_require_partition_filter= and #hive_partitioning_source_uri_prefix=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix
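
Custom mode (a hedged sketch: the bucket layout is hypothetical, and it assumes BigQuery's documented {key:TYPE} prefix encoding for CUSTOM mode):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://bucket/path_to_table/*"
# In CUSTOM mode the partition key schema is encoded in the prefix itself.
source_uri_prefix = "gs://bucket/path_to_table/{dt:DATE}/{country:STRING}/{id:INTEGER}"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :custom
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning_mode #=> "CUSTOM"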

Parameters:

  • mode (String, Symbol)

    The mode of hive partitioning to use when reading data.



# File 'lib/google/cloud/bigquery/external.rb', line 721

def hive_partitioning_mode= mode
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.mode = mode.to_s.upcase
end

#hive_partitioning_require_partition_filter=(require_partition_filter) ⇒ Object

Sets whether queries over the table using this external data source must specify a partition filter that can be used for partition elimination.

See #format, #hive_partitioning_mode= and #hive_partitioning_source_uri_prefix=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Parameters:

  • require_partition_filter (Boolean)

    true if a partition filter must be specified, false otherwise.



# File 'lib/google/cloud/bigquery/external.rb', line 782

def hive_partitioning_require_partition_filter= require_partition_filter
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.require_partition_filter = require_partition_filter
end

#hive_partitioning_require_partition_filter? ⇒ Boolean

Whether queries over the table using this external data source must specify a partition filter that can be used for partition elimination. Note that this field should only be true when creating a permanent external table or querying a temporary external table.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix
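
When a partition filter is required, queries over the table must constrain the partition key (a sketch; the key name dt is an assumption about what auto detection finds in the sample layout):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

# The WHERE clause supplies the required partition filter.
data = bigquery.query "SELECT * FROM my_ext_table WHERE dt = '2020-01-01'",
                      external: { my_ext_table: external_data }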

Returns:

  • (Boolean)

    true when queries over this table require a partition filter, or false otherwise.



# File 'lib/google/cloud/bigquery/external.rb', line 751

def hive_partitioning_require_partition_filter?
  return false unless hive_partitioning?
  !@gapi.hive_partitioning_options.require_partition_filter.nil?
end

#hive_partitioning_source_uri_prefix ⇒ String?

The common prefix for all source URIs when hive partition detection is requested. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:

gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro

When hive partitioning is requested with either AUTO or STRINGS mode, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Returns:

  • (String, nil)

    The common prefix for all source URIs, or nil if not set.



# File 'lib/google/cloud/bigquery/external.rb', line 820

def hive_partitioning_source_uri_prefix
  @gapi.hive_partitioning_options.source_uri_prefix if hive_partitioning?
end

#hive_partitioning_source_uri_prefix=(source_uri_prefix) ⇒ Object

Sets the common prefix for all source URIs when hive partition detection is requested. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:

gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro

When hive partitioning is requested with either AUTO or STRINGS mode, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).

See #format, #hive_partitioning_mode= and #hive_partitioning_require_partition_filter=.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_require_partition_filter = true
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end

external_data.hive_partitioning? #=> true
external_data.hive_partitioning_mode #=> "AUTO"
external_data.hive_partitioning_require_partition_filter? #=> true
external_data.hive_partitioning_source_uri_prefix #=> source_uri_prefix

Parameters:

  • source_uri_prefix (String)

    The common prefix for all source URIs.



# File 'lib/google/cloud/bigquery/external.rb', line 859

def hive_partitioning_source_uri_prefix= source_uri_prefix
  @gapi.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.hive_partitioning_options.source_uri_prefix = source_uri_prefix
end

#ignore_unknown ⇒ Boolean

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

In CSV data, BigQuery treats trailing columns as extra values; in JSON data, it treats named values that don't match any column names as extra values. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.ignore_unknown = true
end

csv_table.ignore_unknown #=> true
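
The same option applies to newline-delimited JSON, where extra values are named values that match no column (a sketch; the path is hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

json_url = "gs://bucket/path/to/data.json"
json_table = bigquery.external json_url do |json|
  # Named values that match no schema column are ignored rather than
  # turning the record into a bad record.
  json.ignore_unknown = true
end

json_table.ignore_unknown #=> true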

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 536

def ignore_unknown
  @gapi.ignore_unknown_values
end

#ignore_unknown=(new_ignore_unknown) ⇒ Object

Set whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

In CSV data, BigQuery treats trailing columns as extra values; in JSON data, it treats named values that don't match any column names as extra values. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, and Avro formats. Optional.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.ignore_unknown = true
end

csv_table.ignore_unknown #=> true

Parameters:

  • new_ignore_unknown (Boolean)

    New ignore_unknown value



# File 'lib/google/cloud/bigquery/external.rb', line 566

def ignore_unknown= new_ignore_unknown
  frozen_check!
  @gapi.ignore_unknown_values = new_ignore_unknown
end

#json? ⇒ Boolean

Whether the data format is "NEWLINE_DELIMITED_JSON".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

json_url = "gs://bucket/path/to/data.json"
json_table = bigquery.external json_url

json_table.format #=> "NEWLINE_DELIMITED_JSON"
json_table.json? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 260

def json?
  @gapi.source_format == "NEWLINE_DELIMITED_JSON"
end

#max_bad_records ⇒ Integer

The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.max_bad_records = 10
end

csv_table.max_bad_records #=> 10

Returns:

  • (Integer)


# File 'lib/google/cloud/bigquery/external.rb', line 593

def max_bad_records
  @gapi.max_bad_records
end

#max_bad_records=(new_max_bad_records) ⇒ Object

Set the maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.max_bad_records = 10
end

csv_table.max_bad_records #=> 10
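
max_bad_records is often combined with ignore_unknown when reading messy CSV data (a sketch with hypothetical paths):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  # Tolerate up to 10 unparseable rows and ignore unexpected extra columns.
  csv.max_bad_records = 10
  csv.ignore_unknown = true
end

csv_table.max_bad_records #=> 10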

Parameters:

  • new_max_bad_records (Integer)

    New max_bad_records value



# File 'lib/google/cloud/bigquery/external.rb', line 619

def max_bad_records= new_max_bad_records
  frozen_check!
  @gapi.max_bad_records = new_max_bad_records
end

#orc? ⇒ Boolean

Whether the data format is "ORC".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :orc do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end
external_data.format #=> "ORC"
external_data.orc? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 363

def orc?
  @gapi.source_format == "ORC"
end

#parquet? ⇒ Boolean

Whether the data format is "PARQUET".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gcs_uri = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/*"
source_uri_prefix = "gs://cloud-samples-data/bigquery/hive-partitioning-samples/autolayout/"
external_data = bigquery.external gcs_uri, format: :parquet do |ext|
  ext.hive_partitioning_mode = :auto
  ext.hive_partitioning_source_uri_prefix = source_uri_prefix
end
external_data.format #=> "PARQUET"
external_data.parquet? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 386

def parquet?
  @gapi.source_format == "PARQUET"
end

#sheets? ⇒ Boolean

Whether the data format is "GOOGLE_SHEETS".

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sheets_url = "https://docs.google.com/spreadsheets/d/1234567980"
sheets_table = bigquery.external sheets_url

sheets_table.format #=> "GOOGLE_SHEETS"
sheets_table.sheets? #=> true

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigquery/external.rb', line 280

def sheets?
  @gapi.source_format == "GOOGLE_SHEETS"
end

#urls ⇒ Array<String>

The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified, and it must end with '.backup_info'. Also, the '*' wildcard character is not allowed.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url

csv_table.urls #=> ["gs://bucket/path/to/data.csv"]
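
Multiple URIs and wildcards (a hedged sketch; it assumes Project#external also accepts an array of URLs, and the paths are hypothetical):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# One '*' wildcard is allowed per Cloud Storage URI, after the bucket name.
csv_urls = ["gs://bucket/path/to/2019/*.csv",
            "gs://bucket/path/to/2020/*.csv"]
csv_table = bigquery.external csv_urls, format: :csv

csv_table.urls #=> ["gs://bucket/path/to/2019/*.csv", "gs://bucket/path/to/2020/*.csv"]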

Returns:

  • (Array<String>)


# File 'lib/google/cloud/bigquery/external.rb', line 413

def urls
  @gapi.source_uris
end