Class: Google::Apis::BigqueryV2::ExternalDataConfiguration
- Inherits: Object
- Includes:
- Core::Hashable, Core::JsonObjectSupport
- Defined in:
- lib/google/apis/bigquery_v2/classes.rb,
lib/google/apis/bigquery_v2/representations.rb
Instance Attribute Summary
-
#autodetect ⇒ Boolean
(also: #autodetect?)
Try to detect schema and format options automatically.
-
#avro_options ⇒ Google::Apis::BigqueryV2::AvroOptions
Additional properties to set if sourceFormat is set to Avro.
-
#bigtable_options ⇒ Google::Apis::BigqueryV2::BigtableOptions
[Optional] Additional options if sourceFormat is set to BIGTABLE.
-
#compression ⇒ String
[Optional] The compression type of the data source.
-
#connection_id ⇒ String
[Optional, Trusted Tester] Connection for external data source.
-
#csv_options ⇒ Google::Apis::BigqueryV2::CsvOptions
Additional properties to set if sourceFormat is set to CSV.
-
#decimal_target_types ⇒ Array<String>
[Optional] Defines the list of possible SQL data types to which the source decimal values are converted.
-
#file_set_spec_type ⇒ String
[Optional] Specifies how source URIs are interpreted for constructing the file set to load.
-
#google_sheets_options ⇒ Google::Apis::BigqueryV2::GoogleSheetsOptions
[Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
-
#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions
[Optional] Options to configure hive partitioning support.
-
#ignore_unknown_values ⇒ Boolean
(also: #ignore_unknown_values?)
[Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema.
-
#json_options ⇒ Google::Apis::BigqueryV2::JsonOptions
Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
-
#max_bad_records ⇒ Fixnum
[Optional] The maximum number of bad records that BigQuery can ignore when reading data.
-
#metadata_cache_mode ⇒ String
[Optional] Metadata Cache Mode for the table.
-
#object_metadata ⇒ String
ObjectMetadata is used to create Object Tables.
-
#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions
Additional properties to set if sourceFormat is set to Parquet.
-
#reference_file_schema_uri ⇒ String
[Optional] Provide a referencing file with the expected table schema.
-
#schema ⇒ Google::Apis::BigqueryV2::TableSchema
[Optional] The schema for the data.
-
#source_format ⇒ String
[Required] The data format.
-
#source_uris ⇒ Array<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud.
Instance Method Summary
-
#initialize(**args) ⇒ ExternalDataConfiguration
constructor
A new instance of ExternalDataConfiguration.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ ExternalDataConfiguration
Returns a new instance of ExternalDataConfiguration.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2829
def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#autodetect ⇒ Boolean Also known as: autodetect?
Try to detect schema and format options automatically. Any option specified
explicitly will be honored.
Corresponds to the JSON property autodetect
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2678
def autodetect
  @autodetect
end
#avro_options ⇒ Google::Apis::BigqueryV2::AvroOptions
Additional properties to set if sourceFormat is set to Avro.
Corresponds to the JSON property avroOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2684
def avro_options
  @avro_options
end
#bigtable_options ⇒ Google::Apis::BigqueryV2::BigtableOptions
[Optional] Additional options if sourceFormat is set to BIGTABLE.
Corresponds to the JSON property bigtableOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2689
def bigtable_options
  @bigtable_options
end
#compression ⇒ String
[Optional] The compression type of the data source. Possible values include
GZIP and NONE. The default value is NONE. This setting is ignored for Google
Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
Corresponds to the JSON property compression
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2696
def compression
  @compression
end
#connection_id ⇒ String
[Optional, Trusted Tester] Connection for external data source.
Corresponds to the JSON property connectionId
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2701
def connection_id
  @connection_id
end
#csv_options ⇒ Google::Apis::BigqueryV2::CsvOptions
Additional properties to set if sourceFormat is set to CSV.
Corresponds to the JSON property csvOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2706
def csv_options
  @csv_options
end
#decimal_target_types ⇒ Array<String>
[Optional] Defines the list of possible SQL data types to which the source
decimal values are converted. This list and the precision and the scale
parameters of the decimal field determine the target type. In the order of
NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified
list and if it supports the precision and the scale. STRING supports all
precision and scale values. If none of the listed types supports the precision
and the scale, the type supporting the widest range in the specified list is
picked, and if a value exceeds the supported range when reading the data, an
error will be thrown. Example: Suppose the value of this field is
["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC;
(39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits);
(38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits);
(76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds
supported range). This field cannot contain duplicate types. The order of the
types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the
same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over
BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the
other file formats.
Corresponds to the JSON property decimalTargetTypes
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2727
def decimal_target_types
  @decimal_target_types
end
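The precedence rule documented above can be sketched in plain Ruby. This is a standalone illustration, not part of the gem; the capacity table is an assumption based on BigQuery's published NUMERIC (precision 38, scale 9) and BIGNUMERIC (precision 76, scale 38) limits.

```ruby
# Sketch of the documented decimal target type selection: types are tried in
# NUMERIC, BIGNUMERIC, STRING order; a type is picked if it is allowed and
# supports the precision/scale. STRING supports everything. If nothing fits,
# the widest-range allowed type is picked (values may overflow at read time).
CAPACITY = {
  "NUMERIC"    => { precision: 38, scale: 9 },
  "BIGNUMERIC" => { precision: 76, scale: 38 }
}.freeze

def pick_decimal_target_type(allowed, precision, scale)
  ["NUMERIC", "BIGNUMERIC", "STRING"].each do |type|
    next unless allowed.include?(type)
    return type if type == "STRING" # STRING supports all precision/scale values
    cap = CAPACITY[type]
    fits = scale <= cap[:scale] &&
           (precision - scale) <= (cap[:precision] - cap[:scale])
    return type if fits
  end
  # None fits: fall back to the widest-range type in the allowed list.
  ["STRING", "BIGNUMERIC", "NUMERIC"].find { |t| allowed.include?(t) }
end

pick_decimal_target_type(["NUMERIC", "BIGNUMERIC"], 38, 9)  # => "NUMERIC"
pick_decimal_target_type(["NUMERIC", "BIGNUMERIC"], 39, 9)  # => "BIGNUMERIC"
pick_decimal_target_type(["NUMERIC", "BIGNUMERIC"], 38, 10) # => "BIGNUMERIC"
```

This reproduces the worked examples in the description, including the (77,38) case, which falls back to BIGNUMERIC because no allowed type can hold it.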
#file_set_spec_type ⇒ String
[Optional] Specifies how source URIs are interpreted for constructing the file
set to load. By default source URIs are expanded against the underlying
storage. Other options include specifying manifest files. Only applicable to
object storage systems.
Corresponds to the JSON property fileSetSpecType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2735
def file_set_spec_type
  @file_set_spec_type
end
#google_sheets_options ⇒ Google::Apis::BigqueryV2::GoogleSheetsOptions
[Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
Corresponds to the JSON property googleSheetsOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2740
def google_sheets_options
  @google_sheets_options
end
#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions
[Optional] Options to configure hive partitioning support.
Corresponds to the JSON property hivePartitioningOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2745
def hive_partitioning_options
  @hive_partitioning_options
end
#ignore_unknown_values ⇒ Boolean Also known as: ignore_unknown_values?
[Optional] Indicates if BigQuery should allow extra values that are not
represented in the table schema. If true, the extra values are ignored. If
false, records with extra columns are treated as bad records, and if there are
too many bad records, an invalid error is returned in the job result. The
default value is false. The sourceFormat property determines what BigQuery
treats as an extra value: CSV: Trailing columns JSON: Named values that don't
match any column names Google Cloud Bigtable: This setting is ignored. Google
Cloud Datastore backups: This setting is ignored. Avro: This setting is
ignored.
Corresponds to the JSON property ignoreUnknownValues
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2758
def ignore_unknown_values
  @ignore_unknown_values
end
#json_options ⇒ Google::Apis::BigqueryV2::JsonOptions
Additional properties to set if sourceFormat is set to
NEWLINE_DELIMITED_JSON.
Corresponds to the JSON property jsonOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2765
def json_options
  @json_options
end
#max_bad_records ⇒ Fixnum
[Optional] The maximum number of bad records that BigQuery can ignore when
reading data. If the number of bad records exceeds this value, an invalid
error is returned in the job result. This is only valid for CSV, JSON, and
Google Sheets. The default value is 0, which requires that all records are
valid. This setting is ignored for Google Cloud Bigtable, Google Cloud
Datastore backups and Avro formats.
Corresponds to the JSON property maxBadRecords
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2775
def max_bad_records
  @max_bad_records
end
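The interaction between ignore_unknown_values and max_bad_records described above can be sketched with a hypothetical row reader. This is an illustration of the documented semantics for CSV-like rows, not gem code.

```ruby
# Hypothetical sketch: a row with more columns than the schema is either
# trimmed (ignore_unknown_values: true) or counted as a bad record and
# dropped; reading fails once bad records exceed max_bad_records.
def read_rows(rows, schema_width, ignore_unknown_values: false, max_bad_records: 0)
  bad = 0
  kept = rows.filter_map do |row|
    if row.size <= schema_width
      row
    elsif ignore_unknown_values
      row.take(schema_width) # extra trailing values are ignored
    else
      bad += 1 # record with extra columns is treated as a bad record
      nil
    end
  end
  raise "invalid" if bad > max_bad_records
  kept
end

read_rows([[1, 2], [1, 2, 3]], 2, ignore_unknown_values: true) # both rows kept
read_rows([[1, 2], [1, 2, 3]], 2, max_bad_records: 1)          # one bad row tolerated
```

With the defaults (ignore_unknown_values false, max_bad_records 0), any row with extra columns fails the read, matching the documented behavior that all records must be valid.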
#metadata_cache_mode ⇒ String
[Optional] Metadata Cache Mode for the table. Set this to enable caching of
metadata from external data source.
Corresponds to the JSON property metadataCacheMode
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2781
def metadata_cache_mode
  @metadata_cache_mode
end
#object_metadata ⇒ String
ObjectMetadata is used to create Object Tables. Object Tables contain a
listing of objects (with their metadata) found at the source_uris. If
ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is
the only supported Object Metadata type.
Corresponds to the JSON property objectMetadata
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2789
def object_metadata
  @object_metadata
end
#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions
Additional properties to set if sourceFormat is set to Parquet.
Corresponds to the JSON property parquetOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2794
def parquet_options
  @parquet_options
end
#reference_file_schema_uri ⇒ String
[Optional] Provide a referencing file with the expected table schema. Enabled
for the format: AVRO, PARQUET, ORC.
Corresponds to the JSON property referenceFileSchemaUri
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2800
def reference_file_schema_uri
  @reference_file_schema_uri
end
#schema ⇒ Google::Apis::BigqueryV2::TableSchema
[Optional] The schema for the data. Schema is required for CSV and JSON
formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore
backups, and Avro formats.
Corresponds to the JSON property schema
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2807
def schema
  @schema
end
#source_format ⇒ String
[Required] The data format. For CSV files, specify "CSV". For Google sheets,
specify "GOOGLE_SHEETS". For newline-delimited JSON, specify
"NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud
Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud
Bigtable, specify "BIGTABLE".
Corresponds to the JSON property sourceFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2816
def source_format
  @source_format
end
#source_uris ⇒ Array<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud.
For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character
and it must come after the 'bucket' name. Size limits related to load jobs
apply to external data sources. For Google Cloud Bigtable URIs: Exactly one
URI can be specified and it has to be a fully specified and valid HTTPS URL
for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly
one URI can be specified. Also, the '*' wildcard character is not allowed.
Corresponds to the JSON property sourceUris
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2827
def source_uris
  @source_uris
end
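Putting the required properties together, a typical CSV configuration can be expressed as the camelCase JSON this class maps to. The bucket and object names below are hypothetical, and the property names follow the "Corresponds to the JSON property" notes above.

```ruby
require "json"

# Hypothetical CSV external data configuration, using the documented JSON
# property names (sourceFormat and sourceUris are the required fields).
config = {
  "sourceFormat" => "CSV",
  "sourceUris"   => ["gs://example-bucket/data/*.csv"], # one '*' wildcard, after the bucket name
  "autodetect"   => true,   # detect schema and format options automatically
  "compression"  => "GZIP", # GZIP or NONE (default NONE)
  "maxBadRecords" => 10,
  "ignoreUnknownValues" => true,
  "csvOptions" => { "skipLeadingRows" => 1 }
}

puts JSON.pretty_generate(config)
```

The Ruby attributes on this class use snake_case (source_format, source_uris, csv_options); the camelCase keys shown here are what appears on the wire.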
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/bigquery_v2/classes.rb', line 2834
def update!(**args)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @avro_options = args[:avro_options] if args.key?(:avro_options)
  @bigtable_options = args[:bigtable_options] if args.key?(:bigtable_options)
  @compression = args[:compression] if args.key?(:compression)
  @connection_id = args[:connection_id] if args.key?(:connection_id)
  @csv_options = args[:csv_options] if args.key?(:csv_options)
  @decimal_target_types = args[:decimal_target_types] if args.key?(:decimal_target_types)
  @file_set_spec_type = args[:file_set_spec_type] if args.key?(:file_set_spec_type)
  @google_sheets_options = args[:google_sheets_options] if args.key?(:google_sheets_options)
  @hive_partitioning_options = args[:hive_partitioning_options] if args.key?(:hive_partitioning_options)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @json_options = args[:json_options] if args.key?(:json_options)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @metadata_cache_mode = args[:metadata_cache_mode] if args.key?(:metadata_cache_mode)
  @object_metadata = args[:object_metadata] if args.key?(:object_metadata)
  @parquet_options = args[:parquet_options] if args.key?(:parquet_options)
  @reference_file_schema_uri = args[:reference_file_schema_uri] if args.key?(:reference_file_schema_uri)
  @schema = args[:schema] if args.key?(:schema)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
end
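The constructor/update! pattern shown above can be reproduced with a minimal stand-in class. This sketch omits the gem's Core::Hashable and JSON serialization machinery and only mirrors the assignment pattern.

```ruby
# Minimal stand-in mirroring how initialize forwards to update!: only keys
# actually passed are assigned, so update! also works for partial updates.
class MiniConfig
  attr_accessor :source_format, :source_uris

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @source_format = args[:source_format] if args.key?(:source_format)
    @source_uris   = args[:source_uris]   if args.key?(:source_uris)
  end
end

cfg = MiniConfig.new(source_format: "CSV", source_uris: ["gs://bucket/file.csv"])
cfg.update!(source_format: "AVRO") # partial update leaves source_uris untouched
```

Because assignments are guarded with args.key?, passing an explicit nil clears an attribute, while simply omitting the key leaves it unchanged.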