Class: Google::Apis::BigqueryV2::JobConfigurationLoad
- Inherits: Object
  - Object
  - Google::Apis::BigqueryV2::JobConfigurationLoad
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/bigquery_v2/classes.rb
  - lib/google/apis/bigquery_v2/representations.rb
Overview
JobConfigurationLoad contains the configuration properties for loading data into a destination table.
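For orientation, here is a minimal sketch of starting a load job with this class through the generated BigqueryService client. It assumes application default credentials are available; the project, dataset, table, and bucket names are placeholders.

require "google/apis/bigquery_v2"
require "googleauth"

Bq = Google::Apis::BigqueryV2

service = Bq::BigqueryService.new
service.authorization = Google::Auth.get_application_default(
  ["https://www.googleapis.com/auth/bigquery"]
)

load_config = Bq::JobConfigurationLoad.new(
  destination_table: Bq::TableReference.new(
    project_id: "my-project",   # placeholder
    dataset_id: "my_dataset",   # placeholder
    table_id:   "my_table"      # placeholder
  ),
  source_uris:       ["gs://my-bucket/data/*.csv"],  # placeholder bucket
  source_format:     "CSV",
  skip_leading_rows: 1,
  autodetect:        true,
  write_disposition: "WRITE_TRUNCATE"
)

job = Bq::Job.new(configuration: Bq::JobConfiguration.new(load: load_config))
service.insert_job("my-project", job)  # starts the asynchronous load job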
Instance Attribute Summary
-
#allow_jagged_rows ⇒ Boolean
(also: #allow_jagged_rows?)
Optional.
-
#allow_quoted_newlines ⇒ Boolean
(also: #allow_quoted_newlines?)
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
-
#autodetect ⇒ Boolean
(also: #autodetect?)
Optional.
-
#clustering ⇒ Google::Apis::BigqueryV2::Clustering
Configures table clustering.
-
#column_name_character_map ⇒ String
Optional.
-
#connection_properties ⇒ Array<Google::Apis::BigqueryV2::ConnectionProperty>
Optional.
-
#copy_files_only ⇒ Boolean
(also: #copy_files_only?)
Optional.
-
#create_disposition ⇒ String
Optional.
-
#create_session ⇒ Boolean
(also: #create_session?)
Optional.
-
#decimal_target_types ⇒ Array<String>
Defines the list of possible SQL data types to which the source decimal values are converted.
-
#destination_encryption_configuration ⇒ Google::Apis::BigqueryV2::EncryptionConfiguration
Custom encryption configuration (e.g., Cloud KMS keys).
-
#destination_table ⇒ Google::Apis::BigqueryV2::TableReference
[Required] The destination table to load the data into.
-
#destination_table_properties ⇒ Google::Apis::BigqueryV2::DestinationTableProperties
Properties for the destination table.
-
#encoding ⇒ String
Optional.
-
#field_delimiter ⇒ String
Optional.
-
#file_set_spec_type ⇒ String
Optional.
-
#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions
Options for configuring hive partitioning detection.
-
#ignore_unknown_values ⇒ Boolean
(also: #ignore_unknown_values?)
Optional.
-
#json_extension ⇒ String
Optional.
-
#max_bad_records ⇒ Fixnum
Optional.
-
#null_marker ⇒ String
Optional.
-
#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions
Parquet options for load jobs and external tables.
-
#preserve_ascii_control_characters ⇒ Boolean
(also: #preserve_ascii_control_characters?)
Optional.
-
#projection_fields ⇒ Array<String>
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
-
#quote ⇒ String
Optional.
-
#range_partitioning ⇒ Google::Apis::BigqueryV2::RangePartitioning
Range partitioning specification for the destination table.
-
#reference_file_schema_uri ⇒ String
Optional.
-
#schema ⇒ Google::Apis::BigqueryV2::TableSchema
Schema of a table.
-
#schema_inline ⇒ String
[Deprecated] The inline schema.
-
#schema_inline_format ⇒ String
[Deprecated] The format of the schemaInline property.
-
#schema_update_options ⇒ Array<String>
Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.
-
#skip_leading_rows ⇒ Fixnum
Optional.
-
#source_format ⇒ String
Optional.
-
#source_uris ⇒ Array<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud.
-
#time_partitioning ⇒ Google::Apis::BigqueryV2::TimePartitioning
Time-based partitioning specification for the destination table.
-
#use_avro_logical_types ⇒ Boolean
(also: #use_avro_logical_types?)
Optional.
-
#write_disposition ⇒ String
Optional.
Instance Method Summary
-
#initialize(**args) ⇒ JobConfigurationLoad
constructor
A new instance of JobConfigurationLoad.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ JobConfigurationLoad
Returns a new instance of JobConfigurationLoad.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4809

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#allow_jagged_rows ⇒ Boolean Also known as: allow_jagged_rows?
Optional. Accept rows that are missing trailing optional columns. The missing
values are treated as nulls. If false, records with missing trailing columns
are treated as bad records, and if there are too many bad records, an invalid
error is returned in the job result. The default value is false. Only
applicable to CSV, ignored for other formats.
Corresponds to the JSON property allowJaggedRows
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4482

def allow_jagged_rows
  @allow_jagged_rows
end
#allow_quoted_newlines ⇒ Boolean Also known as: allow_quoted_newlines?
Indicates if BigQuery should allow quoted data sections that contain newline
characters in a CSV file. The default value is false.
Corresponds to the JSON property allowQuotedNewlines
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4489

def allow_quoted_newlines
  @allow_quoted_newlines
end
#autodetect ⇒ Boolean Also known as: autodetect?
Optional. Indicates if we should automatically infer the options and schema
for CSV and JSON sources.
Corresponds to the JSON property autodetect
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4496

def autodetect
  @autodetect
end
#clustering ⇒ Google::Apis::BigqueryV2::Clustering
Configures table clustering.
Corresponds to the JSON property clustering
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4502

def clustering
  @clustering
end
#column_name_character_map ⇒ String
Optional. Character map supported for column names in CSV/Parquet loads.
Defaults to STRICT and can be overridden by Project Config Service. Using this
option with unsupported load formats will result in an error.
Corresponds to the JSON property columnNameCharacterMap
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4509

def column_name_character_map
  @column_name_character_map
end
#connection_properties ⇒ Array<Google::Apis::BigqueryV2::ConnectionProperty>
Optional. Connection properties which can modify the load job behavior.
Currently, only the 'session_id' connection property is supported, and is used
to resolve _SESSION appearing as the dataset id.
Corresponds to the JSON property connectionProperties
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4516

def connection_properties
  @connection_properties
end
#copy_files_only ⇒ Boolean Also known as: copy_files_only?
Optional. [Experimental] Configures the load job to copy files directly to the
destination BigLake managed table, bypassing file content reading and
rewriting. Copying files only is supported when all the following are true: *
source_uris are located in the same Cloud Storage location as the destination
table's storage_uri location. * source_format is PARQUET. * destination_table
is an existing BigLake managed table. The table's schema does not have
flexible column names. The table's columns do not have type parameters other
than precision and scale. * No options other than the above are specified.
Corresponds to the JSON property copyFilesOnly
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4529

def copy_files_only
  @copy_files_only
end
#create_disposition ⇒ String
Optional. Specifies whether the job is allowed to create new tables. The
following values are supported: * CREATE_IF_NEEDED: If the table does not
exist, BigQuery creates the table. * CREATE_NEVER: The table must already
exist. If it does not, a 'notFound' error is returned in the job result. The
default value is CREATE_IF_NEEDED. Creation, truncation and append actions
occur as one atomic update upon job completion.
Corresponds to the JSON property createDisposition
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4540

def create_disposition
  @create_disposition
end
#create_session ⇒ Boolean Also known as: create_session?
Optional. If this property is true, the job creates a new session using a
randomly generated session_id. To continue using a created session with
subsequent queries, pass the existing session identifier as a
ConnectionProperty value. The session identifier is returned as part of the
SessionInfo message within the query statistics. The new session's location
will be set to Job.JobReference.location if it is present, otherwise it's set
to the default location based on existing routing logic.
Corresponds to the JSON property createSession
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4551

def create_session
  @create_session
end
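As a sketch of the session flow described above (reusing the Bq alias from the overview example; the statistics field path is an assumption based on the SessionInfo message):

# First job: ask BigQuery to create a session alongside the load.
first_load = Bq::JobConfigurationLoad.new(create_session: true)

# `finished_job` stands for the completed Job returned by service.get_job;
# the field path below is assumed from the SessionInfo message.
session_id = finished_job.statistics.session_info.session_id

# Follow-up job: attach to the session so _SESSION resolves as a dataset id.
followup_load = Bq::JobConfigurationLoad.new(
  connection_properties: [
    Bq::ConnectionProperty.new(key: "session_id", value: session_id)
  ]
)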
#decimal_target_types ⇒ Array<String>
Defines the list of possible SQL data types to which the source decimal values
are converted. This list and the precision and the scale parameters of the
decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC,
and STRING, a type is picked if it is in the specified list and if it supports
the precision and the scale. STRING supports all precision and scale values.
If none of the listed types supports the precision and the scale, the type
supporting the widest range in the specified list is picked, and if a value
exceeds the supported range when reading the data, an error will be thrown.
Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (
precision,scale) is: * (38,9) -> NUMERIC; * (39,9) -> BIGNUMERIC (NUMERIC
cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot hold
10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error
if value exeeds supported range). This field cannot contain duplicate types.
The order of the types in this field is ignored. For example, ["BIGNUMERIC", "
NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes
precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["
NUMERIC"] for the other file formats.
Corresponds to the JSON property decimalTargetTypes
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4573

def decimal_target_types
  @decimal_target_types
end
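A one-line illustration of the worked example above, reusing the Bq alias from the overview sketch:

config = Bq::JobConfigurationLoad.new(
  # Prefer NUMERIC; fall back to BIGNUMERIC when precision/scale do not fit.
  decimal_target_types: ["NUMERIC", "BIGNUMERIC"]
)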
#destination_encryption_configuration ⇒ Google::Apis::BigqueryV2::EncryptionConfiguration
Custom encryption configuration (e.g., Cloud KMS keys)
Corresponds to the JSON property destinationEncryptionConfiguration
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4578

def destination_encryption_configuration
  @destination_encryption_configuration
end
#destination_table ⇒ Google::Apis::BigqueryV2::TableReference
[Required] The destination table to load the data into.
Corresponds to the JSON property destinationTable
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4583

def destination_table
  @destination_table
end
#destination_table_properties ⇒ Google::Apis::BigqueryV2::DestinationTableProperties
Properties for the destination table.
Corresponds to the JSON property destinationTableProperties
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4588

def destination_table_properties
  @destination_table_properties
end
#encoding ⇒ String
Optional. The character encoding of the data. The supported values are UTF-8,
ISO-8859-1, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is
UTF-8. BigQuery decodes the data after the raw, binary data has been split
using the values of the quote and fieldDelimiter properties. If you don't
specify an encoding, or if you specify a UTF-8 encoding when the CSV file is
not UTF-8 encoded, BigQuery attempts to convert the data to UTF-8. Generally,
your data loads successfully, but it may not match byte-for-byte what you
expect. To avoid this, specify the correct encoding by using the --encoding
flag. If BigQuery can't convert a character other than the ASCII 0 character,
BigQuery converts the character to the standard Unicode replacement character:
�.
Corresponds to the JSON property encoding
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4603

def encoding
  @encoding
end
#field_delimiter ⇒ String
Optional. The separator character for fields in a CSV file. The separator is
interpreted as a single byte. For files encoded in ISO-8859-1, any single
character can be used as a separator. For files encoded in UTF-8, characters
represented in decimal range 1-127 (U+0001-U+007F) can be used without any
modification. UTF-8 characters encoded with multiple bytes (i.e. U+0080 and
above) will have only the first byte used for separating fields. The remaining
bytes will be treated as a part of the field. BigQuery also supports the
escape sequence "\t" (U+0009) to specify a tab separator. The default value is
comma (",", U+002C).
Corresponds to the JSON property fieldDelimiter
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4616

def field_delimiter
  @field_delimiter
end
#file_set_spec_type ⇒ String
Optional. Specifies how source URIs are interpreted for constructing the file
set to load. By default, source URIs are expanded against the underlying
storage. You can also specify manifest files to control how the file set is
constructed. This option is only applicable to object storage systems.
Corresponds to the JSON property fileSetSpecType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4624

def file_set_spec_type
  @file_set_spec_type
end
#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions
Options for configuring hive partitioning detection.
Corresponds to the JSON property hivePartitioningOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4629

def hive_partitioning_options
  @hive_partitioning_options
end
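A hedged sketch of enabling hive partition detection for a Parquet load; the mode and source_uri_prefix attribute names come from HivePartitioningOptions, and the URIs are placeholders:

config = Bq::JobConfigurationLoad.new(
  source_format: "PARQUET",
  source_uris:   ["gs://my-bucket/table/*"],   # placeholder
  hive_partitioning_options: Bq::HivePartitioningOptions.new(
    mode:              "AUTO",                 # infer partition key types
    source_uri_prefix: "gs://my-bucket/table"  # placeholder common prefix
  )
)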
#ignore_unknown_values ⇒ Boolean Also known as: ignore_unknown_values?
Optional. Indicates if BigQuery should allow extra values that are not
represented in the table schema. If true, the extra values are ignored. If
false, records with extra columns are treated as bad records, and if there are
too many bad records, an invalid error is returned in the job result. The
default value is false. The sourceFormat property determines what BigQuery
treats as an extra value: * CSV: Trailing columns. * JSON: Named values that
don't match any column names in the table schema. * Avro, Parquet, ORC: Fields
in the file schema that don't exist in the table schema.
Corresponds to the JSON property ignoreUnknownValues
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4641

def ignore_unknown_values
  @ignore_unknown_values
end
#json_extension ⇒ String
Optional. Load option to be used together with source_format newline-delimited
JSON to indicate that a variant of JSON is being loaded. To load newline-
delimited GeoJSON, specify GEOJSON (and source_format must be set to
NEWLINE_DELIMITED_JSON).
Corresponds to the JSON property jsonExtension
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4650

def json_extension
  @json_extension
end
#max_bad_records ⇒ Fixnum
Optional. The maximum number of bad records that BigQuery can ignore when
running the job. If the number of bad records exceeds this value, an invalid
error is returned in the job result. The default value is 0, which requires
that all records are valid. This is only supported for CSV and
NEWLINE_DELIMITED_JSON file formats.
Corresponds to the JSON property maxBadRecords
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4659

def max_bad_records
  @max_bad_records
end
#null_marker ⇒ String
Optional. Specifies a string that represents a null value in a CSV file. For
example, if you specify "\N", BigQuery interprets "\N" as a null value when
loading a CSV file. The default value is the empty string. If you set this
property to a custom value, BigQuery throws an error if an empty string is
present for all data types except for STRING and BYTE. For STRING and BYTE
columns, BigQuery interprets the empty string as an empty value.
Corresponds to the JSON property nullMarker
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4669

def null_marker
  @null_marker
end
#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions
Parquet options for load jobs and external tables.
Corresponds to the JSON property parquetOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4674

def parquet_options
  @parquet_options
end
#preserve_ascii_control_characters ⇒ Boolean Also known as: preserve_ascii_control_characters?
Optional. When sourceFormat is set to "CSV", this indicates whether the
embedded ASCII control characters (the first 32 characters in the ASCII-table,
from '\x00' to '\x1F') are preserved.
Corresponds to the JSON property preserveAsciiControlCharacters
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4681

def preserve_ascii_control_characters
  @preserve_ascii_control_characters
end
#projection_fields ⇒ Array<String>
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity
properties to load into BigQuery from a Cloud Datastore backup. Property names
are case sensitive and must be top-level properties. If no properties are
specified, BigQuery loads all properties. If any named property isn't found in
the Cloud Datastore backup, an invalid error is returned in the job result.
Corresponds to the JSON property projectionFields
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4691

def projection_fields
  @projection_fields
end
#quote ⇒ String
Optional. The value that is used to quote data sections in a CSV file.
BigQuery converts the string to ISO-8859-1 encoding, and then uses the first
byte of the encoded string to split the data in its raw, binary state. The
default value is a double-quote ('"'). If your data does not contain quoted
sections, set the property value to an empty string. If your data contains
quoted newline characters, you must also set the allowQuotedNewlines property
to true. To include the specific quote character within a quoted value,
precede it with an additional matching quote character. For example, if you
want to escape the default character ' " ', use ' "" '. @default "
Corresponds to the JSON property quote
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4704

def quote
  @quote
end
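The CSV dials above interact, so here is one hedged bundle that puts them together (values are purely illustrative):

config = Bq::JobConfigurationLoad.new(
  source_format:         "CSV",
  field_delimiter:       "\t",    # tab-separated input
  quote:                 "",      # no quoted sections in the data
  allow_quoted_newlines: false,   # consistent with an empty quote
  null_marker:           "\\N",   # treat \N as NULL
  skip_leading_rows:     1        # skip the header row
)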
#range_partitioning ⇒ Google::Apis::BigqueryV2::RangePartitioning
Range partitioning specification for the destination table. Only one of
timePartitioning and rangePartitioning should be specified.
Corresponds to the JSON property rangePartitioning
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4710

def range_partitioning
  @range_partitioning
end
#reference_file_schema_uri ⇒ String
Optional. The user can provide a reference file with the reader schema. This
file is only loaded if it is part of source URIs, but is not loaded otherwise.
It is enabled for the following formats: AVRO, PARQUET, ORC.
Corresponds to the JSON property referenceFileSchemaUri
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4717

def reference_file_schema_uri
  @reference_file_schema_uri
end
#schema ⇒ Google::Apis::BigqueryV2::TableSchema
Schema of a table
Corresponds to the JSON property schema
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4722

def schema
  @schema
end
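An explicit schema, as an alternative to autodetect, might look like this (column names are placeholders):

config = Bq::JobConfigurationLoad.new(
  schema: Bq::TableSchema.new(
    fields: [
      Bq::TableFieldSchema.new(name: "name",  type: "STRING",  mode: "REQUIRED"),
      Bq::TableFieldSchema.new(name: "count", type: "INTEGER", mode: "NULLABLE")
    ]
  )
)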
#schema_inline ⇒ String
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,
Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT".
Corresponds to the JSON property schemaInline
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4728

def schema_inline
  @schema_inline
end
#schema_inline_format ⇒ String
[Deprecated] The format of the schemaInline property.
Corresponds to the JSON property schemaInlineFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4733

def schema_inline_format
  @schema_inline_format
end
#schema_update_options ⇒ Array<String>
Allows the schema of the destination table to be updated as a side effect of
the load job if a schema is autodetected or supplied in the job configuration.
Schema update options are supported in two cases: when writeDisposition is
WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination
table is a partition of a table, specified by partition decorators. For normal
tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the
following values are specified: * ALLOW_FIELD_ADDITION: allow adding a
nullable field to the schema. * ALLOW_FIELD_RELAXATION: allow relaxing a
required field in the original schema to nullable.
Corresponds to the JSON property schemaUpdateOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4746

def schema_update_options
  @schema_update_options
end
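For example, an append that is allowed to widen the destination schema could be configured like so:

config = Bq::JobConfigurationLoad.new(
  write_disposition:     "WRITE_APPEND",
  schema_update_options: ["ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"]
)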
#skip_leading_rows ⇒ Fixnum
Optional. The number of rows at the top of a CSV file that BigQuery will skip
when loading the data. The default value is 0. This property is useful if you
have header rows in the file that should be skipped. When autodetect is on,
the behavior is the following: * skipLeadingRows unspecified - Autodetect
tries to detect headers in the first row. If they are not detected, the row is
read as data. Otherwise data is read starting from the second row. *
skipLeadingRows is 0 - Instructs autodetect that there are no headers and data
should be read starting from the first row. * skipLeadingRows = N > 0 -
Autodetect skips N-1 rows and tries to detect headers in row N. If headers are
not detected, row N is just skipped. Otherwise row N is used to extract column
names for the detected schema.
Corresponds to the JSON property skipLeadingRows
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4761

def skip_leading_rows
  @skip_leading_rows
end
#source_format ⇒ String
Optional. The format of the data files. For CSV files, specify "CSV". For
datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON,
specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet,
specify "PARQUET". For orc, specify "ORC". The default value is CSV.
Corresponds to the JSON property sourceFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4769

def source_format
  @source_format
end
#source_uris ⇒ Array<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud.
For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character
and it must come after the 'bucket' name. Size limits related to load jobs
apply to external data sources. For Google Cloud Bigtable URIs: Exactly one
URI can be specified and it has to be a fully specified and valid HTTPS URL
for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly
one URI can be specified. Also, the '*' wildcard character is not allowed.
Corresponds to the JSON property sourceUris
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4780

def source_uris
  @source_uris
end
#time_partitioning ⇒ Google::Apis::BigqueryV2::TimePartitioning
Time-based partitioning specification for the destination table. Only one of
timePartitioning and rangePartitioning should be specified.
Corresponds to the JSON property timePartitioning
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4786

def time_partitioning
  @time_partitioning
end
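Partitioning and clustering of the destination table can be set together; a sketch with placeholder column names:

config = Bq::JobConfigurationLoad.new(
  time_partitioning: Bq::TimePartitioning.new(type: "DAY", field: "event_ts"),
  clustering:        Bq::Clustering.new(fields: ["customer_id", "country"])
)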
#use_avro_logical_types ⇒ Boolean Also known as: use_avro_logical_types?
Optional. If sourceFormat is set to "AVRO", indicates whether to interpret
logical types as the corresponding BigQuery data type (for example, TIMESTAMP),
instead of using the raw type (for example, INTEGER).
Corresponds to the JSON property useAvroLogicalTypes
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4793

def use_avro_logical_types
  @use_avro_logical_types
end
#write_disposition ⇒ String
Optional. Specifies the action that occurs if the destination table already
exists. The following values are supported: * WRITE_TRUNCATE: If the table
already exists, BigQuery overwrites the data, removes the constraints and uses
the schema from the load job. * WRITE_APPEND: If the table already exists,
BigQuery appends the data to the table. * WRITE_EMPTY: If the table already
exists and contains data, a 'duplicate' error is returned in the job result.
The default value is WRITE_APPEND. Each action is atomic and only occurs if
BigQuery is able to complete the job successfully. Creation, truncation and
append actions occur as one atomic update upon job completion.
Corresponds to the JSON property writeDisposition
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4807

def write_disposition
  @write_disposition
end
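Combining the two dispositions gives fail-fast semantics; for instance:

config = Bq::JobConfigurationLoad.new(
  create_disposition: "CREATE_NEVER",  # error if the table does not exist
  write_disposition:  "WRITE_EMPTY"    # error if the table already has data
)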
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4814

def update!(**args)
  @allow_jagged_rows = args[:allow_jagged_rows] if args.key?(:allow_jagged_rows)
  @allow_quoted_newlines = args[:allow_quoted_newlines] if args.key?(:allow_quoted_newlines)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @clustering = args[:clustering] if args.key?(:clustering)
  @column_name_character_map = args[:column_name_character_map] if args.key?(:column_name_character_map)
  @connection_properties = args[:connection_properties] if args.key?(:connection_properties)
  @copy_files_only = args[:copy_files_only] if args.key?(:copy_files_only)
  @create_disposition = args[:create_disposition] if args.key?(:create_disposition)
  @create_session = args[:create_session] if args.key?(:create_session)
  @decimal_target_types = args[:decimal_target_types] if args.key?(:decimal_target_types)
  @destination_encryption_configuration = args[:destination_encryption_configuration] if args.key?(:destination_encryption_configuration)
  @destination_table = args[:destination_table] if args.key?(:destination_table)
  @destination_table_properties = args[:destination_table_properties] if args.key?(:destination_table_properties)
  @encoding = args[:encoding] if args.key?(:encoding)
  @field_delimiter = args[:field_delimiter] if args.key?(:field_delimiter)
  @file_set_spec_type = args[:file_set_spec_type] if args.key?(:file_set_spec_type)
  @hive_partitioning_options = args[:hive_partitioning_options] if args.key?(:hive_partitioning_options)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @json_extension = args[:json_extension] if args.key?(:json_extension)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @null_marker = args[:null_marker] if args.key?(:null_marker)
  @parquet_options = args[:parquet_options] if args.key?(:parquet_options)
  @preserve_ascii_control_characters = args[:preserve_ascii_control_characters] if args.key?(:preserve_ascii_control_characters)
  @projection_fields = args[:projection_fields] if args.key?(:projection_fields)
  @quote = args[:quote] if args.key?(:quote)
  @range_partitioning = args[:range_partitioning] if args.key?(:range_partitioning)
  @reference_file_schema_uri = args[:reference_file_schema_uri] if args.key?(:reference_file_schema_uri)
  @schema = args[:schema] if args.key?(:schema)
  @schema_inline = args[:schema_inline] if args.key?(:schema_inline)
  @schema_inline_format = args[:schema_inline_format] if args.key?(:schema_inline_format)
  @schema_update_options = args[:schema_update_options] if args.key?(:schema_update_options)
  @skip_leading_rows = args[:skip_leading_rows] if args.key?(:skip_leading_rows)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @time_partitioning = args[:time_partitioning] if args.key?(:time_partitioning)
  @use_avro_logical_types = args[:use_avro_logical_types] if args.key?(:use_avro_logical_types)
  @write_disposition = args[:write_disposition] if args.key?(:write_disposition)
end
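Since the constructor simply delegates to update!, both can set the same keyword properties:

config = Bq::JobConfigurationLoad.new(source_format: "CSV")
config.update!(max_bad_records: 10, ignore_unknown_values: true)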