Class: Google::Apis::BigqueryV2::JobConfigurationLoad

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/bigquery_v2/classes.rb,
lib/google/apis/bigquery_v2/representations.rb

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ JobConfigurationLoad

Returns a new instance of JobConfigurationLoad.



# File 'lib/google/apis/bigquery_v2/classes.rb', line 4043

def initialize(**args)
   update!(**args)
end
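
Example (an illustrative sketch; all identifiers are placeholders): a load configuration can be built by passing keyword arguments to the constructor, using the snake_case attribute names listed below.

require 'google/apis/bigquery_v2'

# Build a minimal CSV load configuration via keyword arguments.
load_config = Google::Apis::BigqueryV2::JobConfigurationLoad.new(
  source_format: 'CSV',     # see #source_format
  autodetect: true,         # see #autodetect
  skip_leading_rows: 1      # see #skip_leading_rows
)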

Instance Attribute Details

#allow_jagged_rows ⇒ Boolean Also known as: allow_jagged_rows?

[Optional] Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats. Corresponds to the JSON property allowJaggedRows

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3770

def allow_jagged_rows
  @allow_jagged_rows
end

#allow_quoted_newlines ⇒ Boolean Also known as: allow_quoted_newlines?

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false. Corresponds to the JSON property allowQuotedNewlines

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3777

def allow_quoted_newlines
  @allow_quoted_newlines
end

#autodetect ⇒ Boolean Also known as: autodetect?

[Optional] Indicates if we should automatically infer the options and schema for CSV and JSON sources. Corresponds to the JSON property autodetect

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3784

def autodetect
  @autodetect
end

#clustering ⇒ Google::Apis::BigqueryV2::Clustering

[Beta] Clustering specification for the destination table. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered. Corresponds to the JSON property clustering



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3792

def clustering
  @clustering
end

#connection_properties ⇒ Array<Google::Apis::BigqueryV2::ConnectionProperty>

Connection properties. Corresponds to the JSON property connectionProperties



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3797

def connection_properties
  @connection_properties
end

#create_disposition ⇒ String

[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion. Corresponds to the JSON property createDisposition

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3807

def create_disposition
  @create_disposition
end

#create_session ⇒ Boolean Also known as: create_session?

If true, creates a new session, where the session ID will be a server-generated random ID. If false, runs the load job with an existing session_id passed in ConnectionProperty; otherwise runs the load job in non-session mode. Corresponds to the JSON property createSession

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3814

def create_session
  @create_session
end
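
Example (an illustrative sketch; the session ID is a hypothetical placeholder): to run the load job inside an existing session, leave create_session false and pass the session ID as a ConnectionProperty, which is assumed here to carry a key/value pair.

# Attach an existing session by its ID instead of creating a new one.
session_property = Google::Apis::BigqueryV2::ConnectionProperty.new(
  key: 'session_id',
  value: 'existing-session-id'   # hypothetical session id
)
load_config = Google::Apis::BigqueryV2::JobConfigurationLoad.new(
  create_session: false,
  connection_properties: [session_property]
)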

#decimal_target_types ⇒ Array<String>

[Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats. Corresponds to the JSON property decimalTargetTypes

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3836

def decimal_target_types
  @decimal_target_types
end

#destination_encryption_configuration ⇒ Google::Apis::BigqueryV2::EncryptionConfiguration

Custom encryption configuration (e.g., Cloud KMS keys). Corresponds to the JSON property destinationEncryptionConfiguration



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3841

def destination_encryption_configuration
  @destination_encryption_configuration
end

#destination_table ⇒ Google::Apis::BigqueryV2::TableReference

[Required] The destination table to load the data into. Corresponds to the JSON property destinationTable



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3846

def destination_table
  @destination_table
end
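
Example (an illustrative sketch; project, dataset and table IDs are placeholders), continuing the load_config sketch above:

# Point the load job at the destination table.
load_config.destination_table = Google::Apis::BigqueryV2::TableReference.new(
  project_id: 'my-project',
  dataset_id: 'my_dataset',
  table_id: 'events'
)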

#destination_table_properties ⇒ Google::Apis::BigqueryV2::DestinationTableProperties

[Beta] [Optional] Properties with which to create the destination table if it is new. Corresponds to the JSON property destinationTableProperties



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3852

def destination_table_properties
  @destination_table_properties
end

#encoding ⇒ String

[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties. Corresponds to the JSON property encoding

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3860

def encoding
  @encoding
end

#field_delimiter ⇒ String

[Optional] The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (','). Corresponds to the JSON property fieldDelimiter

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3870

def field_delimiter
  @field_delimiter
end

#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions

[Optional] Options to configure hive partitioning support. Corresponds to the JSON property hivePartitioningOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3875

def hive_partitioning_options
  @hive_partitioning_options
end

#ignore_unknown_values ⇒ Boolean Also known as: ignore_unknown_values?

[Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Corresponds to the JSON property ignoreUnknownValues

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3886

def ignore_unknown_values
  @ignore_unknown_values
end

#json_extension ⇒ String

[Optional] If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON. For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON: - for newline-delimited GeoJSON: set to GEOJSON. Corresponds to the JSON property jsonExtension

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3895

def json_extension
  @json_extension
end

#max_bad_records ⇒ Fixnum

[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV and JSON. The default value is 0, which requires that all records are valid. Corresponds to the JSON property maxBadRecords

Returns:

  • (Fixnum)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3903

def max_bad_records
  @max_bad_records
end

#null_marker ⇒ String

[Optional] Specifies a string that represents a null value in a CSV file. For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value. Corresponds to the JSON property nullMarker

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3913

def null_marker
  @null_marker
end

#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions

[Optional] Options to configure parquet support. Corresponds to the JSON property parquetOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3918

def parquet_options
  @parquet_options
end

#preserve_ascii_control_characters ⇒ Boolean Also known as: preserve_ascii_control_characters?

[Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII-table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV, ignored for other formats. Corresponds to the JSON property preserveAsciiControlCharacters

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3925

def preserve_ascii_control_characters
  @preserve_ascii_control_characters
end

#projection_fields ⇒ Array<String>

If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result. Corresponds to the JSON property projectionFields

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3935

def projection_fields
  @projection_fields
end

#quote ⇒ String

[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true. Corresponds to the JSON property quote

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3946

def quote
  @quote
end
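
Example (an illustrative sketch with arbitrary values), continuing the load_config sketch above: the CSV parsing options are typically set together.

# Tab-delimited file with quoted, multi-line fields and a custom null marker.
load_config.source_format = 'CSV'
load_config.field_delimiter = "\t"
load_config.quote = '"'
load_config.allow_quoted_newlines = true
load_config.null_marker = '\N'
load_config.encoding = 'UTF-8'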

#range_partitioning ⇒ Google::Apis::BigqueryV2::RangePartitioning

[TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified. Corresponds to the JSON property rangePartitioning



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3952

def range_partitioning
  @range_partitioning
end

#reference_file_schema_uri ⇒ String

User-provided reference file with the expected reader schema. Available for the formats: AVRO, PARQUET, ORC. Corresponds to the JSON property referenceFileSchemaUri

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3958

def reference_file_schema_uri
  @reference_file_schema_uri
end

#schema ⇒ Google::Apis::BigqueryV2::TableSchema

[Optional] The schema for the destination table. The schema can be omitted if the destination table already exists, or if you're loading data from Google Cloud Datastore. Corresponds to the JSON property schema



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3965

def schema
  @schema
end
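
Example (an illustrative sketch; the field names are placeholders), continuing the load_config sketch above: an explicit schema can be supplied instead of relying on autodetect.

# Describe the destination columns explicitly.
load_config.schema = Google::Apis::BigqueryV2::TableSchema.new(
  fields: [
    Google::Apis::BigqueryV2::TableFieldSchema.new(name: 'name', type: 'STRING', mode: 'REQUIRED'),
    Google::Apis::BigqueryV2::TableFieldSchema.new(name: 'age', type: 'INTEGER', mode: 'NULLABLE')
  ]
)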

#schema_inline ⇒ String

[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[, Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT". Corresponds to the JSON property schemaInline

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3971

def schema_inline
  @schema_inline
end

#schema_inline_format ⇒ String

[Deprecated] The format of the schemaInline property. Corresponds to the JSON property schemaInlineFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3976

def schema_inline_format
  @schema_inline_format
end

#schema_update_options ⇒ Array<String>

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable. Corresponds to the JSON property schemaUpdateOptions

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3989

def schema_update_options
  @schema_update_options
end

#skip_leading_rows ⇒ Fixnum

[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. Corresponds to the JSON property skipLeadingRows

Returns:

  • (Fixnum)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3996

def skip_leading_rows
  @skip_leading_rows
end

#source_format ⇒ String

[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". The default value is CSV. Corresponds to the JSON property sourceFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 4004

def source_format
  @source_format
end

#source_uris ⇒ Array<String>

[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed. Corresponds to the JSON property sourceUris

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 4015

def source_uris
  @source_uris
end
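
Example (an illustrative sketch; the bucket and object names are placeholders), continuing the load_config sketch above: the source format and URIs are set together, and the '*' wildcard must come after the bucket name.

# Load newline-delimited JSON objects that share a common prefix.
load_config.source_format = 'NEWLINE_DELIMITED_JSON'
load_config.source_uris = ['gs://my-bucket/exports/2023-01-01/*.json']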

#time_partitioning ⇒ Google::Apis::BigqueryV2::TimePartitioning

Time-based partitioning specification for the destination table. Only one of timePartitioning and rangePartitioning should be specified. Corresponds to the JSON property timePartitioning



# File 'lib/google/apis/bigquery_v2/classes.rb', line 4021

def time_partitioning
  @time_partitioning
end
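
Example (an illustrative sketch; the column names are placeholders), continuing the load_config sketch above: time-based partitioning on the destination table can be combined with clustering.

# Partition by day on a date column and cluster within each partition.
load_config.time_partitioning = Google::Apis::BigqueryV2::TimePartitioning.new(
  type: 'DAY',
  field: 'event_date'
)
load_config.clustering = Google::Apis::BigqueryV2::Clustering.new(
  fields: ['customer_id', 'country']
)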

#use_avro_logical_types ⇒ Boolean Also known as: use_avro_logical_types?

[Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER). Corresponds to the JSON property useAvroLogicalTypes

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 4028

def use_avro_logical_types
  @use_avro_logical_types
end

#write_disposition ⇒ String

[Optional] Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_APPEND. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Corresponds to the JSON property writeDisposition

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 4041

def write_disposition
  @write_disposition
end
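
Example (an illustrative sketch), continuing the load_config sketch above: the dispositions are commonly combined with schemaUpdateOptions when appending to an existing table.

# Append to the table, creating it if needed, and allow new nullable columns.
load_config.create_disposition = 'CREATE_IF_NEEDED'
load_config.write_disposition = 'WRITE_APPEND'
load_config.schema_update_options = ['ALLOW_FIELD_ADDITION']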

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/bigquery_v2/classes.rb', line 4048

def update!(**args)
  @allow_jagged_rows = args[:allow_jagged_rows] if args.key?(:allow_jagged_rows)
  @allow_quoted_newlines = args[:allow_quoted_newlines] if args.key?(:allow_quoted_newlines)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @clustering = args[:clustering] if args.key?(:clustering)
  @connection_properties = args[:connection_properties] if args.key?(:connection_properties)
  @create_disposition = args[:create_disposition] if args.key?(:create_disposition)
  @create_session = args[:create_session] if args.key?(:create_session)
  @decimal_target_types = args[:decimal_target_types] if args.key?(:decimal_target_types)
  @destination_encryption_configuration = args[:destination_encryption_configuration] if args.key?(:destination_encryption_configuration)
  @destination_table = args[:destination_table] if args.key?(:destination_table)
  @destination_table_properties = args[:destination_table_properties] if args.key?(:destination_table_properties)
  @encoding = args[:encoding] if args.key?(:encoding)
  @field_delimiter = args[:field_delimiter] if args.key?(:field_delimiter)
  @hive_partitioning_options = args[:hive_partitioning_options] if args.key?(:hive_partitioning_options)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @json_extension = args[:json_extension] if args.key?(:json_extension)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @null_marker = args[:null_marker] if args.key?(:null_marker)
  @parquet_options = args[:parquet_options] if args.key?(:parquet_options)
  @preserve_ascii_control_characters = args[:preserve_ascii_control_characters] if args.key?(:preserve_ascii_control_characters)
  @projection_fields = args[:projection_fields] if args.key?(:projection_fields)
  @quote = args[:quote] if args.key?(:quote)
  @range_partitioning = args[:range_partitioning] if args.key?(:range_partitioning)
  @reference_file_schema_uri = args[:reference_file_schema_uri] if args.key?(:reference_file_schema_uri)
  @schema = args[:schema] if args.key?(:schema)
  @schema_inline = args[:schema_inline] if args.key?(:schema_inline)
  @schema_inline_format = args[:schema_inline_format] if args.key?(:schema_inline_format)
  @schema_update_options = args[:schema_update_options] if args.key?(:schema_update_options)
  @skip_leading_rows = args[:skip_leading_rows] if args.key?(:skip_leading_rows)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @time_partitioning = args[:time_partitioning] if args.key?(:time_partitioning)
  @use_avro_logical_types = args[:use_avro_logical_types] if args.key?(:use_avro_logical_types)
  @write_disposition = args[:write_disposition] if args.key?(:write_disposition)
end
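
Example (an illustrative sketch; the project ID is a placeholder and credential setup is assumed to happen elsewhere): update! merges additional attributes into an existing configuration, and the configuration is submitted by wrapping it in a Job and passing it to BigqueryService#insert_job.

# Merge further options into the existing configuration.
load_config.update!(write_disposition: 'WRITE_TRUNCATE', max_bad_records: 10)

# Wrap the load configuration in a job and submit it.
job = Google::Apis::BigqueryV2::Job.new(
  configuration: Google::Apis::BigqueryV2::JobConfiguration.new(load: load_config)
)
bigquery = Google::Apis::BigqueryV2::BigqueryService.new
# bigquery.authorization = ...   # authorization assumed to be configured
inserted_job = bigquery.insert_job('my-project', job)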