google.cloud.bigquery.job.LoadJobConfig¶
class google.cloud.bigquery.job.LoadJobConfig(**kwargs)[source]¶
Configuration options for load jobs.
All properties in this class are optional; properties left as None use the server defaults. Set properties on the constructed configuration by using the property name as the name of a keyword argument.
Methods
__init__(**kwargs)
    Initialize self.
from_api_repr(resource)
    Factory: construct a job configuration given its API representation.
to_api_repr()
    Build an API representation of the job config.
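For orientation, a minimal usage sketch showing both ways of setting properties (passing them as keyword arguments at construction time, or assigning to them afterwards); the project, dataset, table, and bucket names below are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Placeholder destination table and source URI.
    table_id = "my-project.my_dataset.my_table"
    uri = "gs://my-bucket/data.csv"

    # Properties can be passed as keyword arguments at construction time...
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    )
    # ...or assigned afterwards through the corresponding property.
    job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE

    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()  # Wait for the load job to complete.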
Attributes
allow_jagged_rows
    Allow missing trailing optional columns (CSV only).
allow_quoted_newlines
    Allow quoted data containing newline characters (CSV only).
autodetect
    Automatically infer the schema from a sample of the data.
clustering_fields
    Fields defining clustering for the table.
create_disposition
    Specifies behavior for creating tables.
destination_encryption_configuration
    Custom encryption configuration for the destination table.
destination_table_description
    Name given to destination table.
destination_table_friendly_name
    Name given to destination table.
encoding
    The character encoding of the data.
field_delimiter
    The separator for fields in a CSV file.
hive_partitioning
    [Beta] When set, it configures hive partitioning support.
ignore_unknown_values
    Ignore extra values not represented in the table schema.
labels
    Labels for the job.
max_bad_records
    Number of invalid rows to ignore.
null_marker
    Represents a null value (CSV only).
quote_character
    Character used to quote data sections (CSV only).
range_partitioning
    Configures range-based partitioning for the destination table.
schema
    Schema of the destination table.
schema_update_options
    Specifies updates to the destination table schema to allow as a side effect of the load job.
skip_leading_rows
    Number of rows to skip when reading data (CSV only).
source_format
    File format of the data.
time_partitioning
    Specifies time-based partitioning for the destination table.
use_avro_logical_types
    For loads of Avro data, governs whether Avro logical types are converted to their corresponding BigQuery types (e.g. TIMESTAMP) rather than raw types (e.g. INTEGER).
write_disposition
    Action that occurs if the destination table already exists.
property allow_quoted_newlines¶
Allow quoted data containing newline characters (CSV only).
- Type: Optional[bool]
property autodetect¶
Automatically infer the schema from a sample of the data.
See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.autodetect
- Type: Optional[bool]
property clustering_fields¶
Fields defining clustering for the table (defaults to None). Clustering fields are immutable after table creation.
Note
BigQuery supports clustering for both partitioned and non-partitioned tables.
- Type: Optional[List[str]]
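For instance, a short sketch clustering the destination table by two placeholder columns:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig()
    # Cluster the destination table by these columns; the order determines sort priority.
    job_config.clustering_fields = ["customer_id", "transaction_date"]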
property create_disposition¶
Specifies behavior for creating tables.
- Type: Optional[google.cloud.bigquery.job.CreateDisposition]
property destination_encryption_configuration¶
Custom encryption configuration for the destination table.
Custom encryption configuration (e.g., Cloud KMS keys) or None if using default encryption.
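A sketch of pointing the destination table at a customer-managed key with EncryptionConfiguration; the Cloud KMS key name is a placeholder:

    from google.cloud import bigquery

    # Placeholder Cloud KMS key resource name.
    kms_key_name = (
        "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
    )

    job_config = bigquery.LoadJobConfig()
    job_config.destination_encryption_configuration = bigquery.EncryptionConfiguration(
        kms_key_name=kms_key_name
    )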
property encoding¶
The character encoding of the data.
See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.encoding
- Type: Optional[google.cloud.bigquery.job.Encoding]
classmethod from_api_repr(resource)¶
Factory: construct a job configuration given its API representation.
- Parameters: resource (Dict) – A job configuration in the same representation as is returned from the API.
- Returns: Configuration parsed from resource.
- Return type: google.cloud.bigquery.job._JobConfig
property hive_partitioning¶
[Beta] When set, it configures hive partitioning support.
Note
Experimental. This feature is experimental and might change or have limited support.
- Type: Optional[HivePartitioningOptions]
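A sketch of enabling hive partitioning support, assuming HivePartitioningOptions is importable from the top-level bigquery package and using a placeholder bucket layout:

    from google.cloud import bigquery

    hive_opts = bigquery.HivePartitioningOptions()
    hive_opts.mode = "AUTO"
    # Placeholder prefix below which the hive-partitioned files live.
    hive_opts.source_uri_prefix = "gs://my-bucket/tables/my_table/"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        hive_partitioning=hive_opts,
    )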
property ignore_unknown_values¶
Ignore extra values not represented in the table schema.
- Type: Optional[bool]
property labels¶
Labels for the job.
This method always returns a dict. To change a job’s labels, modify the dict, then call Client.update_job. To delete a label, set its value to None before updating.
- Raises: ValueError – If value type is invalid.
- Type: Dict[str, str]
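For example, attaching a couple of placeholder labels to the job:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig()
    # Labels must be a dict mapping string keys to string values.
    job_config.labels = {"team": "analytics", "environment": "dev"}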
property null_marker¶
Represents a null value (CSV only).
See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.null_marker
- Type: Optional[str]
property quote_character¶
Character used to quote data sections (CSV only).
See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.quote
- Type: Optional[str]
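The CSV-only options above, together with the related field_delimiter and skip_leading_rows attributes, can be combined in a single configuration; a sketch with placeholder values:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        field_delimiter=";",
        quote_character='"',
        null_marker=r"\N",           # Treat the literal string \N as NULL.
        allow_quoted_newlines=True,  # Quoted fields may span multiple lines.
        skip_leading_rows=1,         # Skip the header row.
    )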
property range_partitioning¶
Optional[google.cloud.bigquery.table.RangePartitioning]: Configures range-based partitioning for the destination table.
Note
Beta. The integer range partitioning feature is in a pre-release state and might change or have limited support.
Only specify at most one of time_partitioning or range_partitioning.
- Raises: ValueError – If the value is not RangePartitioning or None.
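A sketch using the RangePartitioning and PartitionRange helpers from google.cloud.bigquery.table; the column name and bounds are placeholders:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig()
    job_config.range_partitioning = bigquery.RangePartitioning(
        field="customer_id",  # Must be a top-level INTEGER column.
        range_=bigquery.PartitionRange(start=0, end=100000, interval=1000),
    )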
property schema¶
Schema of the destination table.
See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.schema
- Type: Optional[Sequence[Union[SchemaField, Mapping[str, Any]]]]
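Entries may be SchemaField objects or plain mappings in the REST API representation; a short sketch with placeholder columns:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig()
    job_config.schema = [
        bigquery.SchemaField("full_name", "STRING", mode="REQUIRED"),
        # Mappings use the REST API field names.
        {"name": "age", "type": "INTEGER", "mode": "NULLABLE"},
    ]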
property schema_update_options¶
Specifies updates to the destination table schema to allow as a side effect of the load job.
- Type: Optional[List[google.cloud.bigquery.job.SchemaUpdateOption]]
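For example, a sketch that lets the load add new nullable columns while appending to an existing table:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        schema_update_options=[
            bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION,
        ],
    )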
property source_format¶
File format of the data.
- Type: Optional[google.cloud.bigquery.job.SourceFormat]
property time_partitioning¶
Specifies time-based partitioning for the destination table.
Only specify at most one of time_partitioning or range_partitioning.
- Type: Optional[google.cloud.bigquery.table.TimePartitioning]
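A sketch using TimePartitioning from google.cloud.bigquery.table; the column name is a placeholder:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig()
    job_config.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="transaction_date",  # Omit to partition by ingestion time instead.
    )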
to_api_repr()¶
Build an API representation of the job config.
- Returns: A dictionary in the format used by the BigQuery API.
- Return type: Dict
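A short sketch of round-tripping a configuration through its API representation with to_api_repr and from_api_repr:

    from google.cloud import bigquery

    original = bigquery.LoadJobConfig(autodetect=True)
    resource = original.to_api_repr()  # Plain dict in the BigQuery REST API format.
    restored = bigquery.LoadJobConfig.from_api_repr(resource)
    assert restored.autodetect is True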
property use_avro_logical_types¶
For loads of Avro data, governs whether Avro logical types are converted to their corresponding BigQuery types (e.g. TIMESTAMP) rather than raw types (e.g. INTEGER).
- Type: Optional[bool]
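For Avro loads, a sketch enabling the conversion:

    from google.cloud import bigquery

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.AVRO,
        # Load Avro logical types (e.g. timestamp-micros) as their BigQuery
        # equivalents (e.g. TIMESTAMP) instead of raw types (e.g. INTEGER).
        use_avro_logical_types=True,
    )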
property write_disposition¶
Action that occurs if the destination table already exists.
- Type: Optional[google.cloud.bigquery.job.WriteDisposition]