Class: Google::Cloud::Bigquery::LoadJob::Updater
- Inherits: Google::Cloud::Bigquery::LoadJob
  (ancestry: Object > Job > Google::Cloud::Bigquery::LoadJob > Google::Cloud::Bigquery::LoadJob::Updater)
- Defined in: lib/google/cloud/bigquery/load_job.rb
Overview
Yielded to a block to accumulate changes for a patch request.
Attributes
-
#updates ⇒ Object
readonly
A list of attributes that were updated.
Instance Methods
-
#autodetect=(val) ⇒ Object
Allows BigQuery to autodetect the schema.
- #cancel ⇒ Object
-
#clustering_fields=(fields) ⇒ Object
Sets the list of fields on which data should be clustered.
-
#column_name_character_map=(new_character_map) ⇒ Object
Sets the character map for column name conversion.
-
#create=(new_create) ⇒ Object
Sets the create disposition.
-
#create_session=(value) ⇒ Object
Sets the create_session property.
-
#delimiter=(val) ⇒ Object
Sets the separator for fields in a CSV file.
-
#encoding=(val) ⇒ Object
Sets the character encoding of the data.
-
#encryption=(val) ⇒ Object
Sets the encryption configuration of the destination table.
-
#format=(new_format) ⇒ Object
Sets the source file format.
-
#hive_partitioning_mode=(mode) ⇒ Object
Sets the mode of hive partitioning to use when reading data.
-
#hive_partitioning_source_uri_prefix=(source_uri_prefix) ⇒ Object
Sets the common prefix for all source URIs when hive partition detection is requested.
-
#ignore_unknown=(val) ⇒ Object
Allows unknown columns to be ignored.
-
#jagged_rows=(val) ⇒ Object
Sets the flag for allowing jagged rows.
-
#labels=(val) ⇒ Object
Sets the labels to use for the load job.
-
#location=(value) ⇒ Object
Sets the geographic location where the job should run.
-
#max_bad_records=(val) ⇒ Object
Sets the maximum number of bad records that can be ignored.
-
#null_marker=(val) ⇒ Object
Sets the string that represents a null value in a CSV file.
-
#parquet_enable_list_inference=(enable_list_inference) ⇒ Object
Sets whether to use schema inference specifically for Parquet LIST logical type.
-
#parquet_enum_as_string=(enum_as_string) ⇒ Object
Sets whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
-
#projection_fields=(new_fields) ⇒ Object
Sets the projection fields.
-
#quote=(val) ⇒ Object
Sets the character to use to quote string values in CSVs.
-
#quoted_newlines=(val) ⇒ Object
Allows quoted data sections to contain newline characters in CSV.
-
#range_partitioning_end=(range_end) ⇒ Object
Sets the end of range partitioning, exclusive, for the destination table.
-
#range_partitioning_field=(field) ⇒ Object
Sets the field on which to range partition the table.
-
#range_partitioning_interval=(range_interval) ⇒ Object
Sets width of each interval for data in range partitions.
-
#range_partitioning_start=(range_start) ⇒ Object
Sets the start of range partitioning, inclusive, for the destination table.
- #reload! ⇒ Object (also: #refresh!)
- #rerun! ⇒ Object
-
#schema_update_options=(new_options) ⇒ Object
Sets the schema update options, which allow the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.
-
#session_id=(value) ⇒ Object
Sets the session ID for a query run in session mode.
-
#skip_leading=(val) ⇒ Object
Sets the number of leading rows to skip in the file.
-
#source_uris=(new_uris) ⇒ Object
Sets the source URIs to load.
-
#time_partitioning_expiration=(expiration) ⇒ Object
Sets the time partition expiration for the destination table.
-
#time_partitioning_field=(field) ⇒ Object
Sets the field on which to time partition the destination table.
-
#time_partitioning_require_filter=(val) ⇒ Object
If set to true, queries over the destination table must specify a time partition filter that can be used for partition elimination.
-
#time_partitioning_type=(type) ⇒ Object
Sets the time partitioning for the destination table.
- #wait_until_done! ⇒ Object
-
#write=(new_write) ⇒ Object
Sets the write disposition.
Schema Methods
-
#bignumeric(name, description: nil, mode: :nullable, policy_tags: nil, precision: nil, scale: nil, default_value_expression: nil) ⇒ Object
Adds a bignumeric number field to the schema.
-
#boolean(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a boolean field to the schema.
-
#bytes(name, description: nil, mode: :nullable, policy_tags: nil, max_length: nil, default_value_expression: nil) ⇒ Object
Adds a bytes field to the schema.
-
#check_for_mutated_schema! ⇒ Object
Make sure any schema changes are saved.
-
#date(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a date field to the schema.
-
#datetime(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a datetime field to the schema.
-
#float(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a floating-point number field to the schema.
-
#geography(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a geography field to the schema.
-
#integer(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds an integer field to the schema.
-
#json(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a JSON field to the schema.
-
#numeric(name, description: nil, mode: :nullable, policy_tags: nil, precision: nil, scale: nil, default_value_expression: nil) ⇒ Object
Adds a numeric number field to the schema.
-
#record(name, description: nil, mode: nil, default_value_expression: nil) {|nested_schema| ... } ⇒ Object
Adds a record field to the schema.
-
#schema(replace: false) {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema
Returns the table's schema.
-
#schema=(new_schema) ⇒ Object
Sets the schema of the destination table.
-
#string(name, description: nil, mode: :nullable, policy_tags: nil, max_length: nil, default_value_expression: nil) ⇒ Object
Adds a string field to the schema.
-
#time(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a time field to the schema.
-
#timestamp(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a timestamp field to the schema.
Methods inherited from Google::Cloud::Bigquery::LoadJob
#allow_jagged_rows?, #autodetect?, #backup?, #clustering?, #clustering_fields, #csv?, #delimiter, #destination, #encryption, #hive_partitioning?, #hive_partitioning_mode, #hive_partitioning_source_uri_prefix, #ignore_unknown_values?, #input_file_bytes, #input_files, #iso8859_1?, #json?, #max_bad_records, #null_marker, #orc?, #output_bytes, #output_rows, #parquet?, #parquet_enable_list_inference?, #parquet_enum_as_string?, #parquet_options?, #quote, #quoted_newlines?, #range_partitioning?, #range_partitioning_end, #range_partitioning_field, #range_partitioning_interval, #range_partitioning_start, #schema_update_options, #skip_leading_rows, #sources, #time_partitioning?, #time_partitioning_expiration, #time_partitioning_field, #time_partitioning_require_filter?, #time_partitioning_type, #utf8?
Methods inherited from Job
#configuration, #created_at, #delete, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #num_child_jobs, #parent_job_id, #pending?, #project_id, #reservation_usage, #running?, #script_statistics, #session_id, #started_at, #state, #statistics, #status, #transaction_id, #user_email
Instance Attribute Details
#updates ⇒ Object (readonly)
A list of attributes that were updated.
# File 'lib/google/cloud/bigquery/load_job.rb', line 660

def updates
  @updates
end
Instance Method Details
#autodetect=(val) ⇒ Object
Allows BigQuery to autodetect the schema.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1909

def autodetect= val
  @gapi.configuration.load.update! autodetect: val
end
#bignumeric(name, description: nil, mode: :nullable, policy_tags: nil, precision: nil, scale: nil, default_value_expression: nil) ⇒ Object
Adds a bignumeric number field to the schema. BIGNUMERIC is a decimal
type with fixed precision and scale. Precision is the number of digits
that the number contains. Scale is how many of these digits appear
after the decimal point. It supports:
- Precision: 76.76 (the 77th digit is partial)
- Scale: 38
- Min: -5.7896044618658097711785492504343953926634992332820282019728792003956564819968E+38
- Max: 5.7896044618658097711785492504343953926634992332820282019728792003956564819967E+38
This type can represent decimal fractions exactly, and is suitable for financial calculations.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1085

def bignumeric name, description: nil, mode: :nullable, policy_tags: nil, precision: nil,
               scale: nil, default_value_expression: nil
  schema.bignumeric name, description: description, mode: mode, policy_tags: policy_tags,
                    precision: precision, scale: scale,
                    default_value_expression: default_value_expression
end
#boolean(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a boolean field to the schema.
See Schema#boolean.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1149

def boolean name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.boolean name, description: description, mode: mode, policy_tags: policy_tags,
                 default_value_expression: default_value_expression
end
#bytes(name, description: nil, mode: :nullable, policy_tags: nil, max_length: nil, default_value_expression: nil) ⇒ Object
Adds a bytes field to the schema.
See Schema#bytes.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1213

def bytes name, description: nil, mode: :nullable, policy_tags: nil, max_length: nil,
          default_value_expression: nil
  schema.bytes name, description: description, mode: mode, policy_tags: policy_tags,
               max_length: max_length, default_value_expression: default_value_expression
end
#cancel ⇒ Object
# File 'lib/google/cloud/bigquery/load_job.rb', line 2588

def cancel
  raise "not implemented in #{self.class}"
end
#check_for_mutated_schema! ⇒ Object
Make sure any schema changes are saved.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1670

def check_for_mutated_schema!
  return if @schema.nil?
  return unless @schema.changed?
  @gapi.configuration.load.schema = @schema.to_gapi
  patch_gapi! :schema
end
#clustering_fields=(fields) ⇒ Object
Sets the list of fields on which data should be clustered.
Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
BigQuery supports clustering for both partitioned and non-partitioned tables.
See Google::Cloud::Bigquery::LoadJob#clustering_fields, Table#clustering_fields and Table#clustering_fields=.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2583

def clustering_fields= fields
  @gapi.configuration.load.clustering ||= Google::Apis::BigqueryV2::Clustering.new
  @gapi.configuration.load.clustering.fields = fields
end
#column_name_character_map=(new_character_map) ⇒ Object
Sets the character map for column name conversion. The default value
is default. The following values are supported:
- default
- strict
- v1
- v2
# File 'lib/google/cloud/bigquery/load_job.rb', line 1744

def column_name_character_map= new_character_map
  @gapi.configuration.load.update! column_name_character_map: Convert.character_map(new_character_map)
end
#create=(new_create) ⇒ Object
Sets the create disposition.
This specifies whether the job is allowed to create new tables. The
default value is needed. The following values are supported:
- needed - Create the table if it does not exist.
- never - The table must already exist. A 'notFound' error is raised if the table does not exist.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1764

def create= new_create
  @gapi.configuration.load.update! create_disposition: Convert.create_disposition(new_create)
end
#create_session=(value) ⇒ Object
Sets the create_session property. If true, creates a new session,
whose session ID will be a server-generated random ID. If false, runs
the query with an existing #session_id=, otherwise runs the query in
non-session mode. The default value is false.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1799

def create_session= value
  @gapi.configuration.load.create_session = value
end
#date(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a date field to the schema.
See Schema#date.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1459

def date name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.date name, description: description, mode: mode, policy_tags: policy_tags,
              default_value_expression: default_value_expression
end
#datetime(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a datetime field to the schema.
See Schema#datetime.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1397

def datetime name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.datetime name, description: description, mode: mode, policy_tags: policy_tags,
                  default_value_expression: default_value_expression
end
#delimiter=(val) ⇒ Object
Sets the separator for fields in a CSV file.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1936

def delimiter= val
  @gapi.configuration.load.update! field_delimiter: val
end
#encoding=(val) ⇒ Object
Sets the character encoding of the data.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1922

def encoding= val
  @gapi.configuration.load.update! encoding: val
end
#encryption=(val) ⇒ Object
Sets the encryption configuration of the destination table.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2074

def encryption= val
  @gapi.configuration.load.update! destination_encryption_configuration: val.to_gapi
end
#float(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a floating-point number field to the schema.
See Schema#float.
# File 'lib/google/cloud/bigquery/load_job.rb', line 920

def float name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.float name, description: description, mode: mode, policy_tags: policy_tags,
               default_value_expression: default_value_expression
end
#format=(new_format) ⇒ Object
Sets the source file format. The default value is csv.
The following values are supported:
- csv - CSV
- json - Newline-delimited JSON
- avro - Avro
- orc - ORC
- parquet - Parquet
- datastore_backup - Cloud Datastore backup
# File 'lib/google/cloud/bigquery/load_job.rb', line 1726

def format= new_format
  @gapi.configuration.load.update! source_format: Convert.source_format(new_format)
end
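The setter above delegates to an internal Convert.source_format helper. As an illustration of what that conversion does, here is a minimal pure-Ruby sketch; the enum strings come from the BigQuery REST API's sourceFormat field, and the real helper's exact behavior may differ:

```ruby
# Map friendly format symbols to the BigQuery API sourceFormat enum strings.
SOURCE_FORMATS = {
  csv:              "CSV",
  json:             "NEWLINE_DELIMITED_JSON",
  avro:             "AVRO",
  orc:              "ORC",
  parquet:          "PARQUET",
  datastore_backup: "DATASTORE_BACKUP"
}.freeze

def source_format format
  # Accept both symbols and strings, case-insensitively.
  SOURCE_FORMATS.fetch format.to_s.downcase.to_sym do
    raise ArgumentError, "unknown source format: #{format.inspect}"
  end
end

source_format :csv    # => "CSV"
source_format "json"  # => "NEWLINE_DELIMITED_JSON"
```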
#geography(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a geography field to the schema.
See Schema#geography.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1525

def geography name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.geography name, description: description, mode: mode, policy_tags: policy_tags,
                   default_value_expression: default_value_expression
end
#hive_partitioning_mode=(mode) ⇒ Object
Sets the mode of hive partitioning to use when reading data. The
following modes are supported:
- auto: automatically infer partition key name(s) and type(s).
- strings: automatically infer partition key name(s). All types are interpreted as strings.
- custom: partition key schema is encoded in the source URI prefix.
Not all storage formats support hive partitioning. Requesting hive
partitioning on an unsupported format will lead to an error. Currently
supported types include: avro, csv, json, orc, and parquet.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2139

def hive_partitioning_mode= mode
  @gapi.configuration.load.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.configuration.load.hive_partitioning_options.mode = mode.to_s.upcase
end
#hive_partitioning_source_uri_prefix=(source_uri_prefix) ⇒ Object
Sets the common prefix for all source URIs when hive partition detection is requested. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout:
gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro
When hive partitioning is requested with either AUTO or STRINGS mode,
the common prefix can be either of gs://bucket/path_to_table or
gs://bucket/path_to_table/ (trailing slash does not matter).
# File 'lib/google/cloud/bigquery/load_job.rb', line 2182

def hive_partitioning_source_uri_prefix= source_uri_prefix
  @gapi.configuration.load.hive_partitioning_options ||= Google::Apis::BigqueryV2::HivePartitioningOptions.new
  @gapi.configuration.load.hive_partitioning_options.source_uri_prefix = source_uri_prefix
end
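To illustrate what the prefix means: everything after it is interpreted as key=value path segments. A standalone Ruby sketch (not part of the gem) that extracts partition keys the way AUTO mode would from the layout above:

```ruby
# Extract hive partition key/value pairs from a source URI, given the
# common prefix. A trailing slash on the prefix does not matter.
def hive_partition_keys uri, prefix
  rest = uri.delete_prefix(prefix.chomp("/") + "/")
  rest.split("/").each_with_object({}) do |segment, keys|
    key, _, value = segment.partition "="
    keys[key] = value unless value.empty?  # skip the trailing file name
  end
end

uri = "gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro"
hive_partition_keys uri, "gs://bucket/path_to_table"
# => {"dt"=>"2019-01-01", "country"=>"BR", "id"=>"7"}
```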
#ignore_unknown=(val) ⇒ Object
Allows unknown columns to be ignored.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1958

def ignore_unknown= val
  @gapi.configuration.load.update! ignore_unknown_values: val
end
#integer(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds an integer field to the schema.
See Schema#integer.
# File 'lib/google/cloud/bigquery/load_job.rb', line 861

def integer name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.integer name, description: description, mode: mode, policy_tags: policy_tags,
                 default_value_expression: default_value_expression
end
#jagged_rows=(val) ⇒ Object
Sets the flag for allowing jagged rows.
Accept rows that are missing trailing optional columns. The missing
values are treated as nulls. If false, records with missing trailing
columns are treated as bad records, and if there are too many bad
records, an invalid error is returned in the job result. The default
value is false. Only applicable to CSV; ignored for other formats.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1883

def jagged_rows= val
  @gapi.configuration.load.update! allow_jagged_rows: val
end
#json(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a JSON field to the schema.
See Schema#json and https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#json_type.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1593

def json name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.json name, description: description, mode: mode, policy_tags: policy_tags,
              default_value_expression: default_value_expression
end
#labels=(val) ⇒ Object
Sets the labels to use for the load job.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2100

def labels= val
  @gapi.configuration.update! labels: val
end
#location=(value) ⇒ Object
Sets the geographic location where the job should run. Required except for US and EU.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1701

def location= value
  @gapi.job_reference.location = value
  return unless value.nil?

  # Treat assigning value of nil the same as unsetting the value.
  unset = @gapi.job_reference.instance_variables.include? :@location
  @gapi.job_reference.remove_instance_variable :@location if unset
end
#max_bad_records=(val) ⇒ Object
Sets the maximum number of bad records that can be ignored.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1973

def max_bad_records= val
  @gapi.configuration.load.update! max_bad_records: val
end
#null_marker=(val) ⇒ Object
Sets the string that represents a null value in a CSV file.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1991

def null_marker= val
  @gapi.configuration.load.update! null_marker: val
end
#numeric(name, description: nil, mode: :nullable, policy_tags: nil, precision: nil, scale: nil, default_value_expression: nil) ⇒ Object
Adds a numeric number field to the schema. NUMERIC is a decimal type
with fixed precision and scale. Precision is the number of digits that
the number contains. Scale is how many of these digits appear after
the decimal point. It supports:
- Precision: 38
- Scale: 9
- Min: -9.9999999999999999999999999999999999999E+28
- Max: 9.9999999999999999999999999999999999999E+28
This type can represent decimal fractions exactly, and is suitable for financial calculations.
See Schema#numeric.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1000

def numeric name, description: nil, mode: :nullable, policy_tags: nil, precision: nil,
            scale: nil, default_value_expression: nil
  schema.numeric name, description: description, mode: mode, policy_tags: policy_tags,
                 precision: precision, scale: scale,
                 default_value_expression: default_value_expression
end
#parquet_enable_list_inference=(enable_list_inference) ⇒ Object
Sets whether to use schema inference specifically for Parquet LIST logical type.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2212

def parquet_enable_list_inference= enable_list_inference
  @gapi.configuration.load.parquet_options ||= Google::Apis::BigqueryV2::ParquetOptions.new
  @gapi.configuration.load.parquet_options.enable_list_inference = enable_list_inference
end
#parquet_enum_as_string=(enum_as_string) ⇒ Object
Sets whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2242

def parquet_enum_as_string= enum_as_string
  @gapi.configuration.load.parquet_options ||= Google::Apis::BigqueryV2::ParquetOptions.new
  @gapi.configuration.load.parquet_options.enum_as_string = enum_as_string
end
#projection_fields=(new_fields) ⇒ Object
Sets the projection fields.
If the format option is set to datastore_backup, indicates which
entity properties to load from a Cloud Datastore backup. Property
names are case sensitive and must be top-level properties. If not set,
BigQuery loads all properties. If any named property isn't found in
the Cloud Datastore backup, an invalid error is returned.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1834

def projection_fields= new_fields
  if new_fields.nil?
    @gapi.configuration.load.update! projection_fields: nil
  else
    @gapi.configuration.load.update! projection_fields: Array(new_fields)
  end
end
#quote=(val) ⇒ Object
Sets the character to use to quote string values in CSVs.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2009

def quote= val
  @gapi.configuration.load.update! quote: val
end
#quoted_newlines=(val) ⇒ Object
Allows quoted data sections to contain newline characters in CSV.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1896

def quoted_newlines= val
  @gapi.configuration.load.update! allow_quoted_newlines: val
end
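The #delimiter=, #quote=, and #quoted_newlines= options correspond to standard CSV dialect settings. Purely as an illustration (using Ruby's stdlib CSV library, not the BigQuery service), here is how a custom delimiter and a quoted newline behave during parsing, assuming a pipe-delimited file:

```ruby
require "csv"

# A pipe-delimited file whose second field contains a quoted newline.
data = "name|note\nalice|\"line one\nline two\"\n"

# col_sep plays the role of #delimiter=, quote_char of #quote=; keeping
# the newline inside the quoted field mirrors what #quoted_newlines=
# permits on the BigQuery side.
rows = CSV.parse data, col_sep: "|", quote_char: "\""
rows.first # => ["name", "note"]
rows.last  # => ["alice", "line one\nline two"]
```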
#range_partitioning_end=(range_end) ⇒ Object
Sets the end of range partitioning, exclusive, for the destination table. See Creating and using integer range partitioned tables.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
See #range_partitioning_start=, #range_partitioning_interval= and #range_partitioning_field=.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2406

def range_partitioning_end= range_end
  @gapi.configuration.load.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.load.range_partitioning.range.end = range_end
end
#range_partitioning_field=(field) ⇒ Object
Sets the field on which to range partition the table. See Creating and using integer range partitioned tables.
See #range_partitioning_start=, #range_partitioning_interval= and #range_partitioning_end=.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2283

def range_partitioning_field= field
  @gapi.configuration.load.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.load.range_partitioning.field = field
end
#range_partitioning_interval=(range_interval) ⇒ Object
Sets width of each interval for data in range partitions. See Creating and using integer range partitioned tables.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
See #range_partitioning_field=, #range_partitioning_start= and #range_partitioning_end=.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2365

def range_partitioning_interval= range_interval
  @gapi.configuration.load.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.load.range_partitioning.range.interval = range_interval
end
#range_partitioning_start=(range_start) ⇒ Object
Sets the start of range partitioning, inclusive, for the destination table. See Creating and using integer range partitioned tables.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
See #range_partitioning_field=, #range_partitioning_interval= and #range_partitioning_end=.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2324

def range_partitioning_start= range_start
  @gapi.configuration.load.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.load.range_partitioning.range.start = range_start
end
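Together, the four range partitioning setters define partitions of width range_interval covering [range_start, range_end), with out-of-range rows going to the UNPARTITIONED partition. A small standalone sketch of that bucketing rule, with semantics assumed from BigQuery's integer range partitioning documentation:

```ruby
# Return the zero-based partition index for value, or nil when the value
# falls outside [range_start, range_end) and would be unpartitioned.
def range_partition_index value, range_start, range_end, range_interval
  return nil if value < range_start || value >= range_end
  (value - range_start) / range_interval
end

range_partition_index 57, 0, 100, 10   # => 5
range_partition_index 100, 0, 100, 10  # => nil (the end is exclusive)
```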
#record(name, description: nil, mode: nil, default_value_expression: nil) {|nested_schema| ... } ⇒ Object
Adds a record field to the schema. A block must be passed describing the nested fields of the record. For more information about nested and repeated records, see Loading denormalized, nested, and repeated data .
See Schema#record.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1663

def record name, description: nil, mode: nil, default_value_expression: nil, &block
  schema.record name, description: description, mode: mode,
                default_value_expression: default_value_expression, &block
end
#reload! ⇒ Object Also known as: refresh!
# File 'lib/google/cloud/bigquery/load_job.rb', line 2596

def reload!
  raise "not implemented in #{self.class}"
end
#rerun! ⇒ Object
# File 'lib/google/cloud/bigquery/load_job.rb', line 2592

def rerun!
  raise "not implemented in #{self.class}"
end
#schema(replace: false) {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema
Returns the table's schema. This method can also be used to set, replace, or add to the schema by passing a block. See Schema for available methods.
# File 'lib/google/cloud/bigquery/load_job.rb', line 703

def schema replace: false
  # Same as Table#schema, but not frozen
  # TODO: make sure to call ensure_full_data! on Dataset#update
  @schema ||= Schema.from_gapi @gapi.configuration.load.schema
  if block_given?
    @schema = Schema.from_gapi if replace
    yield @schema
    check_for_mutated_schema!
  end
  # Do not freeze on updater, allow modifications
  @schema
end
#schema=(new_schema) ⇒ Object
Sets the schema of the destination table.
# File 'lib/google/cloud/bigquery/load_job.rb', line 743

def schema= new_schema
  @schema = new_schema
end
#schema_update_options=(new_options) ⇒ Object
Sets the schema update options, which allow the schema of the
destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration.
Schema update options are supported in two cases: when write
disposition is WRITE_APPEND; and when write disposition is
WRITE_TRUNCATE and the destination table is a partition of a table,
specified by partition decorators. For normal tables, WRITE_TRUNCATE
will always overwrite the schema. One or more of the following values
are specified:
- ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
- ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2033

def schema_update_options= new_options
  if new_options.nil?
    @gapi.configuration.load.update! schema_update_options: nil
  else
    @gapi.configuration.load.update! schema_update_options: Array(new_options)
  end
end
#session_id=(value) ⇒ Object
Sets the session ID for a query run in session mode. See #create_session=.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1809

def session_id= value
  @gapi.configuration.load.connection_properties ||= []
  prop = @gapi.configuration.load.connection_properties.find { |cp| cp.key == "session_id" }
  if prop
    prop.value = value
  else
    prop = Google::Apis::BigqueryV2::ConnectionProperty.new key: "session_id", value: value
    @gapi.configuration.load.connection_properties << prop
  end
end
#skip_leading=(val) ⇒ Object
Sets the number of leading rows to skip in the file.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2051

def skip_leading= val
  @gapi.configuration.load.update! skip_leading_rows: val
end
#source_uris=(new_uris) ⇒ Object
Sets the source URIs to load.
The fully-qualified URIs that point to your data in Google Cloud.
- For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources.
- For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
- For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1860

def source_uris= new_uris
  if new_uris.nil?
    @gapi.configuration.load.update! source_uris: nil
  else
    @gapi.configuration.load.update! source_uris: Array(new_uris)
  end
end
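The Cloud Storage wildcard rule above (at most one '*', and only after the bucket name) can be checked locally. A sketch of that validation; this helper is illustrative only and not part of the gem:

```ruby
# Check a gs:// URI against the load-job wildcard constraints.
def valid_gcs_source_uri? uri
  return false unless uri.start_with? "gs://"
  bucket, _, object = uri.delete_prefix("gs://").partition "/"
  return false if bucket.include? "*" # the wildcard must come after the bucket name
  object.count("*") <= 1              # at most one wildcard character per URI
end

valid_gcs_source_uri? "gs://my-bucket/path/*.csv" # => true
valid_gcs_source_uri? "gs://my-*/path/file.csv"   # => false
```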
#string(name, description: nil, mode: :nullable, policy_tags: nil, max_length: nil, default_value_expression: nil) ⇒ Object
Adds a string field to the schema.
See Schema#string.
# File 'lib/google/cloud/bigquery/load_job.rb', line 802

def string name, description: nil, mode: :nullable, policy_tags: nil, max_length: nil,
           default_value_expression: nil
  schema.string name, description: description, mode: mode, policy_tags: policy_tags,
                max_length: max_length, default_value_expression: default_value_expression
end
#time(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a time field to the schema.
See Schema#time.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1335

def time name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.time name, description: description, mode: mode, policy_tags: policy_tags,
              default_value_expression: default_value_expression
end
#time_partitioning_expiration=(expiration) ⇒ Object
Sets the time partition expiration for the destination table. See Partitioned Tables.
The destination table must also be time partitioned. See #time_partitioning_type=.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2517

def time_partitioning_expiration= expiration
  @gapi.configuration.load.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.load.time_partitioning.update! expiration_ms: expiration * 1000
end
#time_partitioning_field=(field) ⇒ Object
Sets the field on which to time partition the destination table. If
not set, the destination table is time partitioned by pseudo column
_PARTITIONTIME; if set, the table is time partitioned by this field.
See Partitioned Tables.
The destination table must also be time partitioned. See #time_partitioning_type=.
You can only set the time partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2484

def time_partitioning_field= field
  @gapi.configuration.load.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.load.time_partitioning.update! field: field
end
#time_partitioning_require_filter=(val) ⇒ Object
If set to true, queries over the destination table must specify a time partition filter that can be used for partition elimination. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2533

def time_partitioning_require_filter= val
  @gapi.configuration.load.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.load.time_partitioning.update! require_partition_filter: val
end
#time_partitioning_type=(type) ⇒ Object
Sets the time partitioning for the destination table. See Partitioned Tables.
You can only set the time partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.
# File 'lib/google/cloud/bigquery/load_job.rb', line 2441

def time_partitioning_type= type
  @gapi.configuration.load.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.load.time_partitioning.update! type: type
end
#timestamp(name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil) ⇒ Object
Adds a timestamp field to the schema.
See Schema#timestamp.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1276

def timestamp name, description: nil, mode: :nullable, policy_tags: nil, default_value_expression: nil
  schema.timestamp name, description: description, mode: mode, policy_tags: policy_tags,
                   default_value_expression: default_value_expression
end
#wait_until_done! ⇒ Object
# File 'lib/google/cloud/bigquery/load_job.rb', line 2601

def wait_until_done!
  raise "not implemented in #{self.class}"
end
#write=(new_write) ⇒ Object
Sets the write disposition.
This specifies how to handle data already present in the table. The
default value is append. The following values are supported:
- truncate - BigQuery overwrites the table data.
- append - BigQuery appends the data to the table.
- empty - An error will be returned if the table already contains data.
# File 'lib/google/cloud/bigquery/load_job.rb', line 1785

def write= new_write
  @gapi.configuration.load.update! write_disposition: Convert.write_disposition(new_write)
end
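As with #create=, the friendly symbol is converted to the API's enum string by an internal Convert helper. A sketch of both disposition mappings, with enum values taken from the BigQuery REST reference (the real helpers may accept additional spellings):

```ruby
# Friendly symbols to BigQuery API disposition enum strings.
CREATE_DISPOSITIONS = { needed: "CREATE_IF_NEEDED", never: "CREATE_NEVER" }.freeze
WRITE_DISPOSITIONS  = { append:   "WRITE_APPEND",
                        truncate: "WRITE_TRUNCATE",
                        empty:    "WRITE_EMPTY" }.freeze

def write_disposition new_write
  # Accept both symbols and strings, case-insensitively.
  WRITE_DISPOSITIONS.fetch new_write.to_s.downcase.to_sym do
    raise ArgumentError, "unknown write disposition: #{new_write.inspect}"
  end
end

write_disposition :truncate # => "WRITE_TRUNCATE"
```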