Class: Google::Cloud::Bigquery::ExtractJob::Updater

Inherits:
Google::Cloud::Bigquery::ExtractJob
Defined in:
lib/google/cloud/bigquery/extract_job.rb

Overview

Yielded to a block to accumulate changes for an API request.
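The "yield an updater to a block" pattern can be illustrated with a minimal, self-contained sketch (a toy stand-in, not the real class — the actual Updater wraps the job's GAPI configuration object rather than a plain hash):

```ruby
# Toy stand-in for the Updater: each setter records a pending change
# instead of issuing an API call.
class ToyUpdater
  attr_reader :updates

  def initialize
    @updates = {}
  end

  def compression=(value)
    @updates[:compression] = value
  end

  def delimiter=(value)
    @updates[:field_delimiter] = value
  end
end

# The caller configures the job through the block; the accumulated
# changes are then sent in a single API request.
def extract_job
  updater = ToyUpdater.new
  yield updater
  updater.updates
end

settings = extract_job do |j|
  j.compression = "GZIP"
  j.delimiter = "\t"
end
# settings => { compression: "GZIP", field_delimiter: "\t" }
```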


Methods inherited from Google::Cloud::Bigquery::ExtractJob

#avro?, #compression?, #csv?, #delimiter, #destinations, #destinations_counts, #destinations_file_counts, #json?, #ml_tf_saved_model?, #ml_xgboost_booster?, #model?, #print_header?, #source, #table?, #use_avro_logical_types?

Methods inherited from Job

#configuration, #created_at, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #num_child_jobs, #parent_job_id, #pending?, #project_id, #reservation_usage, #running?, #script_statistics, #started_at, #state, #statistics, #status, #transaction_id, #user_email

Instance Method Details

#cancel ⇒ Object



# File 'lib/google/cloud/bigquery/extract_job.rb', line 435

def cancel
  raise "not implemented in #{self.class}"
end

#compression=(value) ⇒ Object

Sets the compression type. Not applicable when extracting models.

Parameters:

  • value (String)

    The compression type to use for exported files. Possible values include GZIP and NONE. The default value is NONE.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 342

def compression= value
  @gapi.configuration.extract.compression = value
end

#delimiter=(value) ⇒ Object

Sets the field delimiter. Not applicable when extracting models.

Parameters:

  • value (String)

    Delimiter to use between fields in the exported data. Default is ,.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 353

def delimiter= value
  @gapi.configuration.extract.field_delimiter = value
end

#format=(new_format) ⇒ Object

Sets the destination file format. The default value for tables is csv. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is ml_tf_saved_model.

Supported values for tables:

  • csv - CSV
  • json - Newline-delimited JSON
  • avro - Avro

Supported values for models:

  • ml_tf_saved_model - TensorFlow SavedModel
  • ml_xgboost_booster - XGBoost Booster

Parameters:

  • new_format (String)

    The new source format.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 377

def format= new_format
  @gapi.configuration.extract.update! destination_format: Convert.source_format(new_format)
end
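Internally the format string is normalized to the BigQuery API's destinationFormat enum value. A hedged sketch of the kind of mapping the gem's Convert.source_format helper performs (the enum names follow the BigQuery REST API; the real helper lives in the google-cloud-bigquery gem and may differ in detail):

```ruby
# Hypothetical mapping from user-facing format names to the API's
# destinationFormat enum values.
FORMATS = {
  "csv"                 => "CSV",
  "json"                => "NEWLINE_DELIMITED_JSON",
  "avro"                => "AVRO",
  "ml_tf_saved_model"   => "ML_TF_SAVED_MODEL",
  "ml_xgboost_booster"  => "ML_XGBOOST_BOOSTER"
}.freeze

def destination_format(new_format)
  # Accept any casing; reject unknown formats loudly.
  FORMATS.fetch(new_format.to_s.downcase) do
    raise ArgumentError, "unsupported format: #{new_format}"
  end
end
```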

#header=(value) ⇒ Object

Sets whether to print a header row in the exported file. Not applicable when extracting models.

Parameters:

  • value (Boolean)

    Whether to print out a header row in the results. Default is true.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 389

def header= value
  @gapi.configuration.extract.print_header = value
end

#labels=(value) ⇒ Object

Sets the labels to use for the job.

Parameters:

  • value (Hash)

    A hash of user-provided labels associated with the job. You can use these to organize and group your jobs.

    The labels applied to a resource must meet the following requirements:

    • Each resource can have multiple labels, up to a maximum of 64.
    • Each label must be a key-value pair.
    • Keys have a minimum length of 1 character and a maximum length of 63 characters, and cannot be empty. Values can be empty, and have a maximum length of 63 characters.
    • Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. All characters must use UTF-8 encoding, and international characters are allowed.
    • The key portion of a label must be unique. However, you can use the same key with multiple resources.
    • Keys must start with a lowercase letter or international character.


# File 'lib/google/cloud/bigquery/extract_job.rb', line 415

def labels= value
  @gapi.configuration.update! labels: value
end
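The label requirements above can be checked client-side before assigning the hash. A hedged sketch (the helper name and regexes are illustrative, not part of the library; `\p{Ll}` approximates "lowercase or international character"):

```ruby
# Keys: 1-63 chars, must start with a lowercase letter; lowercase
# letters, digits, underscores, and dashes only.
LABEL_KEY   = /\A\p{Ll}[\p{Ll}0-9_-]{0,62}\z/
# Values: may be empty, up to 63 chars from the same character set.
LABEL_VALUE = /\A[\p{Ll}0-9_-]{0,63}\z/

def valid_labels?(labels)
  return false unless labels.is_a?(Hash) && labels.size <= 64
  labels.all? do |key, value|
    key.to_s.match?(LABEL_KEY) && value.to_s.match?(LABEL_VALUE)
  end
end

valid_labels?("env" => "prod", "team" => "data-eng")  # => true
valid_labels?("Env" => "prod")                        # => false (uppercase key)
valid_labels?("" => "x")                              # => false (empty key)
```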

#location=(value) ⇒ Object

Sets the geographic location where the job should run. Required except for US and EU.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

destination = "gs://my-bucket/file-name.csv"
extract_job = table.extract_job destination do |j|
  j.location = "EU"
end

extract_job.wait_until_done!
extract_job.done? #=> true

Parameters:

  • value (String)

    A geographic location, such as "US", "EU" or "asia-northeast1". Required except for US and EU.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 325

def location= value
  @gapi.job_reference.location = value
  return unless value.nil?

  # Treat assigning a value of nil the same as unsetting the value.
  unset = @gapi.job_reference.instance_variables.include? :@location
  @gapi.job_reference.remove_instance_variable :@location if unset
end

#reload! ⇒ Object Also known as: refresh!



# File 'lib/google/cloud/bigquery/extract_job.rb', line 443

def reload!
  raise "not implemented in #{self.class}"
end

#rerun! ⇒ Object



# File 'lib/google/cloud/bigquery/extract_job.rb', line 439

def rerun!
  raise "not implemented in #{self.class}"
end

#use_avro_logical_types=(value) ⇒ Object

Sets whether to extract applicable column types (such as TIMESTAMP) to their corresponding AVRO logical types (timestamp-micros), instead of only using their raw types (avro-long).

Only used when #format is set to "AVRO" (#avro?).

Parameters:

  • value (Boolean)

    Whether applicable column types will use their corresponding AVRO logical types.



# File 'lib/google/cloud/bigquery/extract_job.rb', line 431

def use_avro_logical_types= value
  @gapi.configuration.extract.use_avro_logical_types = value
end

#wait_until_done! ⇒ Object



# File 'lib/google/cloud/bigquery/extract_job.rb', line 448

def wait_until_done!
  raise "not implemented in #{self.class}"
end