Class: Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Client
- Inherits: Object
- Includes: Paths
- Defined in: lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb
Overview
Client for the BigQueryWrite service.
BigQuery Write API.
The Write API can be used to write data to BigQuery.
For supplementary information about the Write API, see: https://cloud.google.com/bigquery/docs/write-api
Defined Under Namespace
Classes: Configuration
Class Method Summary
-
.configure {|config| ... } ⇒ Client::Configuration
Configure the BigQueryWrite Client class.
Instance Method Summary
-
#append_rows(request, options = nil) {|response, operation| ... } ⇒ ::Enumerable<::Google::Cloud::Bigquery::Storage::V1::AppendRowsResponse>
Appends data to the given stream.
-
#batch_commit_write_streams(request, options = nil) {|response, operation| ... } ⇒ ::Google::Cloud::Bigquery::Storage::V1::BatchCommitWriteStreamsResponse
Atomically commits a group of PENDING streams that belong to the same parent table.
-
#configure {|config| ... } ⇒ Client::Configuration
Configure the BigQueryWrite Client instance.
-
#create_write_stream(request, options = nil) {|response, operation| ... } ⇒ ::Google::Cloud::Bigquery::Storage::V1::WriteStream
Creates a write stream to the given table.
-
#finalize_write_stream(request, options = nil) {|response, operation| ... } ⇒ ::Google::Cloud::Bigquery::Storage::V1::FinalizeWriteStreamResponse
Finalize a write stream so that no new data can be appended to the stream.
-
#flush_rows(request, options = nil) {|response, operation| ... } ⇒ ::Google::Cloud::Bigquery::Storage::V1::FlushRowsResponse
Flushes rows to a BUFFERED stream.
-
#get_write_stream(request, options = nil) {|response, operation| ... } ⇒ ::Google::Cloud::Bigquery::Storage::V1::WriteStream
Gets information about a write stream.
-
#initialize {|config| ... } ⇒ Client
constructor
Create a new BigQueryWrite client object.
-
#universe_domain ⇒ String
The effective universe domain.
Methods included from Paths
#table_path, #write_stream_path
Constructor Details
#initialize {|config| ... } ⇒ Client
Create a new BigQueryWrite client object.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 160

def initialize
  # These require statements are intentionally placed here to initialize
  # the gRPC module only when it's required.
  # See https://github.com/googleapis/toolkit/issues/446
  require "gapic/grpc"
  require "google/cloud/bigquery/storage/v1/storage_services_pb"

  # Create the configuration object
  @config = Configuration.new Client.configure

  # Yield the configuration if needed
  yield @config if block_given?

  # Create credentials
  credentials = @config.credentials
  # Use self-signed JWT if the endpoint is unchanged from default,
  # but only if the default endpoint does not have a region prefix.
  enable_self_signed_jwt = @config.endpoint.nil? ||
                           (@config.endpoint == Configuration::DEFAULT_ENDPOINT &&
                            !@config.endpoint.split(".").first.include?("-"))
  credentials ||= Credentials.default scope: @config.scope,
                                      enable_self_signed_jwt: enable_self_signed_jwt
  if credentials.is_a?(::String) || credentials.is_a?(::Hash)
    credentials = Credentials.new credentials, scope: @config.scope
  end
  @quota_project_id = @config.quota_project
  @quota_project_id ||= credentials.quota_project_id if credentials.respond_to? :quota_project_id

  @big_query_write_stub = ::Gapic::ServiceStub.new(
    ::Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Stub,
    credentials: credentials,
    endpoint: @config.endpoint,
    endpoint_template: DEFAULT_ENDPOINT_TEMPLATE,
    universe_domain: @config.universe_domain,
    channel_args: @config.channel_args,
    interceptors: @config.interceptors,
    channel_pool_config: @config.channel_pool
  )
end
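Example: a minimal construction sketch. The block form lets you override configuration per instance; the timeout value below is illustrative, not a recommended setting.

require "google/cloud/bigquery/storage/v1"

# Create a client object. The client can be reused for multiple calls.
client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Client.new do |config|
  # Illustrative override; any Configuration field may be set here.
  config.timeout = 10.0
end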
Class Method Details
.configure {|config| ... } ⇒ Client::Configuration
Configure the BigQueryWrite Client class.
See Configuration for a description of the configuration fields.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 68

def self.configure
  @configure ||= begin
    namespace = ["Google", "Cloud", "Bigquery", "Storage", "V1"]
    parent_config = while namespace.any?
                      parent_name = namespace.join "::"
                      parent_const = const_get parent_name
                      break parent_const.configure if parent_const.respond_to? :configure
                      namespace.pop
                    end
    default_config = Client::Configuration.new parent_config

    default_config.rpcs.create_write_stream.timeout = 1200.0
    default_config.rpcs.create_write_stream.retry_policy = {
      initial_delay: 10.0, max_delay: 120.0, multiplier: 1.3, retry_codes: [4, 14, 8]
    }

    default_config.rpcs.append_rows.timeout = 86_400.0
    default_config.rpcs.append_rows.retry_policy = {
      initial_delay: 0.1, max_delay: 60.0, multiplier: 1.3, retry_codes: [14]
    }

    default_config.rpcs.get_write_stream.timeout = 600.0
    default_config.rpcs.get_write_stream.retry_policy = {
      initial_delay: 0.1, max_delay: 60.0, multiplier: 1.3, retry_codes: [4, 14, 8]
    }

    default_config.rpcs.finalize_write_stream.timeout = 600.0
    default_config.rpcs.finalize_write_stream.retry_policy = {
      initial_delay: 0.1, max_delay: 60.0, multiplier: 1.3, retry_codes: [4, 14, 8]
    }

    default_config.rpcs.batch_commit_write_streams.timeout = 600.0
    default_config.rpcs.batch_commit_write_streams.retry_policy = {
      initial_delay: 0.1, max_delay: 60.0, multiplier: 1.3, retry_codes: [4, 14, 8]
    }

    default_config.rpcs.flush_rows.timeout = 600.0
    default_config.rpcs.flush_rows.retry_policy = {
      initial_delay: 0.1, max_delay: 60.0, multiplier: 1.3, retry_codes: [4, 14, 8]
    }

    default_config
  end
  yield @configure if block_given?
  @configure
end
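Example: class-level configuration applies to all Client objects created afterwards; a minimal sketch (the timeout value is illustrative).

::Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Client.configure do |config|
  config.timeout = 10.0
end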
Instance Method Details
#append_rows(request, options = nil) {|response, operation| ... } ⇒ ::Enumerable<::Google::Cloud::Bigquery::Storage::V1::AppendRowsResponse>
Appends data to the given stream.
If offset is specified, the offset is checked against the end of stream. The server returns OUT_OF_RANGE in AppendRowsResponse if an attempt is made to append to an offset beyond the current end of the stream, or ALREADY_EXISTS if the user provides an offset that has already been written to. The user can retry with an adjusted offset within the same RPC connection. If offset is not specified, the append happens at the end of the stream.
The response contains an optional offset at which the append happened. No offset information will be returned for appends to a default stream.
Responses are received in the same order in which requests are sent. There will be one response for each successfully inserted request. Responses may optionally embed error information if the originating AppendRequest was not successfully processed.
The specifics of when successfully appended data is made visible to the table are governed by the type of stream:
- For COMMITTED streams (which includes the default stream), data is visible immediately upon successful append.
- For BUFFERED streams, data is made visible via a subsequent FlushRows rpc which advances a cursor to a newer offset in the stream.
- For PENDING streams, data is not made visible until the stream itself is finalized (via the FinalizeWriteStream rpc), and the stream is explicitly committed via the BatchCommitWriteStreams rpc.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 366

def append_rows request, options = nil
  unless request.is_a? ::Enumerable
    raise ::ArgumentError, "request must be an Enumerable" unless request.respond_to? :to_enum
    request = request.to_enum
  end

  request = request.lazy.map do |req|
    ::Gapic::Protobuf.coerce req, to: ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest
  end

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.append_rows.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  options.apply_defaults timeout:      @config.rpcs.append_rows.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.append_rows.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @big_query_write_stub.call_rpc :append_rows, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
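Example: a minimal streaming sketch using ::Gapic::StreamInput from gapic-common as the request enumerable; the empty AppendRowsRequest is a placeholder for a fully populated request.

require "google/cloud/bigquery/storage/v1"

client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Client.new

# Create an input stream; requests pushed into it are sent on the connection.
input = ::Gapic::StreamInput.new

# Start the bidirectional stream; the return value enumerates responses.
output = client.append_rows input

# Send requests on the stream, then close it when done.
input << ::Google::Cloud::Bigquery::Storage::V1::AppendRowsRequest.new
input.close

# Responses arrive in the same order the requests were sent.
output.each do |response|
  p response
end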
#batch_commit_write_streams(request, options = nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::BatchCommitWriteStreamsResponse #batch_commit_write_streams(parent: nil, write_streams: nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::BatchCommitWriteStreamsResponse
Atomically commits a group of PENDING streams that belong to the same parent table.
Streams must be finalized before commit and cannot be committed multiple times. Once a stream is committed, data in the stream becomes available for read operations.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 635

def batch_commit_write_streams request, options = nil
  raise ::ArgumentError, "request must be provided" if request.nil?

  request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::BatchCommitWriteStreamsRequest

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.batch_commit_write_streams.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  header_params = {}
  if request.parent
    header_params["parent"] = request.parent
  end

  request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
  metadata[:"x-goog-request-params"] ||= request_params_header

  options.apply_defaults timeout:      @config.rpcs.batch_commit_write_streams.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.batch_commit_write_streams.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @big_query_write_stub.call_rpc :batch_commit_write_streams, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
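Example: a minimal sketch using the keyword-argument overload; the table path and stream name are placeholders.

require "google/cloud/bigquery/storage/v1"

client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Client.new

response = client.batch_commit_write_streams(
  parent: "projects/my-project/datasets/my_dataset/tables/my_table",  # placeholder
  write_streams: ["projects/my-project/datasets/my_dataset/tables/my_table/streams/my-stream"]  # placeholder
)
p response.commit_time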
#configure {|config| ... } ⇒ Client::Configuration
Configure the BigQueryWrite Client instance.
The configuration is set to the derived mode, meaning that values can be changed, but structural changes (adding new fields, etc.) are not allowed. Structural changes should be made on Client.configure.
See Configuration for a description of the configuration fields.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 130

def configure
  yield @config if block_given?
  @config
end
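Example: a per-instance override sketch, assuming a client built as in the constructor example; the chosen RPC and timeout are illustrative.

client.configure do |config|
  config.rpcs.create_write_stream.timeout = 60.0
end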
#create_write_stream(request, options = nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::WriteStream #create_write_stream(parent: nil, write_stream: nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::WriteStream
Creates a write stream to the given table. Additionally, every table has a special stream named '_default' to which data can be written. This stream doesn't need to be created using CreateWriteStream. It is a stream that can be used simultaneously by any number of clients. Data written to this stream is considered committed as soon as an acknowledgement is received.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 254

def create_write_stream request, options = nil
  raise ::ArgumentError, "request must be provided" if request.nil?

  request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::CreateWriteStreamRequest

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.create_write_stream.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  header_params = {}
  if request.parent
    header_params["parent"] = request.parent
  end

  request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
  metadata[:"x-goog-request-params"] ||= request_params_header

  options.apply_defaults timeout:      @config.rpcs.create_write_stream.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.create_write_stream.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @big_query_write_stub.call_rpc :create_write_stream, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
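Example: a minimal sketch creating a PENDING stream; the parent table path is a placeholder.

require "google/cloud/bigquery/storage/v1"

client = ::Google::Cloud::Bigquery::Storage::V1::BigQueryWrite::Client.new

stream = client.create_write_stream(
  parent: "projects/my-project/datasets/my_dataset/tables/my_table",  # placeholder
  write_stream: { type: :PENDING }
)
p stream.name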
#finalize_write_stream(request, options = nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::FinalizeWriteStreamResponse #finalize_write_stream(name: nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::FinalizeWriteStreamResponse
Finalize a write stream so that no new data can be appended to the stream. Finalize is not supported on the '_default' stream.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 541

def finalize_write_stream request, options = nil
  raise ::ArgumentError, "request must be provided" if request.nil?

  request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::FinalizeWriteStreamRequest

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.finalize_write_stream.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  header_params = {}
  if request.name
    header_params["name"] = request.name
  end

  request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
  metadata[:"x-goog-request-params"] ||= request_params_header

  options.apply_defaults timeout:      @config.rpcs.finalize_write_stream.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.finalize_write_stream.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @big_query_write_stub.call_rpc :finalize_write_stream, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
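Example: a minimal sketch, assuming the client and stream from the create_write_stream example above.

response = client.finalize_write_stream name: stream.name
p response.row_count  # rows appended to the now-finalized stream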
#flush_rows(request, options = nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::FlushRowsResponse #flush_rows(write_stream: nil, offset: nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::FlushRowsResponse
Flushes rows to a BUFFERED stream.
If users are appending rows to a BUFFERED stream, a flush operation is required in order for the rows to become available for reading. A Flush operation flushes up to any previously flushed offset in a BUFFERED stream, to the offset specified in the request.
Flush is not supported on the _default stream, since it is not BUFFERED.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 731

def flush_rows request, options = nil
  raise ::ArgumentError, "request must be provided" if request.nil?

  request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::FlushRowsRequest

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.flush_rows.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  header_params = {}
  if request.write_stream
    header_params["write_stream"] = request.write_stream
  end

  request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
  metadata[:"x-goog-request-params"] ||= request_params_header

  options.apply_defaults timeout:      @config.rpcs.flush_rows.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.flush_rows.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @big_query_write_stub.call_rpc :flush_rows, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
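Example: a minimal sketch, assuming a client and a BUFFERED stream name as above; the offset value is illustrative and is wrapped because the request field is a google.protobuf.Int64Value.

response = client.flush_rows(
  write_stream: stream.name,
  offset: ::Google::Protobuf::Int64Value.new(value: 10)  # illustrative offset
)
p response.offset  # highest offset now visible to readers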
#get_write_stream(request, options = nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::WriteStream #get_write_stream(name: nil, view: nil) ⇒ ::Google::Cloud::Bigquery::Storage::V1::WriteStream
Gets information about a write stream.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 453

def get_write_stream request, options = nil
  raise ::ArgumentError, "request must be provided" if request.nil?

  request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::Bigquery::Storage::V1::GetWriteStreamRequest

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.get_write_stream.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::Bigquery::Storage::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  header_params = {}
  if request.name
    header_params["name"] = request.name
  end

  request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
  metadata[:"x-goog-request-params"] ||= request_params_header

  options.apply_defaults timeout:      @config.rpcs.get_write_stream.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.get_write_stream.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @big_query_write_stub.call_rpc :get_write_stream, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
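Example: a minimal sketch, assuming a client and a stream name as above; :FULL requests the view that includes the stream schema.

ws = client.get_write_stream name: stream.name, view: :FULL
p ws.type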
#universe_domain ⇒ String
The effective universe domain.
# File 'lib/google/cloud/bigquery/storage/v1/big_query_write/client.rb', line 140

def universe_domain
  @big_query_write_stub.universe_domain
end