Class: Google::Cloud::AIPlatform::V1::LlmUtilityService::Client

Inherits:
  Object
Includes:
  Paths
Defined in:
lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb

Overview

Client for the LlmUtilityService service.

Service for LLM related utility functions.

Defined Under Namespace

Classes: Configuration

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Methods included from Paths

#endpoint_path

Constructor Details

#initialize {|config| ... } ⇒ Client

Create a new LlmUtilityService client object.

Examples:


# Create a client using the default configuration
client = ::Google::Cloud::AIPlatform::V1::LlmUtilityService::Client.new

# Create a client using a custom configuration
client = ::Google::Cloud::AIPlatform::V1::LlmUtilityService::Client.new do |config|
  config.timeout = 10.0
end

Yields:

  • (config)

    Configure the LlmUtilityService client.

Yield Parameters:

  • config (Client::Configuration)

# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 126

def initialize
  # These require statements are intentionally placed here to initialize
  # the gRPC module only when it's required.
  # See https://github.com/googleapis/toolkit/issues/446
  require "gapic/grpc"
  require "google/cloud/aiplatform/v1/llm_utility_service_services_pb"

  # Create the configuration object
  @config = Configuration.new Client.configure

  # Yield the configuration if needed
  yield @config if block_given?

  # Create credentials
  credentials = @config.credentials
  # Use self-signed JWT if the endpoint is unchanged from default,
  # but only if the default endpoint does not have a region prefix.
  enable_self_signed_jwt = @config.endpoint.nil? ||
                           (@config.endpoint == Configuration::DEFAULT_ENDPOINT &&
                           !@config.endpoint.split(".").first.include?("-"))
  credentials ||= Credentials.default scope: @config.scope,
                                      enable_self_signed_jwt: enable_self_signed_jwt
  if credentials.is_a?(::String) || credentials.is_a?(::Hash)
    credentials = Credentials.new credentials, scope: @config.scope
  end
  @quota_project_id = @config.quota_project
  @quota_project_id ||= credentials.quota_project_id if credentials.respond_to? :quota_project_id

  @llm_utility_service_stub = ::Gapic::ServiceStub.new(
    ::Google::Cloud::AIPlatform::V1::LlmUtilityService::Stub,
    credentials: credentials,
    endpoint: @config.endpoint,
    endpoint_template: DEFAULT_ENDPOINT_TEMPLATE,
    universe_domain: @config.universe_domain,
    channel_args: @config.channel_args,
    interceptors: @config.interceptors,
    channel_pool_config: @config.channel_pool
  )

  @location_client = Google::Cloud::Location::Locations::Client.new do |config|
    config.credentials = credentials
    config.quota_project = @quota_project_id
    config.endpoint = @llm_utility_service_stub.endpoint
    config.universe_domain = @llm_utility_service_stub.universe_domain
  end

  @iam_policy_client = Google::Iam::V1::IAMPolicy::Client.new do |config|
    config.credentials = credentials
    config.quota_project = @quota_project_id
    config.endpoint = @llm_utility_service_stub.endpoint
    config.universe_domain = @llm_utility_service_stub.universe_domain
  end
end
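The enable_self_signed_jwt check above turns on self-signed JWT auth only when the endpoint is unset, or is the default endpoint without a region prefix (regional endpoints carry a hyphen in their first DNS label). A minimal sketch of that predicate, with hypothetical endpoint strings:

```ruby
# Returns true when self-signed JWT auth may be used: the endpoint is unset,
# or it equals the default and its first DNS label has no region prefix ("-").
def self_signed_jwt_allowed?(endpoint, default_endpoint)
  endpoint.nil? ||
    (endpoint == default_endpoint &&
     !endpoint.split(".").first.include?("-"))
end

self_signed_jwt_allowed?(nil, "aiplatform.googleapis.com")
# => true (no endpoint configured)
self_signed_jwt_allowed?("us-central1-aiplatform.googleapis.com",
                         "us-central1-aiplatform.googleapis.com")
# => false (regional prefix "us-central1-" disables self-signed JWT)
```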

Instance Attribute Details

#iam_policy_client ⇒ Google::Iam::V1::IAMPolicy::Client (readonly)

Get the associated client for mix-in of the IAMPolicy.

Returns:

  • (Google::Iam::V1::IAMPolicy::Client)


# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 192

def iam_policy_client
  @iam_policy_client
end

#location_client ⇒ Google::Cloud::Location::Locations::Client (readonly)

Get the associated client for mix-in of the Locations.

Returns:

  • (Google::Cloud::Location::Locations::Client)


# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 185

def location_client
  @location_client
end

Class Method Details

.configure {|config| ... } ⇒ Client::Configuration

Configure the LlmUtilityService Client class.

See Configuration for a description of the configuration fields.

Examples:


# Modify the configuration for all LlmUtilityService clients
::Google::Cloud::AIPlatform::V1::LlmUtilityService::Client.configure do |config|
  config.timeout = 10.0
end

Yields:

  • (config)

Configure the Client class.

Yield Parameters:

  • config (Client::Configuration)

Returns:

  • (Client::Configuration)


# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 64

def self.configure
  @configure ||= begin
    namespace = ["Google", "Cloud", "AIPlatform", "V1"]
    parent_config = while namespace.any?
                      parent_name = namespace.join "::"
                      parent_const = const_get parent_name
                      break parent_const.configure if parent_const.respond_to? :configure
                      namespace.pop
                    end
    default_config = Client::Configuration.new parent_config

    default_config
  end
  yield @configure if block_given?
  @configure
end
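The while loop above walks the namespace outward (Google::Cloud::AIPlatform::V1, then Google::Cloud::AIPlatform, and so on) until it finds an ancestor module that responds to configure, so defaults cascade down from parent namespaces. A simplified, self-contained sketch of that walk, using a hypothetical Google::Cloud ancestor that returns a plain Hash of defaults:

```ruby
module Google
  module Cloud
    # Hypothetical ancestor exposing shared defaults for this sketch.
    def self.configure
      { timeout: 60.0 }
    end
  end
end

namespace = ["Google", "Cloud", "AIPlatform", "V1"]
parent_config = while namespace.any?
                  parent_name = namespace.join "::"
                  # Missing constants (e.g. AIPlatform here) just pop a level.
                  parent_const = Object.const_get parent_name rescue nil
                  break parent_const.configure if parent_const.respond_to? :configure
                  namespace.pop
                end
parent_config
# => { timeout: 60.0 } (found on Google::Cloud after two pops)
```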

Instance Method Details

#compute_tokens(request, options = nil) ⇒ ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse
#compute_tokens(endpoint: nil, instances: nil, model: nil, contents: nil) ⇒ ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse

Return a list of tokens based on the input text.

Examples:

Basic example

require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::LlmUtilityService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::ComputeTokensRequest.new

# Call the compute_tokens method.
result = client.compute_tokens request

# The returned object is of type Google::Cloud::AIPlatform::V1::ComputeTokensResponse.
p result

Overloads:

  • #compute_tokens(request, options = nil) ⇒ ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse

    Pass arguments to compute_tokens via a request object, either of type ComputeTokensRequest or an equivalent Hash.

    Parameters:

    • request (::Google::Cloud::AIPlatform::V1::ComputeTokensRequest, ::Hash)

      A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.

    • options (::Gapic::CallOptions, ::Hash) (defaults to: nil)

      Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.

  • #compute_tokens(endpoint: nil, instances: nil, model: nil, contents: nil) ⇒ ::Google::Cloud::AIPlatform::V1::ComputeTokensResponse

    Pass arguments to compute_tokens via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).

    Parameters:

    • endpoint (::String) (defaults to: nil)

      Required. The name of the Endpoint requested to get lists of tokens and token ids.

    • instances (::Array<::Google::Protobuf::Value, ::Hash>) (defaults to: nil)

      Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

    • model (::String) (defaults to: nil)

      Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*

    • contents (::Array<::Google::Cloud::AIPlatform::V1::Content, ::Hash>) (defaults to: nil)

      Optional. Input content.

Yields:

  • (response, operation)

    Access the result along with the RPC operation

Yield Parameters:

  • response (::Google::Cloud::AIPlatform::V1::ComputeTokensResponse)

  • operation (::GRPC::ActiveCall::Operation)

Returns:

  • (::Google::Cloud::AIPlatform::V1::ComputeTokensResponse)

Raises:

  • (::Google::Cloud::Error)

    if the RPC is aborted.



# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 362

def compute_tokens request, options = nil
  raise ::ArgumentError, "request must be provided" if request.nil?

  request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::AIPlatform::V1::ComputeTokensRequest

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.compute_tokens.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::AIPlatform::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  header_params = {}
  if request.endpoint
    header_params["endpoint"] = request.endpoint
  end

  request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
  metadata[:"x-goog-request-params"] ||= request_params_header

  options.apply_defaults timeout:      @config.rpcs.compute_tokens.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.compute_tokens.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @llm_utility_service_stub.call_rpc :compute_tokens, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
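The x-goog-request-params header built in the body above is a plain k=v list joined with &, used by the backend for request routing. A standalone sketch of that construction, with a hypothetical endpoint name:

```ruby
# Routing header: each param becomes "key=value", joined with "&".
header_params = { "endpoint" => "projects/my-project/locations/us-central1/endpoints/my-endpoint" }
request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
request_params_header
# => "endpoint=projects/my-project/locations/us-central1/endpoints/my-endpoint"
```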

#configure {|config| ... } ⇒ Client::Configuration

Configure the LlmUtilityService Client instance.

The configuration is set to the derived mode, meaning that values can be changed, but structural changes (adding new fields, etc.) are not allowed. Structural changes should be made on Client.configure.

See Configuration for a description of the configuration fields.

Yields:

  • (config)

Configure the Client instance.

Yield Parameters:

  • config (Client::Configuration)

Returns:

  • (Client::Configuration)


# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 96

def configure
  yield @config if block_given?
  @config
end

#count_tokens(request, options = nil) ⇒ ::Google::Cloud::AIPlatform::V1::CountTokensResponse
#count_tokens(endpoint: nil, model: nil, instances: nil, contents: nil, system_instruction: nil, tools: nil, generation_config: nil) ⇒ ::Google::Cloud::AIPlatform::V1::CountTokensResponse

Perform token counting.

Examples:

Basic example

require "google/cloud/ai_platform/v1"

# Create a client object. The client can be reused for multiple calls.
client = Google::Cloud::AIPlatform::V1::LlmUtilityService::Client.new

# Create a request. To set request fields, pass in keyword arguments.
request = Google::Cloud::AIPlatform::V1::CountTokensRequest.new

# Call the count_tokens method.
result = client.count_tokens request

# The returned object is of type Google::Cloud::AIPlatform::V1::CountTokensResponse.
p result

Overloads:

  • #count_tokens(request, options = nil) ⇒ ::Google::Cloud::AIPlatform::V1::CountTokensResponse

    Pass arguments to count_tokens via a request object, either of type CountTokensRequest or an equivalent Hash.

    Parameters:

    • request (::Google::Cloud::AIPlatform::V1::CountTokensRequest, ::Hash)

      A request object representing the call parameters. Required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash.

    • options (::Gapic::CallOptions, ::Hash) (defaults to: nil)

      Overrides the default settings for this call, e.g. timeout, retries, etc. Optional.

  • #count_tokens(endpoint: nil, model: nil, instances: nil, contents: nil, system_instruction: nil, tools: nil, generation_config: nil) ⇒ ::Google::Cloud::AIPlatform::V1::CountTokensResponse

    Pass arguments to count_tokens via keyword arguments. Note that at least one keyword argument is required. To specify no parameters, or to keep all the default parameter values, pass an empty Hash as a request object (see above).

    Parameters:

    • endpoint (::String) (defaults to: nil)

      Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

    • model (::String) (defaults to: nil)

      Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*

    • instances (::Array<::Google::Protobuf::Value, ::Hash>) (defaults to: nil)

      Optional. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.

    • contents (::Array<::Google::Cloud::AIPlatform::V1::Content, ::Hash>) (defaults to: nil)

      Optional. Input content.

    • system_instruction (::Google::Cloud::AIPlatform::V1::Content, ::Hash) (defaults to: nil)

      Optional. The user provided system instructions for the model. Note: only text should be used in parts and content in each part will be in a separate paragraph.

    • tools (::Array<::Google::Cloud::AIPlatform::V1::Tool, ::Hash>) (defaults to: nil)

      Optional. A list of Tools the model may use to generate the next response.

      A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.

    • generation_config (::Google::Cloud::AIPlatform::V1::GenerationConfig, ::Hash) (defaults to: nil)

      Optional. Generation config that the model will use to generate the response.

Yields:

  • (response, operation)

    Access the result along with the RPC operation

Yield Parameters:

  • response (::Google::Cloud::AIPlatform::V1::CountTokensResponse)

  • operation (::GRPC::ActiveCall::Operation)

Returns:

  • (::Google::Cloud::AIPlatform::V1::CountTokensResponse)

Raises:

  • (::Google::Cloud::Error)

    if the RPC is aborted.



# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 265

def count_tokens request, options = nil
  raise ::ArgumentError, "request must be provided" if request.nil?

  request = ::Gapic::Protobuf.coerce request, to: ::Google::Cloud::AIPlatform::V1::CountTokensRequest

  # Converts hash and nil to an options object
  options = ::Gapic::CallOptions.new(**options.to_h) if options.respond_to? :to_h

  # Customize the options with defaults
  metadata = @config.rpcs.count_tokens.metadata.to_h

  # Set x-goog-api-client, x-goog-user-project and x-goog-api-version headers
  metadata[:"x-goog-api-client"] ||= ::Gapic::Headers.x_goog_api_client \
    lib_name: @config.lib_name, lib_version: @config.lib_version,
    gapic_version: ::Google::Cloud::AIPlatform::V1::VERSION
  metadata[:"x-goog-api-version"] = API_VERSION unless API_VERSION.empty?
  metadata[:"x-goog-user-project"] = @quota_project_id if @quota_project_id

  header_params = {}
  if request.endpoint
    header_params["endpoint"] = request.endpoint
  end

  request_params_header = header_params.map { |k, v| "#{k}=#{v}" }.join("&")
  metadata[:"x-goog-request-params"] ||= request_params_header

  options.apply_defaults timeout:      @config.rpcs.count_tokens.timeout,
                         metadata:     metadata,
                         retry_policy: @config.rpcs.count_tokens.retry_policy

  options.apply_defaults timeout:      @config.timeout,
                         metadata:     @config.metadata,
                         retry_policy: @config.retry_policy

  @llm_utility_service_stub.call_rpc :count_tokens, request, options: options do |response, operation|
    yield response, operation if block_given?
    return response
  end
rescue ::GRPC::BadStatus => e
  raise ::Google::Cloud::Error.from_error(e)
end
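The two apply_defaults calls above layer configuration: values set explicitly on the call win, then the per-RPC count_tokens config fills gaps, then the client-wide config. A rough illustration of that precedence with plain hashes and hypothetical timeout values (the real Gapic::CallOptions logic is more involved):

```ruby
call_options  = { timeout: nil }    # nothing set explicitly on this call
rpc_defaults  = { timeout: 30.0 }   # hypothetical per-RPC default
client_config = { timeout: 60.0 }   # hypothetical client-wide default

# First non-nil value wins, mirroring how apply_defaults only fills gaps.
effective_timeout = call_options[:timeout] || rpc_defaults[:timeout] || client_config[:timeout]
effective_timeout
# => 30.0 (the per-RPC default masks the client-wide one)
```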

#universe_domain ⇒ String

The effective universe domain

Returns:

  • (String)


# File 'lib/google/cloud/ai_platform/v1/llm_utility_service/client.rb', line 106

def universe_domain
  @llm_utility_service_stub.universe_domain
end