Class: Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/aiplatform_v1beta1/classes.rb,
lib/google/apis/aiplatform_v1beta1/representations.rb

Overview

Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudAiplatformV1beta1ModelContainerSpec

Returns a new instance of GoogleCloudAiplatformV1beta1ModelContainerSpec.



# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15735

def initialize(**args)
   update!(**args)
end
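
Because the constructor simply forwards keyword arguments to #update!, a spec can be built hash-style from the attribute names documented below. A minimal sketch; the image path and routes are placeholder values, not defaults:

require 'google/apis/aiplatform_v1beta1'

spec = Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  image_uri: 'us-docker.pkg.dev/my-project/my-repo/prediction-server:latest', # hypothetical image
  predict_route: '/predict',
  health_route: '/healthz'
)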

Instance Attribute Details

#args ⇒ Array<String>

Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact.

In this field, you can reference environment variables set by Vertex AI and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).

This field corresponds to the args field of the Kubernetes Containers v1 core API. Corresponds to the JSON property args

Returns:

  • (Array<String>)


# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15578

def args
  @args
end
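
As a sketch of the $(VARIABLE_NAME) expansion described above, the example below passes one argument that references an environment variable set by Vertex AI (AIP_STORAGE_URI is used as an assumed example) and one that is escaped so it is not expanded:

require 'google/apis/aiplatform_v1beta1'

spec = Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  args: [
    '--model-dir=$(AIP_STORAGE_URI)',  # expanded by Vertex AI before the container starts
    '--template=$$(NOT_EXPANDED)'      # escaped with $$, so the reference is passed through unexpanded
  ]
)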

#command ⇒ Array<String>

Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD.

In this field, you can reference environment variables set by Vertex AI and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).

This field corresponds to the command field of the Kubernetes Containers v1 core API. Corresponds to the JSON property command

Returns:

  • (Array<String>)


# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15610

def command
  @command
end
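
For instance, a sketch that replaces the image's ENTRYPOINT and supplies extra arguments (the module name and flag are illustrative, not part of the API):

require 'google/apis/aiplatform_v1beta1'

spec = Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  command: ['python3', '-m', 'my_server'],  # overrides ENTRYPOINT; the image's CMD is then ignored
  args: ['--workers', '2']                  # optional extra arguments for the command above
)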

#deployment_timeout ⇒ String

Immutable. Deployment timeout. The limit for the deployment timeout is 2 hours. Corresponds to the JSON property deploymentTimeout

Returns:

  • (String)


# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15615

def deployment_timeout
  @deployment_timeout
end
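
The field is a string; assuming the usual duration encoding used by these APIs (a number of seconds with an "s" suffix), a 30-minute timeout might look like the sketch below:

require 'google/apis/aiplatform_v1beta1'

spec = Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  deployment_timeout: '1800s'  # assumed duration format; must stay within the 2-hour limit
)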

#env ⇒ Array<Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1EnvVar>

Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable VAR_2 to have the value foo bar:

[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]

If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API. Corresponds to the JSON property env



# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15630

def env
  @env
end
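
A sketch of the VAR_1/VAR_2 example above expressed with the generated EnvVar class (assuming its name and value accessors mirror the JSON properties):

require 'google/apis/aiplatform_v1beta1'
v1beta1 = Google::Apis::AiplatformV1beta1

spec = v1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  env: [
    v1beta1::GoogleCloudAiplatformV1beta1EnvVar.new(name: 'VAR_1', value: 'foo'),
    v1beta1::GoogleCloudAiplatformV1beta1EnvVar.new(name: 'VAR_2', value: '$(VAR_1) bar')  # expands to "foo bar"
  ]
)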

#grpc_ports ⇒ Array<Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1Port>

Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, gRPC requests to the container will be disabled. Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API. Corresponds to the JSON property grpcPorts



# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15640

def grpc_ports
  @grpc_ports
end
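
A sketch that exposes a single gRPC port (assuming the Port class exposes container_port, matching the containerPort JSON property):

require 'google/apis/aiplatform_v1beta1'
v1beta1 = Google::Apis::AiplatformV1beta1

spec = v1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  grpc_ports: [
    v1beta1::GoogleCloudAiplatformV1beta1Port.new(container_port: 9000)  # only the first entry receives traffic
  ]
)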

#health_probe ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1Probe

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Corresponds to the JSON property healthProbe



# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15646

def health_probe
  @health_probe
end
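
A sketch of attaching a custom health probe. The probe fields used here (an exec action with a command, period_seconds, timeout_seconds) are assumed to follow the Kubernetes probe model; check the Probe class reference for the exact attribute names:

require 'google/apis/aiplatform_v1beta1'
v1beta1 = Google::Apis::AiplatformV1beta1

probe = v1beta1::GoogleCloudAiplatformV1beta1Probe.new(
  exec: v1beta1::GoogleCloudAiplatformV1beta1ProbeExecAction.new(
    command: ['cat', '/tmp/healthy']   # hypothetical command run inside the container
  ),
  period_seconds: 10,                  # assumed field: how often the probe runs
  timeout_seconds: 5                   # assumed field: per-attempt timeout
)

spec = v1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(health_probe: probe)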

#health_route ⇒ String

Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then Vertex AI intermittently sends a GET request to the /bar path on the port of your container specified by the first value of this ModelContainerSpec's ports field.

If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict. The placeholders in this value are replaced as follows:
  • ENDPOINT: The last segment (following endpoints/) of the Endpoint.name field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)
  • DEPLOYED_MODEL: DeployedModel.id of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
Corresponds to the JSON property healthRoute

Returns:

  • (String)


# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15669

def health_route
  @health_route
end
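
For example, a sketch that points health checks at a custom path on the container's first port (the path and port number are illustrative):

require 'google/apis/aiplatform_v1beta1'
v1beta1 = Google::Apis::AiplatformV1beta1

spec = v1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  health_route: '/healthz',  # Vertex AI sends GET requests here
  ports: [v1beta1::GoogleCloudAiplatformV1beta1Port.new(container_port: 8080)]
)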

#image_uri ⇒ String

Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the Vertex AI Service Agent. The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used. To learn about the requirements for the Docker image itself, see Custom container requirements. You can use the URI of one of Vertex AI's pre-built container images for prediction in this field. Corresponds to the JSON property imageUri

Returns:

  • (String)


# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15685

def image_uri
  @image_uri
end
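
A sketch setting only this required field; the repository path and tag are placeholders:

require 'google/apis/aiplatform_v1beta1'

spec = Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  image_uri: 'us-docker.pkg.dev/my-project/my-repo/prediction-server:latest'  # placeholder image URI
)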

#ports ⇒ Array<Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1Port>

Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:

[ { "containerPort": 8080 } ]

Vertex AI does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API. Corresponds to the JSON property ports



# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15698

def ports
  @ports
end
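
A sketch that overrides the default [ { "containerPort": 8080 } ] with a different HTTP port (the port number is arbitrary):

require 'google/apis/aiplatform_v1beta1'
v1beta1 = Google::Apis::AiplatformV1beta1

spec = v1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  ports: [v1beta1::GoogleCloudAiplatformV1beta1Port.new(container_port: 7080)]  # first entry receives prediction traffic
)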

#predict_route ⇒ String

Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to /foo, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of this ModelContainerSpec's ports field.

If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict. The placeholders in this value are replaced as follows:
  • ENDPOINT: The last segment (following endpoints/) of the Endpoint.name field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the AIP_ENDPOINT_ID environment variable.)
  • DEPLOYED_MODEL: DeployedModel.id of the DeployedModel. (Vertex AI makes this value available to your container code as the AIP_DEPLOYED_MODEL_ID environment variable.)
Corresponds to the JSON property predictRoute

Returns:

  • (String)


# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15721

def predict_route
  @predict_route
end
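
A sketch pairing a custom prediction path with a matching health path (both paths are placeholders; omitting them yields the defaults described above):

require 'google/apis/aiplatform_v1beta1'

spec = Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  predict_route: '/v1/models/my-model:predict',  # POST target for prediction requests
  health_route: '/v1/models/my-model'            # GET target for health checks
)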

#shared_memory_size_mb ⇒ Fixnum

Immutable. The amount of VM memory to reserve as shared memory for the model, in megabytes. Corresponds to the JSON property sharedMemorySizeMb

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15727

def shared_memory_size_mb
  @shared_memory_size_mb
end

#startup_probe ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1Probe

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Corresponds to the JSON property startupProbe



# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15733

def startup_probe
  @startup_probe
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 15740

def update!(**args)
  @args = args[:args] if args.key?(:args)
  @command = args[:command] if args.key?(:command)
  @deployment_timeout = args[:deployment_timeout] if args.key?(:deployment_timeout)
  @env = args[:env] if args.key?(:env)
  @grpc_ports = args[:grpc_ports] if args.key?(:grpc_ports)
  @health_probe = args[:health_probe] if args.key?(:health_probe)
  @health_route = args[:health_route] if args.key?(:health_route)
  @image_uri = args[:image_uri] if args.key?(:image_uri)
  @ports = args[:ports] if args.key?(:ports)
  @predict_route = args[:predict_route] if args.key?(:predict_route)
  @shared_memory_size_mb = args[:shared_memory_size_mb] if args.key?(:shared_memory_size_mb)
  @startup_probe = args[:startup_probe] if args.key?(:startup_probe)
end
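
Since #update! only touches the keys it is given, it can be used to adjust an existing spec in place. A minimal sketch with placeholder values:

require 'google/apis/aiplatform_v1beta1'

spec = Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ModelContainerSpec.new(
  image_uri: 'us-docker.pkg.dev/my-project/my-repo/server:latest'  # placeholder image URI
)
spec.update!(predict_route: '/predict', health_route: '/healthz')  # image_uri is left unchanged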