Class: Google::Cloud::AIPlatform::V1::ExplainRequest
- Inherits: Object
- Extended by: Protobuf::MessageExts::ClassMethods
- Includes: Protobuf::MessageExts
- Defined in: proto_docs/google/cloud/aiplatform/v1/prediction_service.rb
Overview
Request message for PredictionService.Explain.
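A minimal sketch of sending an ExplainRequest through the generated PredictionService client (the project, location, endpoint ID, and feature name below are placeholders; the real instance layout comes from the model's instance_schema_uri):

  require "google/cloud/ai_platform/v1"

  # Create a PredictionService client; credentials are resolved from the
  # environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).
  client = Google::Cloud::AIPlatform::V1::PredictionService::Client.new

  # One instance, encoded as a Google::Protobuf::Value struct.
  instance = Google::Protobuf::Value.new(
    struct_value: Google::Protobuf::Struct.new(
      fields: { "feature" => Google::Protobuf::Value.new(number_value: 1.0) }
    )
  )

  request = Google::Cloud::AIPlatform::V1::ExplainRequest.new(
    endpoint:  "projects/my-project/locations/us-central1/endpoints/1234567890",
    instances: [instance]
  )

  response = client.explain request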
Instance Attribute Summary
- #deployed_model_id ⇒ ::String
  If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
- #endpoint ⇒ ::String
  Required. The name of the Endpoint requested to serve the explanation.
- #explanation_spec_override ⇒ ::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride
  If specified, overrides the explanation_spec of the DeployedModel.
- #instances ⇒ ::Array<::Google::Protobuf::Value>
  Required. The instances that are the input to the explanation call.
- #parameters ⇒ ::Google::Protobuf::Value
  The parameters that govern the prediction.
Instance Attribute Details
#deployed_model_id ⇒ ::String
Returns If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
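For example, pinning the request to one DeployedModel bypasses the Endpoint's traffic split (a sketch; the IDs are placeholders):

  request = Google::Cloud::AIPlatform::V1::ExplainRequest.new(
    endpoint:          "projects/my-project/locations/us-central1/endpoints/1234567890",
    deployed_model_id: "9876543210", # serve the explanation only from this DeployedModel
    instances:         [instance]
  )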
#endpoint ⇒ ::String
Returns Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}.
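Assuming the generated Paths helper mixed into the client, the resource name can be assembled rather than hand-built (project, location, and endpoint values are placeholders):

  endpoint = client.endpoint_path project:  "my-project",
                                  location: "us-central1",
                                  endpoint: "1234567890"
  # => "projects/my-project/locations/us-central1/endpoints/1234567890"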
#explanation_spec_override ⇒ ::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride
Returns If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as:
- Explaining the top-5 prediction results as opposed to top-1;
- Increasing the path count or step count of the attribution methods to reduce approximation error;
- Using different baselines for explaining the prediction results.
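For instance, a sketch of requesting explanations for the top 5 predictions instead of the deployed default (the value is illustrative):

  override = Google::Cloud::AIPlatform::V1::ExplanationSpecOverride.new(
    parameters: Google::Cloud::AIPlatform::V1::ExplanationParameters.new(
      top_k: 5 # explain the top 5 predicted classes
    )
  )

  request.explanation_spec_override = override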
#instances ⇒ ::Array<::Google::Protobuf::Value>
Returns Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata instance_schema_uri (see DeployedModel#model and Model#predict_schemata).
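A sketch of building instances for a tabular model (the feature names and the struct layout are assumptions; the authoritative layout is the model's instance_schema_uri):

  require "google/protobuf/struct_pb"

  instances = [
    Google::Protobuf::Value.new(
      struct_value: Google::Protobuf::Struct.new(
        fields: {
          "age"     => Google::Protobuf::Value.new(number_value: 42.0),
          "country" => Google::Protobuf::Value.new(string_value: "DE")
        }
      )
    )
  ]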
#parameters ⇒ ::Google::Protobuf::Value
Returns The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata parameters_schema_uri (see DeployedModel#model and Model#predict_schemata).
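As with instances, parameters is a single Google::Protobuf::Value; a sketch with an invented parameter name (the accepted parameters come from the model's parameters_schema_uri):

  request.parameters = Google::Protobuf::Value.new(
    struct_value: Google::Protobuf::Struct.new(
      fields: { "confidence_threshold" => Google::Protobuf::Value.new(number_value: 0.5) }
    )
  )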