Class PredictionServiceClient
PredictionService client wrapper, for convenient use.
Namespace: Google.Cloud.AutoML.V1
Assembly: Google.Cloud.AutoML.V1.dll
Syntax
public abstract class PredictionServiceClient
Remarks
AutoML Prediction API.
On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
Properties
BatchPredictOperationsClient
The long-running operations client for BatchPredict.
Declaration
public virtual OperationsClient BatchPredictOperationsClient { get; }
Property Value
Type | Description |
---|---|
OperationsClient |
DefaultEndpoint
The default endpoint for the PredictionService service, which is a host of "automl.googleapis.com" and a port of 443.
Declaration
public static string DefaultEndpoint { get; }
Property Value
Type | Description |
---|---|
System.String |
DefaultScopes
The default PredictionService scopes.
Declaration
public static IReadOnlyList<string> DefaultScopes { get; }
Property Value
Type | Description |
---|---|
System.Collections.Generic.IReadOnlyList<System.String> |
Remarks
The default PredictionService scopes are:
- https://www.googleapis.com/auth/cloud-platform
GrpcClient
The underlying gRPC PredictionService client.
Declaration
public virtual PredictionService.PredictionServiceClient GrpcClient { get; }
Property Value
Type | Description |
---|---|
PredictionService.PredictionServiceClient |
Methods
BatchPredict(BatchPredictRequest, CallSettings)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Operation<BatchPredictResult, OperationMetadata> BatchPredict(BatchPredictRequest request, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
BatchPredictRequest | request | The request object containing all of the parameters for the API call. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
Operation<BatchPredictResult, OperationMetadata> | The RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
BatchPredictRequest request = new BatchPredictRequest
{
ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
InputConfig = new BatchPredictInputConfig(),
OutputConfig = new BatchPredictOutputConfig(),
Params = { { "", "" }, },
};
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = predictionServiceClient.BatchPredict(request);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = response.PollUntilCompleted();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = predictionServiceClient.PollOnceBatchPredict(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
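The empty `BatchPredictInputConfig` and `BatchPredictOutputConfig` in the generated sample above won't produce a useful request; in practice both point at Cloud Storage. A hedged sketch (the bucket names and paths are hypothetical placeholders, and the `GcsSource`/`GcsDestination` shapes assume the standard Google.Cloud.AutoML.V1 message types):

```csharp
// Sketch only: bucket names and paths are hypothetical placeholders.
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig
{
    GcsSource = new GcsSource
    {
        // One or more files describing the items to run prediction on.
        InputUris = { "gs://my-bucket/batch-input.csv" },
    },
};
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig
{
    GcsDestination = new GcsDestination
    {
        // Prediction results are written under this prefix.
        OutputUriPrefix = "gs://my-bucket/batch-output/",
    },
};
```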
BatchPredict(ModelName, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<String, String>, CallSettings)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Operation<BatchPredictResult, OperationMetadata> BatchPredict(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
ModelName | name | Required. Name of the model requested to serve the batch prediction. |
BatchPredictInputConfig | inputConfig | Required. The input configuration for batch prediction. |
BatchPredictOutputConfig | outputConfig | Required. The configuration specifying where output predictions should be written. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters for the predictions; any string value must be no longer than 25,000 characters. The supported parameters vary by problem type (for example, AutoML Natural Language Classification, AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Video Intelligence Object Tracking); see [BatchPredictRequest][google.cloud.automl.v1.BatchPredictRequest] for details. WARNING: for some AutoML Video Intelligence classification types, model evaluation is not performed; the prediction quality depends on the training data, but no metrics are provided to describe it. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
Operation<BatchPredictResult, OperationMetadata> | The RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = predictionServiceClient.BatchPredict(name, inputConfig, outputConfig, @params);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = response.PollUntilCompleted();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = predictionServiceClient.PollOnceBatchPredict(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
BatchPredict(String, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<String, String>, CallSettings)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Operation<BatchPredictResult, OperationMetadata> BatchPredict(string name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
System.String | name | Required. Name of the model requested to serve the batch prediction. |
BatchPredictInputConfig | inputConfig | Required. The input configuration for batch prediction. |
BatchPredictOutputConfig | outputConfig | Required. The configuration specifying where output predictions should be written. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters for the predictions; any string value must be no longer than 25,000 characters. The supported parameters vary by problem type (for example, AutoML Natural Language Classification, AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Video Intelligence Object Tracking); see [BatchPredictRequest][google.cloud.automl.v1.BatchPredictRequest] for details. WARNING: for some AutoML Video Intelligence classification types, model evaluation is not performed; the prediction quality depends on the training data, but no metrics are provided to describe it. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
Operation<BatchPredictResult, OperationMetadata> | The RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = predictionServiceClient.BatchPredict(name, inputConfig, outputConfig, @params);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = response.PollUntilCompleted();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = predictionServiceClient.PollOnceBatchPredict(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
BatchPredictAsync(BatchPredictRequest, CallSettings)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(BatchPredictRequest request, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
BatchPredictRequest | request | The request object containing all of the parameters for the API call. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<Operation<BatchPredictResult, OperationMetadata>> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
BatchPredictRequest request = new BatchPredictRequest
{
ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
InputConfig = new BatchPredictInputConfig(),
OutputConfig = new BatchPredictOutputConfig(),
Params = { { "", "" }, },
};
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(request);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
BatchPredictAsync(BatchPredictRequest, CancellationToken)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(BatchPredictRequest request, CancellationToken cancellationToken)
Parameters
Type | Name | Description |
---|---|---|
BatchPredictRequest | request | The request object containing all of the parameters for the API call. |
System.Threading.CancellationToken | cancellationToken | A System.Threading.CancellationToken to use for this RPC. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<Operation<BatchPredictResult, OperationMetadata>> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
BatchPredictRequest request = new BatchPredictRequest
{
ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
InputConfig = new BatchPredictInputConfig(),
OutputConfig = new BatchPredictOutputConfig(),
Params = { { "", "" }, },
};
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(request);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
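`PollUntilCompletedAsync` polls with default settings; for long batch jobs you may want to bound the overall wait and the polling interval. A sketch, assuming the `Google.Api.Gax.PollSettings` type accepted by the polling methods (the two-hour timeout and 30-second delay are arbitrary example values):

```csharp
using Google.Api.Gax;

// Sketch: poll for up to two hours, checking every 30 seconds.
PollSettings pollSettings = new PollSettings(
    Expiration.FromTimeout(TimeSpan.FromHours(2)),
    TimeSpan.FromSeconds(30));
Operation<BatchPredictResult, OperationMetadata> completed =
    await response.PollUntilCompletedAsync(pollSettings);
```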
BatchPredictAsync(ModelName, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<String, String>, CallSettings)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
ModelName | name | Required. Name of the model requested to serve the batch prediction. |
BatchPredictInputConfig | inputConfig | Required. The input configuration for batch prediction. |
BatchPredictOutputConfig | outputConfig | Required. The configuration specifying where output predictions should be written. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters for the predictions; any string value must be no longer than 25,000 characters. The supported parameters vary by problem type (for example, AutoML Natural Language Classification, AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Video Intelligence Object Tracking); see [BatchPredictRequest][google.cloud.automl.v1.BatchPredictRequest] for details. WARNING: for some AutoML Video Intelligence classification types, model evaluation is not performed; the prediction quality depends on the training data, but no metrics are provided to describe it. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<Operation<BatchPredictResult, OperationMetadata>> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
BatchPredictAsync(ModelName, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<String, String>, CancellationToken)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> params, CancellationToken cancellationToken)
Parameters
Type | Name | Description |
---|---|---|
ModelName | name | Required. Name of the model requested to serve the batch prediction. |
BatchPredictInputConfig | inputConfig | Required. The input configuration for batch prediction. |
BatchPredictOutputConfig | outputConfig | Required. The configuration specifying where output predictions should be written. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters for the predictions; any string value must be no longer than 25,000 characters. The supported parameters vary by problem type (for example, AutoML Natural Language Classification, AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Video Intelligence Object Tracking); see [BatchPredictRequest][google.cloud.automl.v1.BatchPredictRequest] for details. WARNING: for some AutoML Video Intelligence classification types, model evaluation is not performed; the prediction quality depends on the training data, but no metrics are provided to describe it. |
System.Threading.CancellationToken | cancellationToken | A System.Threading.CancellationToken to use for this RPC. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<Operation<BatchPredictResult, OperationMetadata>> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
BatchPredictAsync(String, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<String, String>, CallSettings)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(string name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
System.String | name | Required. Name of the model requested to serve the batch prediction. |
BatchPredictInputConfig | inputConfig | Required. The input configuration for batch prediction. |
BatchPredictOutputConfig | outputConfig | Required. The configuration specifying where output predictions should be written. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters for the predictions; any string value must be no longer than 25,000 characters. The supported parameters vary by problem type (for example, AutoML Natural Language Classification, AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Video Intelligence Object Tracking); see [BatchPredictRequest][google.cloud.automl.v1.BatchPredictRequest] for details. WARNING: for some AutoML Video Intelligence classification types, model evaluation is not performed; the prediction quality depends on the training data, but no metrics are provided to describe it. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<Operation<BatchPredictResult, OperationMetadata>> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
BatchPredictAsync(String, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<String, String>, CancellationToken)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
Declaration
public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(string name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> params, CancellationToken cancellationToken)
Parameters
Type | Name | Description |
---|---|---|
System.String | name | Required. Name of the model requested to serve the batch prediction. |
BatchPredictInputConfig | inputConfig | Required. The input configuration for batch prediction. |
BatchPredictOutputConfig | outputConfig | Required. The configuration specifying where output predictions should be written. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters for the predictions; any string value must be no longer than 25,000 characters. The supported parameters vary by problem type (for example, AutoML Natural Language Classification, AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Video Intelligence Object Tracking); see [BatchPredictRequest][google.cloud.automl.v1.BatchPredictRequest] for details. WARNING: for some AutoML Video Intelligence classification types, model evaluation is not performed; the prediction quality depends on the training data, but no metrics are provided to describe it. |
System.Threading.CancellationToken | cancellationToken | A System.Threading.CancellationToken to use for this RPC. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<Operation<BatchPredictResult, OperationMetadata>> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);
// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;
// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
// If it has completed, then access the result
BatchPredictResult retrievedResult = retrievedResponse.Result;
}
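The empty `@params` entry in the sample above is only a placeholder. As a hedged sketch, for an AutoML Vision Classification model a batch request might set a score threshold (the key `score_threshold` is a documented batch-prediction parameter; the value 0.75 is an arbitrary example):

```csharp
// Sketch: pass a score threshold so only sufficiently confident
// predictions are written to the output. The value is an example only.
IDictionary<string, string> @params = new Dictionary<string, string>
{
    { "score_threshold", "0.75" },
};
```

Check the parameter names supported by your model's problem type before relying on any specific key.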
Create()
Synchronously creates a PredictionServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use PredictionServiceClientBuilder.
Declaration
public static PredictionServiceClient Create()
Returns
Type | Description |
---|---|
PredictionServiceClient | The created PredictionServiceClient. |
CreateAsync(CancellationToken)
Asynchronously creates a PredictionServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use PredictionServiceClientBuilder.
Declaration
public static Task<PredictionServiceClient> CreateAsync(CancellationToken cancellationToken = default(CancellationToken))
Parameters
Type | Name | Description |
---|---|---|
System.Threading.CancellationToken | cancellationToken | The System.Threading.CancellationToken to use while creating the client. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<PredictionServiceClient> | The task representing the created PredictionServiceClient. |
PollOnceBatchPredict(String, CallSettings)
Poll an operation once, using an operationName from a previous invocation of BatchPredict.
Declaration
public virtual Operation<BatchPredictResult, OperationMetadata> PollOnceBatchPredict(string operationName, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
System.String | operationName | The name of a previously invoked operation. Must not be null or empty. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
Operation<BatchPredictResult, OperationMetadata> | The result of polling the operation. |
PollOnceBatchPredictAsync(String, CallSettings)
Asynchronously poll an operation once, using an operationName from a previous invocation of BatchPredict.
Declaration
public virtual Task<Operation<BatchPredictResult, OperationMetadata>> PollOnceBatchPredictAsync(string operationName, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
System.String | operationName | The name of a previously invoked operation. Must not be null or empty. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<Operation<BatchPredictResult, OperationMetadata>> | A task representing the result of polling the operation. |
Predict(ModelName, ExamplePayload, IDictionary<String, String>, CallSettings)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual PredictResponse Predict(ModelName name, ExamplePayload payload, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
ModelName | name | Required. Name of the model requested to serve the prediction. |
ExamplePayload | payload | Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters, any string must be up to 25000 characters long. Supported scenarios: AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Tables. For AutoML Tables, [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects; the default is false. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
PredictResponse | The RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = predictionServiceClient.Predict(name, payload, @params);
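The `ExamplePayload` in the sample above is empty. As a hedged sketch, for an AutoML Natural Language Classification model the payload would typically carry a `TextSnippet` (the content string below is an invented example):

```csharp
// Sketch: build a text payload for a Natural Language Classification model.
// The payload type must match the problem type the model was trained for.
ExamplePayload payload = new ExamplePayload
{
    TextSnippet = new TextSnippet
    {
        Content = "A sample sentence to classify.", // example content
        MimeType = "text/plain",
    },
};
```

For Vision models, an `Image` would be set instead of a `TextSnippet`.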
Predict(PredictRequest, CallSettings)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual PredictResponse Predict(PredictRequest request, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
PredictRequest | request | The request object containing all of the parameters for the API call. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
PredictResponse | The RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
Payload = new ExamplePayload(),
Params = { { "", "" }, },
};
// Make the request
PredictResponse response = predictionServiceClient.Predict(request);
Predict(String, ExamplePayload, IDictionary<String, String>, CallSettings)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual PredictResponse Predict(string name, ExamplePayload payload, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
System.String | name | Required. Name of the model requested to serve the prediction. |
ExamplePayload | payload | Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters, any string must be up to 25000 characters long. Supported scenarios: AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Tables. For AutoML Tables, [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects; the default is false. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
PredictResponse | The RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = predictionServiceClient.Predict(name, payload, @params);
PredictAsync(ModelName, ExamplePayload, IDictionary<String, String>, CallSettings)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual Task<PredictResponse> PredictAsync(ModelName name, ExamplePayload payload, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
ModelName | name | Required. Name of the model requested to serve the prediction. |
ExamplePayload | payload | Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters, any string must be up to 25000 characters long. Supported scenarios: AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Tables. For AutoML Tables, [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects; the default is false. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<PredictResponse> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);
PredictAsync(ModelName, ExamplePayload, IDictionary<String, String>, CancellationToken)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual Task<PredictResponse> PredictAsync(ModelName name, ExamplePayload payload, IDictionary<string, string> params, CancellationToken cancellationToken)
Parameters
Type | Name | Description |
---|---|---|
ModelName | name | Required. Name of the model requested to serve the prediction. |
ExamplePayload | payload | Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters, any string must be up to 25000 characters long. Supported scenarios: AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Tables. For AutoML Tables, [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects; the default is false. |
System.Threading.CancellationToken | cancellationToken | A System.Threading.CancellationToken to use for this RPC. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<PredictResponse> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);
PredictAsync(PredictRequest, CallSettings)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual Task<PredictResponse> PredictAsync(PredictRequest request, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
PredictRequest | request | The request object containing all of the parameters for the API call. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<PredictResponse> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
Payload = new ExamplePayload(),
Params = { { "", "" }, },
};
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(request);
PredictAsync(PredictRequest, CancellationToken)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual Task<PredictResponse> PredictAsync(PredictRequest request, CancellationToken cancellationToken)
Parameters
Type | Name | Description |
---|---|---|
PredictRequest | request | The request object containing all of the parameters for the API call. |
System.Threading.CancellationToken | cancellationToken | A System.Threading.CancellationToken to use for this RPC. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<PredictResponse> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
Payload = new ExamplePayload(),
Params = { { "", "" }, },
};
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(request);
PredictAsync(String, ExamplePayload, IDictionary<String, String>, CallSettings)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual Task<PredictResponse> PredictAsync(string name, ExamplePayload payload, IDictionary<string, string> params, CallSettings callSettings = null)
Parameters
Type | Name | Description |
---|---|---|
System.String | name | Required. Name of the model requested to serve the prediction. |
ExamplePayload | payload | Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters, any string must be up to 25000 characters long. Supported scenarios: AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Tables. For AutoML Tables, [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects; the default is false. |
CallSettings | callSettings | If not null, applies overrides to this RPC call. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<PredictResponse> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);
PredictAsync(String, ExamplePayload, IDictionary<String, String>, CancellationToken)
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:
AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with a size up to 20MB.
AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with a size up to 2MB.
AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.
AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Declaration
public virtual Task<PredictResponse> PredictAsync(string name, ExamplePayload payload, IDictionary<string, string> params, CancellationToken cancellationToken)
Parameters
Type | Name | Description |
---|---|---|
System.String | name | Required. Name of the model requested to serve the prediction. |
ExamplePayload | payload | Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
System.Collections.Generic.IDictionary<System.String, System.String> | params | Additional domain-specific parameters, any string must be up to 25000 characters long. Supported scenarios: AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Tables. For AutoML Tables, [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects; the default is false. |
System.Threading.CancellationToken | cancellationToken | A System.Threading.CancellationToken to use for this RPC. |
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task<PredictResponse> | A Task containing the RPC response. |
Sample code
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);
ShutdownDefaultChannelsAsync()
Shuts down any channels automatically created by Create() and CreateAsync(CancellationToken). Channels which weren't automatically created are not affected.
Declaration
public static Task ShutdownDefaultChannelsAsync()
Returns
Type | Description |
---|---|
System.Threading.Tasks.Task | A task representing the asynchronous shutdown operation. |
Remarks
After calling this method, further calls to Create() and CreateAsync(CancellationToken) will create new channels, which could in turn be shut down by another call to this method.
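As a minimal sketch of typical usage (assuming a console application whose clients were created via the default channel):

```csharp
// Sketch: use a default-channel client, then release its channel at shutdown.
PredictionServiceClient client = PredictionServiceClient.Create();
// ... make prediction calls with the client ...
// Before the process exits, shut down channels created by Create()/CreateAsync().
// Clients built with custom channels via PredictionServiceClientBuilder are unaffected.
await PredictionServiceClient.ShutdownDefaultChannelsAsync();
```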