Class GuardrailLlmPromptSecurity
Guardrail that blocks the conversation when the LLM classifies the input as unsafe.
Implements
IDirectResponseSchema
Namespace: Google.Apis.CustomerEngagementSuite.v1.Data
Assembly: Google.Apis.CustomerEngagementSuite.v1.dll
Syntax
public class GuardrailLlmPromptSecurity : IDirectResponseSchema
Properties
CustomPolicy
Optional. Use a user-defined LlmPolicy to configure the security guardrail.
Declaration
[JsonProperty("customPolicy")]
public virtual GuardrailLlmPolicy CustomPolicy { get; set; }
Property Value
| Type | Description |
|---|---|
| GuardrailLlmPolicy | The user-defined policy to apply. |
DefaultSettings
Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within it will be populated by the server in the response.
Declaration
[JsonProperty("defaultSettings")]
public virtual GuardrailLlmPromptSecurityDefaultSecuritySettings DefaultSettings { get; set; }
Property Value
| Type | Description |
|---|---|
| GuardrailLlmPromptSecurityDefaultSecuritySettings | The predefined default security settings. |
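As the property description above notes, the predefined mode is selected by sending an empty 'default_settings' message. A minimal sketch, assuming the generated Google.Apis.CustomerEngagementSuite.v1 client types are referenced:

```csharp
using Google.Apis.CustomerEngagementSuite.v1.Data;

// Select the system's predefined security settings by including an
// empty DefaultSettings message; the server populates the
// 'default_prompt_template' field in its response.
var guardrail = new GuardrailLlmPromptSecurity
{
    DefaultSettings = new GuardrailLlmPromptSecurityDefaultSecuritySettings()
};
```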
ETag
The ETag of the item.
Declaration
public virtual string ETag { get; set; }
Property Value
| Type | Description |
|---|---|
| string | The ETag of the item. |
FailOpen
Optional. Determines the behavior when the guardrail encounters an LLM error.
- If true: the guardrail is bypassed.
- If false (default): the guardrail triggers/blocks.

Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
Declaration
[JsonProperty("failOpen")]
public virtual bool? FailOpen { get; set; }
Property Value
| Type | Description |
|---|---|
| bool? | Whether to bypass the guardrail on LLM error. |
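The two configuration modes and the FailOpen interaction can be illustrated with a short sketch, again assuming the generated client types; the GuardrailLlmPolicy fields themselves are configured per that class's own documentation:

```csharp
using Google.Apis.CustomerEngagementSuite.v1.Data;

// Custom-policy mode: the top-level FailOpen is ignored here;
// error behavior comes from the policy's own 'fail_open' configuration.
var customGuardrail = new GuardrailLlmPromptSecurity
{
    CustomPolicy = new GuardrailLlmPolicy()  // fields set per GuardrailLlmPolicy docs
};

// Default-settings mode: FailOpen controls LLM-error behavior.
// true = bypass the guardrail; false (default) = trigger/block.
var strictGuardrail = new GuardrailLlmPromptSecurity
{
    DefaultSettings = new GuardrailLlmPromptSecurityDefaultSecuritySettings(),
    FailOpen = false
};
```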