v2

google.cloud.dialogflow.v2

Source:

Members

(static) ApiVersion :number

API version for the agent.

Properties:
Name Type Description
API_VERSION_UNSPECIFIED number

Not specified.

API_VERSION_V1 number

Legacy V1 API.

API_VERSION_V2 number

V2 API.

API_VERSION_V2_BETA_1 number

V2beta1 API.

Source:

(static, constant) AudioEncoding :number

Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.

Properties:
Name Type Description
AUDIO_ENCODING_UNSPECIFIED number

Not specified.

AUDIO_ENCODING_LINEAR_16 number

Uncompressed 16-bit signed little-endian samples (Linear PCM).

AUDIO_ENCODING_FLAC number

FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples, however, not all fields in STREAMINFO are supported.

AUDIO_ENCODING_MULAW number

8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.

AUDIO_ENCODING_AMR number

Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000.

AUDIO_ENCODING_AMR_WB number

Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000.

AUDIO_ENCODING_OGG_OPUS number

Opus encoded audio frames in Ogg container (OggOpus). sample_rate_hertz must be 16000.

AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE number

Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sample_rate_hertz must be 16000.

Source:
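As a sketch, an input audio config selecting one of these encodings might look like the following (field names follow the Node.js client's JSON style; the encoding, sample rate, and language here are illustrative, not the only valid combination):

```javascript
// Illustrative input audio config. AUDIO_ENCODING_LINEAR_16 is uncompressed
// 16-bit PCM; FLAC would use roughly half the bandwidth without loss.
const audioConfig = {
  audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
  sampleRateHertz: 16000, // AMR requires 8000; AMR_WB, OGG_OPUS, and Speex require 16000
  languageCode: 'en-US',
};

// The config is carried inside the query input of a detect-intent request.
const queryInput = { audioConfig };
```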

(static) AutoExpansionMode :number

Represents different entity type expansion modes. Automated expansion allows an agent to recognize values that have not been explicitly listed in the entity (for example, new kinds of shopping list items).

Properties:
Name Type Description
AUTO_EXPANSION_MODE_UNSPECIFIED number

Auto expansion disabled for the entity.

AUTO_EXPANSION_MODE_DEFAULT number

Allows an agent to recognize values that have not been explicitly listed in the entity.

Source:

(static) EntityOverrideMode :number

The types of modifications for a session entity type.

Properties:
Name Type Description
ENTITY_OVERRIDE_MODE_UNSPECIFIED number

Not specified. This value should never be used.

ENTITY_OVERRIDE_MODE_OVERRIDE number

The collection of session entities overrides the collection of entities in the corresponding developer entity type.

ENTITY_OVERRIDE_MODE_SUPPLEMENT number

The collection of session entities extends the collection of entities in the corresponding developer entity type.

Note: Even in this override mode calls to ListSessionEntityTypes, GetSessionEntityType, CreateSessionEntityType and UpdateSessionEntityType only return the additional entities added in this session entity type. If you want to get the supplemented list, please call EntityTypes.GetEntityType on the developer entity type and merge.

Source:

(static) HorizontalAlignment :number

Text alignments within a cell.

Properties:
Name Type Description
HORIZONTAL_ALIGNMENT_UNSPECIFIED number

Text is aligned to the leading edge of the column.

LEADING number

Text is aligned to the leading edge of the column.

CENTER number

Text is centered in the column.

TRAILING number

Text is aligned to the trailing edge of the column.

Source:

(static) ImageDisplayOptions :number

Image display options for Actions on Google. This should be used when the image's aspect ratio does not match the image container's aspect ratio.

Properties:
Name Type Description
IMAGE_DISPLAY_OPTIONS_UNSPECIFIED number

Fill the gaps between the image and the image container with gray bars.

GRAY number

Fill the gaps between the image and the image container with gray bars.

WHITE number

Fill the gaps between the image and the image container with white bars.

CROPPED number

Image is scaled such that the image width and height match or exceed the container dimensions. This may crop the top and bottom of the image if the scaled image height is greater than the container height, or crop the left and right of the image if the scaled image width is greater than the container width. This is similar to "Zoom Mode" on a widescreen TV when playing a 4:3 video.

BLURRED_BACKGROUND number

Pad the gaps between image and image frame with a blurred copy of the same image.

Source:

(static, constant) IntentView :number

Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.

Properties:
Name Type Description
INTENT_VIEW_UNSPECIFIED number

Training phrases field is not populated in the response.

INTENT_VIEW_FULL number

All fields are populated.

Source:

(static) Kind :number

Represents kinds of entities.

Properties:
Name Type Description
KIND_UNSPECIFIED number

Not specified. This value should never be used.

KIND_MAP number

Map entity types allow mapping of a group of synonyms to a canonical value.

KIND_LIST number

List entity types contain a set of entries that do not map to canonical values. However, list entity types can contain references to other entity types (with or without aliases).

KIND_REGEXP number

Regexp entity types allow specifying regular expressions in entry values.

Source:

(static) MatchMode :number

Match mode determines how intents are detected from user queries.

Properties:
Name Type Description
MATCH_MODE_UNSPECIFIED number

Not specified.

MATCH_MODE_HYBRID number

Best for agents with a small number of examples in intents and/or wide use of templates syntax and composite entities.

MATCH_MODE_ML_ONLY number

Can be used for agents with a large number of examples in intents, especially the ones using @sys.any or very large developer entities.

Source:

(static) MessageType :number

Type of the response message.

Properties:
Name Type Description
MESSAGE_TYPE_UNSPECIFIED number

Not specified. Should never be used.

TRANSCRIPT number

Message contains a (possibly partial) transcript.

END_OF_SINGLE_UTTERANCE number

Event indicates that the server has detected the end of the user's speech utterance and expects no additional inputs. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This message is only sent if single_utterance was set to true, and is not used otherwise.

Source:

(static, constant) OutputAudioEncoding :number

Audio encoding of the output audio format in Text-To-Speech.

Properties:
Name Type Description
OUTPUT_AUDIO_ENCODING_UNSPECIFIED number

Not specified.

OUTPUT_AUDIO_ENCODING_LINEAR_16 number

Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header.

OUTPUT_AUDIO_ENCODING_MP3 number

MP3 audio.

OUTPUT_AUDIO_ENCODING_OGG_OPUS number

Opus encoded audio wrapped in an Ogg container. The result is a file that can be played natively on Android and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate.

Source:

(static) Platform :number

Represents different platforms that a rich message can be intended for.

Properties:
Name Type Description
PLATFORM_UNSPECIFIED number

Not specified.

FACEBOOK number

Facebook.

SLACK number

Slack.

TELEGRAM number

Telegram.

KIK number

Kik.

SKYPE number

Skype.

LINE number

Line.

VIBER number

Viber.

ACTIONS_ON_GOOGLE number

Actions on Google. When using Actions on Google, you can choose one of the specific Intent.Message types that mention support for Actions on Google, or you can use the advanced Intent.Message.payload field. The payload field provides access to AoG features not available in the specific message types. If using the Intent.Message.payload field, it should have a structure similar to the JSON message shown below. For more information, see the Actions on Google Webhook Format.

{
  "expectUserResponse": true,
  "isSsml": false,
  "noInputPrompts": [],
  "richResponse": {
    "items": [
      {
        "simpleResponse": {
          "displayText": "hi",
          "textToSpeech": "hello"
        }
      }
    ],
    "suggestions": [
      {
        "title": "Say this"
      },
      {
        "title": "or this"
      }
    ]
  },
  "systemIntent": {
    "data": {
      "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
      "listSelect": {
        "items": [
          {
            "optionInfo": {
              "key": "key1",
              "synonyms": [
                "key one"
              ]
            },
            "title": "must not be empty, but unique"
          },
          {
            "optionInfo": {
              "key": "key2",
              "synonyms": [
                "key two"
              ]
            },
            "title": "must not be empty, but unique"
          }
        ]
      }
    },
    "intent": "actions.intent.OPTION"
  }
}
GOOGLE_HANGOUTS number

Google Hangouts.

Source:

(static) ResponseMediaType :number

Format of response media type.

Properties:
Name Type Description
RESPONSE_MEDIA_TYPE_UNSPECIFIED number

Unspecified.

AUDIO number

Response media type is audio.

Source:

(static, constant) SpeechModelVariant :number

Variant of the specified Speech model to use.

See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.

Properties:
Name Type Description
SPEECH_MODEL_VARIANT_UNSPECIFIED number

No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE.

USE_BEST_AVAILABLE number

Use the best available variant of the Speech model that the caller is eligible for.

Please see the Dialogflow docs for how to make your project eligible for enhanced models.

USE_STANDARD number

Use standard model variant even if an enhanced model is available. See the Cloud Speech documentation for details about enhanced models.

USE_ENHANCED number

Use an enhanced model variant:

  • If an enhanced variant does not exist for the given model and request language, Dialogflow falls back to the standard variant.

    The Cloud Speech documentation describes which models have enhanced variants.

  • If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the Dialogflow docs for how to make your project eligible.

Source:

(static, constant) SsmlVoiceGender :number

Gender of the voice as described in SSML voice element.

Properties:
Name Type Description
SSML_VOICE_GENDER_UNSPECIFIED number

An unspecified gender, which means that the client doesn't care which gender the selected voice will have.

SSML_VOICE_GENDER_MALE number

A male voice.

SSML_VOICE_GENDER_FEMALE number

A female voice.

SSML_VOICE_GENDER_NEUTRAL number

A gender-neutral voice.

Source:

(static) Tier :number

Represents the agent tier.

Properties:
Name Type Description
TIER_UNSPECIFIED number

Not specified. This value should never be used.

TIER_STANDARD number

Standard tier.

TIER_ENTERPRISE number

Enterprise tier (Essentials).

TIER_ENTERPRISE_PLUS number

Enterprise tier (Plus).

Source:

(static) Type :number

Represents different types of training phrases.

Properties:
Name Type Description
TYPE_UNSPECIFIED number

Not specified. This value should never be used.

EXAMPLE number

Examples do not contain @-prefixed entity type names, but example parts can be annotated with entity types.

TEMPLATE number

Templates are not annotated with entity types, but they can contain @-prefixed entity type names as substrings.

Source:

(static) UrlTypeHint :number

Type of the URI.

Properties:
Name Type Description
URL_TYPE_HINT_UNSPECIFIED number

Unspecified.

AMP_ACTION number

The URL would be an AMP action.

AMP_CONTENT number

URL that points directly to AMP content, or to a canonical URL that refers to AMP content via <link rel="amphtml">.

Source:

(static) WebhookState :number

Represents the different states that webhooks can be in.

Properties:
Name Type Description
WEBHOOK_STATE_UNSPECIFIED number

Webhook is disabled in the agent and in the intent.

WEBHOOK_STATE_ENABLED number

Webhook is enabled in the agent and in the intent.

WEBHOOK_STATE_ENABLED_FOR_SLOT_FILLING number

Webhook is enabled in the agent and in the intent. Also, each slot filling prompt is forwarded to the webhook.

Source:

Type Definitions

Agent

Represents a conversational agent.

Properties:
Name Type Description
parent string

Required. The project of this agent. Format: projects/<Project ID>.

displayName string

Required. The name of this agent.

defaultLanguageCode string

Required. The default language of the agent as a language tag. See Language Support for a list of the currently supported language codes. This field cannot be set by the Update method.

supportedLanguageCodes Array.<string>

Optional. The list of all languages supported by this agent (except for the default_language_code).

timeZone string

Required. The time zone of this agent from the time zone database, e.g., America/New_York, Europe/Paris.

description string

Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.

avatarUri string

Optional. The URI of the agent's avatar. Avatars are used throughout the Dialogflow console and in the self-hosted Web Demo integration.

enableLogging boolean

Optional. Determines whether this agent should log conversation queries.

matchMode number

Optional. Determines how intents are detected from user queries.

The number should be among the values of MatchMode

classificationThreshold number

Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.

apiVersion number

Optional. API version displayed in the Dialogflow console. If not specified, the V2 API is assumed. Clients are free to query different service endpoints for different API versions. However, bot connectors and webhook calls will follow the specified API version.

The number should be among the values of ApiVersion

tier number

Optional. The agent tier. If not specified, TIER_STANDARD is assumed.

The number should be among the values of Tier

Source:
See:
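A minimal Agent resource built from the fields above might be sketched as follows (the project ID and display name are placeholders; enum-valued fields accept the string names of the values documented in this section):

```javascript
// Hypothetical Agent resource for illustration only.
const agent = {
  parent: 'projects/my-project-id',
  displayName: 'RoomBookingAgent',
  defaultLanguageCode: 'en',
  timeZone: 'America/New_York',
  matchMode: 'MATCH_MODE_HYBRID',
  classificationThreshold: 0.3, // setting 0.0 falls back to the default of 0.3
  apiVersion: 'API_VERSION_V2',
  tier: 'TIER_STANDARD',
};
```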

BasicCard

The basic card message. Useful for displaying information.

Properties:
Name Type Description
title string

Optional. The title of the card.

subtitle string

Optional. The subtitle of the card.

formattedText string

Required, unless image is present. The body text of the card.

image Object

Optional. The image for the card.

This object should have the same structure as Image

buttons Array.<Object>

Optional. The collection of card buttons.

This object should have the same structure as Button

Source:
See:

BatchCreateEntitiesRequest

The request message for EntityTypes.BatchCreateEntities.

Properties:
Name Type Description
parent string

Required. The name of the entity type to create entities in. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

entities Array.<Object>

Required. The entities to create.

This object should have the same structure as Entity

languageCode string

Optional. The language of entity synonyms defined in entities. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

Source:
See:

BatchDeleteEntitiesRequest

The request message for EntityTypes.BatchDeleteEntities.

Properties:
Name Type Description
parent string

Required. The name of the entity type to delete entries for. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

entityValues Array.<string>

Required. The canonical values of the entities to delete. Note that these are not fully-qualified names, i.e. they don't start with projects/<Project ID>.

languageCode string

Optional. The language of entity synonyms defined in entities. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

Source:
See:

BatchDeleteEntityTypesRequest

The request message for EntityTypes.BatchDeleteEntityTypes.

Properties:
Name Type Description
parent string

Required. The name of the agent to delete all entity types from. Format: projects/<Project ID>/agent.

entityTypeNames Array.<string>

Required. The names of the entity types to delete. All names must point to the same agent as parent.

Source:
See:

BatchDeleteIntentsRequest

The request message for Intents.BatchDeleteIntents.

Properties:
Name Type Description
parent string

Required. The name of the agent to delete intents from. Format: projects/<Project ID>/agent.

intents Array.<Object>

Required. The collection of intents to delete. Only the intent name needs to be filled in.

This object should have the same structure as Intent

Source:
See:

BatchUpdateEntitiesRequest

The request message for EntityTypes.BatchUpdateEntities.

Properties:
Name Type Description
parent string

Required. The name of the entity type to update or create entities in. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

entities Array.<Object>

Required. The entities to update or create.

This object should have the same structure as Entity

languageCode string

Optional. The language of entity synonyms defined in entities. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

Source:
See:

BatchUpdateEntityTypesRequest

The request message for EntityTypes.BatchUpdateEntityTypes.

Properties:
Name Type Description
parent string

Required. The name of the agent to update or create entity types in. Format: projects/<Project ID>/agent.

entityTypeBatchUri string

The URI to a Google Cloud Storage file containing entity types to update or create. The file format can either be a serialized proto (of EntityBatch type) or a JSON object. Note: The URI must start with "gs://".

entityTypeBatchInline Object

The collection of entity types to update or create.

This object should have the same structure as EntityTypeBatch

languageCode string

Optional. The language of entity synonyms defined in entity_types. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

Source:
See:
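For example, a batch update driven by a Cloud Storage file could be sketched like this (bucket and file names are placeholders; entityTypeBatchUri and entityTypeBatchInline are alternatives, so only one is set here):

```javascript
// Hypothetical batch-update request; the URI must start with "gs://".
const batchUpdateEntityTypesRequest = {
  parent: 'projects/my-project-id/agent',
  entityTypeBatchUri: 'gs://my-bucket/entity_types.json',
  updateMask: { paths: ['entities'] }, // FieldMask: only update the entities field
};
```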

BatchUpdateEntityTypesResponse

The response message for EntityTypes.BatchUpdateEntityTypes.

Properties:
Name Type Description
entityTypes Array.<Object>

The collection of updated or created entity types.

This object should have the same structure as EntityType

Source:
See:

BatchUpdateIntentsRequest

The request message for Intents.BatchUpdateIntents.

Properties:
Name Type Description
parent string

Required. The name of the agent to update or create intents in. Format: projects/<Project ID>/agent.

intentBatchUri string

The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://".

intentBatchInline Object

The collection of intents to update or create.

This object should have the same structure as IntentBatch

languageCode string

Optional. The language of training phrases, parameters and rich messages defined in intents. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

Source:
See:

BatchUpdateIntentsResponse

The response message for Intents.BatchUpdateIntents.

Properties:
Name Type Description
intents Array.<Object>

The collection of updated or created intents.

This object should have the same structure as Intent

Source:
See:

BrowseCarouselCard

Browse Carousel Card for Actions on Google. https://developers.google.com/actions/assistant/responses#browsing_carousel

Properties:
Name Type Description
items Array.<Object>

Required. List of items in the Browse Carousel Card. Minimum of two items, maximum of ten.

This object should have the same structure as BrowseCarouselCardItem

imageDisplayOptions number

Optional. Settings for displaying the image. Applies to every image in items.

The number should be among the values of ImageDisplayOptions

Source:
See:

BrowseCarouselCardItem

A tile in the browsing carousel.

Properties:
Name Type Description
openUriAction Object

Required. Action to present to the user.

This object should have the same structure as OpenUrlAction

title string

Required. Title of the carousel item. Maximum of two lines of text.

description string

Optional. Description of the carousel item. Maximum of four lines of text.

image Object

Optional. Hero image for the carousel item.

This object should have the same structure as Image

footer string

Optional. Text that appears at the bottom of the Browse Carousel Card. Maximum of one line of text.

Source:
See:

Button

The button object that appears at the bottom of a card.

Properties:
Name Type Description
title string

Required. The title of the button.

openUriAction Object

Required. Action to take when a user taps on the button.

This object should have the same structure as OpenUriAction

Source:
See:

Button

Contains information about a button.

Properties:
Name Type Description
text string

Optional. The text to show on the button.

postback string

Optional. The text to send back to the Dialogflow API or a URI to open.

Source:
See:

Card

The card response message.

Properties:
Name Type Description
title string

Optional. The title of the card.

subtitle string

Optional. The subtitle of the card.

imageUri string

Optional. The public URI to an image file for the card.

buttons Array.<Object>

Optional. The collection of card buttons.

This object should have the same structure as Button

Source:
See:

CarouselSelect

The card for presenting a carousel of options to select from.

Properties:
Name Type Description
items Array.<Object>

Required. Carousel items.

This object should have the same structure as Item

Source:
See:

ColumnProperties

Column properties for TableCard.

Properties:
Name Type Description
header string

Required. Column heading.

horizontalAlignment number

Optional. Defines text alignment for all cells in this column.

The number should be among the values of HorizontalAlignment

Source:
See:

Context

Represents a context.

Properties:
Name Type Description
name string

Required. The unique identifier of the context. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>.

The Context ID is always converted to lowercase, may only contain characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long.

lifespanCount number

Optional. The number of conversational query requests after which the context expires. If set to 0 (the default) the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.

parameters Object

Optional. The collection of parameters associated with this context. Refer to this doc for syntax.

This object should have the same structure as Struct

Source:
See:
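A context carrying parameters might be sketched as below. The parameters field is a protobuf Struct, so its raw JSON form nests values under fields (client-side helper utilities are often used to build this encoding); the session and context IDs are placeholders:

```javascript
// Hypothetical context; lifespanCount of 5 means it survives five
// conversational queries (and at most 20 minutes of inactivity).
const context = {
  name: 'projects/my-project-id/agent/sessions/my-session-id/contexts/booking-followup',
  lifespanCount: 5,
  parameters: {
    fields: { room: { stringValue: 'A101' } }, // raw Struct encoding
  },
};
```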

CreateContextRequest

The request message for Contexts.CreateContext.

Properties:
Name Type Description
parent string

Required. The session to create a context for. Format: projects/<Project ID>/agent/sessions/<Session ID>.

context Object

Required. The context to create.

This object should have the same structure as Context

Source:
See:

CreateEntityTypeRequest

The request message for EntityTypes.CreateEntityType.

Properties:
Name Type Description
parent string

Required. The agent to create an entity type for. Format: projects/<Project ID>/agent.

entityType Object

Required. The entity type to create.

This object should have the same structure as EntityType

languageCode string

Optional. The language of entity synonyms defined in entity_type. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

Source:
See:

CreateIntentRequest

The request message for Intents.CreateIntent.

Properties:
Name Type Description
parent string

Required. The agent to create an intent for. Format: projects/<Project ID>/agent.

intent Object

Required. The intent to create.

This object should have the same structure as Intent

languageCode string

Optional. The language of training phrases, parameters and rich messages defined in intent. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

Source:
See:

CreateSessionEntityTypeRequest

The request message for SessionEntityTypes.CreateSessionEntityType.

Properties:
Name Type Description
parent string

Required. The session to create a session entity type for. Format: projects/<Project ID>/agent/sessions/<Session ID>.

sessionEntityType Object

Required. The session entity type to create.

This object should have the same structure as SessionEntityType

Source:
See:

DeleteAgentRequest

The request message for Agents.DeleteAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to delete is associated with. Format: projects/<Project ID>.

Source:
See:

DeleteAllContextsRequest

The request message for Contexts.DeleteAllContexts.

Properties:
Name Type Description
parent string

Required. The name of the session to delete all contexts from. Format: projects/<Project ID>/agent/sessions/<Session ID>.

Source:
See:

DeleteContextRequest

The request message for Contexts.DeleteContext.

Properties:
Name Type Description
name string

Required. The name of the context to delete. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>.

Source:
See:

DeleteEntityTypeRequest

The request message for EntityTypes.DeleteEntityType.

Properties:
Name Type Description
name string

Required. The name of the entity type to delete. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

Source:
See:

DeleteIntentRequest

The request message for Intents.DeleteIntent.

Properties:
Name Type Description
name string

Required. The name of the intent to delete. If this intent has direct or indirect followup intents, we also delete them. Format: projects/<Project ID>/agent/intents/<Intent ID>.

Source:
See:

DeleteSessionEntityTypeRequest

The request message for SessionEntityTypes.DeleteSessionEntityType.

Properties:
Name Type Description
name string

Required. The name of the entity type to delete. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>.

Source:
See:

DetectIntentRequest

The request to detect user's intent.

Properties:
Name Type Description
session string

Required. The name of the session this query is sent to. Format: projects/<Project ID>/agent/sessions/<Session ID>. It's up to the API caller to choose an appropriate session ID. It can be a random number or some type of user identifier (preferably hashed). The length of the session ID must not exceed 36 bytes.

queryParams Object

Optional. The parameters of this query.

This object should have the same structure as QueryParameters

queryInput Object

Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.

This object should have the same structure as QueryInput

outputAudioConfig Object

Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.

This object should have the same structure as OutputAudioConfig

inputAudio Buffer

Optional. The natural language speech audio to be processed. This field should be populated if and only if query_input is set to an input audio config. A single request can contain up to 1 minute of speech audio data.

Source:
See:
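Putting the fields together, a text-based detect-intent request might be sketched as follows (the text form of QueryInput is assumed from the QueryInput message, which is not listed in this section; the project and session IDs are placeholders):

```javascript
// Hypothetical DetectIntentRequest with a conversational text query.
const detectIntentRequest = {
  session: 'projects/my-project-id/agent/sessions/my-session-id', // session ID <= 36 bytes
  queryInput: {
    text: { text: 'Book a room for Tuesday', languageCode: 'en-US' },
  },
};

// With the Node.js client this would typically be sent as:
//   const [response] = await sessionsClient.detectIntent(detectIntentRequest);
```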

DetectIntentResponse

The message returned from the DetectIntent method.

Properties:
Name Type Description
responseId string

The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.

queryResult Object

The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.

This object should have the same structure as QueryResult

webhookStatus Object

Specifies the status of the webhook request.

This object should have the same structure as Status

outputAudio Buffer

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

outputAudioConfig Object

The config used by the speech synthesizer to generate the output audio.

This object should have the same structure as OutputAudioConfig

Source:
See:

Entity

An entity entry for an associated entity type.

Properties:
Name Type Description
value string

Required. The primary value associated with this entity entry. For example, if the entity type is vegetable, the value could be scallions.

For KIND_MAP entity types:

  • A canonical value to be used in place of synonyms.

For KIND_LIST entity types:

  • A string that can contain references to other entity types (with or without aliases).
synonyms Array.<string>

Required. A collection of value synonyms. For example, if the entity type is vegetable, and value is scallions, a synonym could be green onions.

For KIND_LIST entity types:

  • This collection must contain exactly one synonym equal to value.
Source:
See:
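
The two kinds described above differ only in how synonyms relate to value. A sketch of both shapes as plain objects (the vegetable values are illustrative, echoing the description's example):

```javascript
// A KIND_MAP entity entry: one canonical value plus its synonyms.
const mapEntity = {
  value: 'scallions',
  synonyms: ['scallions', 'green onions', 'spring onions'],
};

// A KIND_LIST entity entry: the synonyms collection must contain
// exactly one synonym equal to value.
const listEntity = {
  value: 'soup',
  synonyms: ['soup'],
};
```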

EntityType

Represents an entity type. Entity types serve as a tool for extracting parameter values from natural language queries.

Properties:
Name Type Description
name string

The unique identifier of the entity type. Required for EntityTypes.UpdateEntityType and EntityTypes.BatchUpdateEntityTypes methods. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

displayName string

Required. The name of the entity type.

kind number

Required. Indicates the kind of entity type.

The number should be among the values of Kind

autoExpansionMode number

Optional. Indicates whether the entity type can be automatically expanded.

The number should be among the values of AutoExpansionMode

entities Array.<Object>

Optional. The collection of entity entries associated with the entity type.

This object should have the same structure as Entity

enableFuzzyExtraction boolean

Optional. Enables fuzzy entity extraction during classification.

Source:
See:

EntityTypeBatch

This message is a wrapper around a collection of entity types.

Properties:
Name Type Description
entityTypes Array.<Object>

A collection of entity types.

This object should have the same structure as EntityType

Source:
See:

EventInput

Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".

Properties:
Name Type Description
name string

Required. The unique identifier of the event.

parameters Object

Optional. The collection of parameters associated with the event.

This object should have the same structure as Struct

languageCode string

Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

Source:
See:
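
The welcome_event example from the description can be written as a plain object. Note that when sent through the API, `parameters` must be encoded as a protobuf Struct; the flat shape below is a sketch for illustration only:

```javascript
// The welcome_event example as a plain object. In a real request the
// parameters field must be Struct-encoded, not a flat object.
const eventInput = {
  name: 'welcome_event',
  parameters: { name: 'Sam' },
  languageCode: 'en-US',
};
```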

ExportAgentRequest

The request message for Agents.ExportAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to export is associated with. Format: projects/<Project ID>.

agentUri string

Required. The Google Cloud Storage URI to export the agent to. The format of this URI must be gs://<bucket-name>/<object-name>. If left unspecified, the serialized agent is returned inline.

Source:
See:

ExportAgentResponse

The response message for Agents.ExportAgent.

Properties:
Name Type Description
agentUri string

The URI to a file containing the exported agent. This field is populated only if agent_uri is specified in ExportAgentRequest.

agentContent Buffer

Zip compressed raw byte content for agent.

Source:
See:

FollowupIntentInfo

Represents a single followup intent in the chain.

Properties:
Name Type Description
followupIntentName string

The unique identifier of the followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

parentFollowupIntentName string

The unique identifier of the followup intent's parent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

Source:
See:

GetAgentRequest

The request message for Agents.GetAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to fetch is associated with. Format: projects/<Project ID>.

Source:
See:

GetContextRequest

The request message for Contexts.GetContext.

Properties:
Name Type Description
name string

Required. The name of the context. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>.

Source:
See:

GetEntityTypeRequest

The request message for EntityTypes.GetEntityType.

Properties:
Name Type Description
name string

Required. The name of the entity type. Format: projects/<Project ID>/agent/entityTypes/<EntityType ID>.

languageCode string

Optional. The language to retrieve entity synonyms for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

Source:
See:

GetIntentRequest

The request message for Intents.GetIntent.

Properties:
Name Type Description
name string

Required. The name of the intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

languageCode string

Optional. The language to retrieve training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

Source:
See:

GetSessionEntityTypeRequest

The request message for SessionEntityTypes.GetSessionEntityType.

Properties:
Name Type Description
name string

Required. The name of the session entity type. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>.

Source:
See:

Image

The image response message.

Properties:
Name Type Description
imageUri string

Optional. The public URI to an image file.

accessibilityText string

Optional. A text description of the image to be used for accessibility, e.g., screen readers.

Source:
See:

ImportAgentRequest

The request message for Agents.ImportAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to import is associated with. Format: projects/<Project ID>.

agentUri string

The URI to a Google Cloud Storage file containing the agent to import. Note: The URI must start with "gs://".

agentContent Buffer

Zip compressed raw byte content for agent.

Source:
See:

InputAudioConfig

Instructs the speech recognizer how to process the audio content.

Properties:
Name Type Description
audioEncoding number

Required. Audio encoding of the audio content to process.

The number should be among the values of AudioEncoding

sampleRateHertz number

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

languageCode string

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

phraseHints Array.<string>

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

modelVariant number

Optional. Which variant of the Speech model to use.

The number should be among the values of SpeechModelVariant

singleUtterance boolean

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.

Source:
See:
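
Putting the fields above together, a sketch of an InputAudioConfig for 16 kHz linear PCM with a couple of phrase hints (assuming, as is usual for this client, that enum fields accept the string name of the enum value; the hint phrases are illustrative):

```javascript
// An InputAudioConfig for 16 kHz uncompressed PCM audio.
const inputAudioConfig = {
  audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  // Bias recognition toward domain vocabulary.
  phraseHints: ['Dialogflow', 'session entity'],
  // Stop after one spoken utterance (streaming methods only).
  singleUtterance: true,
};
```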

Intent

Represents an intent. Intents convert a number of user expressions or patterns into an action. An action is an extraction of a user command or sentence semantics.

Properties:
Name Type Description
name string

The unique identifier of this intent. Required for Intents.UpdateIntent and Intents.BatchUpdateIntents methods. Format: projects/<Project ID>/agent/intents/<Intent ID>.

displayName string

Required. The name of this intent.

webhookState number

Optional. Indicates whether webhooks are enabled for the intent.

The number should be among the values of WebhookState

priority number

Optional. The priority of this intent. Higher numbers represent higher priorities.

  • If the supplied value is unspecified or 0, the service translates the value to 500,000, which corresponds to the Normal priority in the console.
  • If the supplied value is negative, the intent is ignored in runtime detect intent requests.
isFallback boolean

Optional. Indicates whether this is a fallback intent.

mlDisabled boolean

Optional. Indicates whether Machine Learning is disabled for the intent. Note: If ml_disabled setting is set to true, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off.

inputContextNames Array.<string>

Optional. The list of context names required for this intent to be triggered. Format: projects/<Project ID>/agent/sessions/-/contexts/<Context ID>.

events Array.<string>

Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent.

trainingPhrases Array.<Object>

Optional. The collection of examples that the agent is trained on.

This object should have the same structure as TrainingPhrase

action string

Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.

outputContexts Array.<Object>

Optional. The collection of contexts that are activated when the intent is matched. Context messages in this collection should not set the parameters field. Setting the lifespan_count to 0 will reset the context when the intent is matched. Format: projects/<Project ID>/agent/sessions/-/contexts/<Context ID>.

This object should have the same structure as Context

resetContexts boolean

Optional. Indicates whether to delete all contexts in the current session when this intent is matched.

parameters Array.<Object>

Optional. The collection of parameters associated with the intent.

This object should have the same structure as Parameter

messages Array.<Object>

Optional. The collection of rich messages corresponding to the Response field in the Dialogflow console.

This object should have the same structure as Message

defaultResponsePlatforms Array.<number>

Optional. The list of platforms for which the first responses will be copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).

The number should be among the values of Platform

rootFollowupIntentName string

Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output.

Format: projects/<Project ID>/agent/intents/<Intent ID>.

parentFollowupIntentName string

Read-only after creation. The unique identifier of the parent intent in the chain of followup intents. You can set this field when creating an intent, for example with CreateIntent or BatchUpdateIntents, in order to make this intent a followup intent.

It identifies the parent followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

followupIntentInfo Array.<Object>

Read-only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.

This object should have the same structure as FollowupIntentInfo

Source:
See:
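
A minimal Intent, as might be passed to a CreateIntent call: a display name, one training phrase, and one text response. The TrainingPhrase and Message shapes follow the structures referenced above; the display name and strings are illustrative:

```javascript
// A minimal intent definition: one training phrase, one text reply.
const intent = {
  displayName: 'order.pizza',
  trainingPhrases: [
    { type: 'EXAMPLE', parts: [{ text: 'I want a pizza' }] },
  ],
  messages: [
    { text: { text: ['What size would you like?'] } },
  ],
  priority: 500000, // corresponds to Normal priority in the console
};
```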

IntentBatch

This message is a wrapper around a collection of intents.

Properties:
Name Type Description
intents Array.<Object>

A collection of intents.

This object should have the same structure as Intent

Source:
See:

Item

An item in the list.

Properties:
Name Type Description
info Object

Required. Additional information about this option.

This object should have the same structure as SelectItemInfo

title string

Required. The title of the list item.

description string

Optional. The main text describing the item.

image Object

Optional. The image to display.

This object should have the same structure as Image

Source:
See:

Item

An item in the carousel.

Properties:
Name Type Description
info Object

Required. Additional info about the option item.

This object should have the same structure as SelectItemInfo

title string

Required. Title of the carousel item.

description string

Optional. The body text of the card.

image Object

Optional. The image to display.

This object should have the same structure as Image

Source:
See:

LinkOutSuggestion

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

Properties:
Name Type Description
destinationName string

Required. The name of the app or site this chip is linking to.

uri string

Required. The URI of the app or site to open when the user taps the suggestion chip.

Source:
See:

ListContextsRequest

The request message for Contexts.ListContexts.

Properties:
Name Type Description
parent string

Required. The session to list all contexts from. Format: projects/<Project ID>/agent/sessions/<Session ID>.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListContextsResponse

The response message for Contexts.ListContexts.

Properties:
Name Type Description
contexts Array.<Object>

The list of contexts. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as Context

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:
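
The page_size/next_page_token pattern above is the same for all the List* methods here: pass the returned next_page_token back as page_token until it comes back empty. A sketch of that loop, with `fetchPage` standing in for a real listContexts call so the logic can run offline:

```javascript
// Pagination sketch: accumulate every page until nextPageToken is empty.
// fetchPage is a stand-in for an actual List* request function.
async function listAll(fetchPage) {
  const items = [];
  let pageToken = '';
  do {
    const resp = await fetchPage({ pageSize: 100, pageToken });
    items.push(...resp.contexts);
    pageToken = resp.nextPageToken;
  } while (pageToken);
  return items;
}
```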

ListEntityTypesRequest

The request message for EntityTypes.ListEntityTypes.

Properties:
Name Type Description
parent string

Required. The agent to list all entity types from. Format: projects/<Project ID>/agent.

languageCode string

Optional. The language to list entity synonyms for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListEntityTypesResponse

The response message for EntityTypes.ListEntityTypes.

Properties:
Name Type Description
entityTypes Array.<Object>

The list of agent entity types. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as EntityType

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

ListIntentsRequest

The request message for Intents.ListIntents.

Properties:
Name Type Description
parent string

Required. The agent to list all intents from. Format: projects/<Project ID>/agent.

languageCode string

Optional. The language to list training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListIntentsResponse

The response message for Intents.ListIntents.

Properties:
Name Type Description
intents Array.<Object>

The list of agent intents. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as Intent

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

ListSelect

The card for presenting a list of options to select from.

Properties:
Name Type Description
title string

Optional. The overall title of the list.

items Array.<Object>

Required. List items.

This object should have the same structure as Item

subtitle string

Optional. Subtitle of the list.

Source:
See:

ListSessionEntityTypesRequest

The request message for SessionEntityTypes.ListSessionEntityTypes.

Properties:
Name Type Description
parent string

Required. The session to list all session entity types from. Format: projects/<Project ID>/agent/sessions/<Session ID>.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListSessionEntityTypesResponse

The response message for SessionEntityTypes.ListSessionEntityTypes.

Properties:
Name Type Description
sessionEntityTypes Array.<Object>

The list of session entity types. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as SessionEntityType

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

MediaContent

The media content card for Actions on Google.

Properties:
Name Type Description
mediaType number

Optional. The type of media of the content (e.g., "audio").

The number should be among the values of ResponseMediaType

mediaObjects Array.<Object>

Required. List of media objects.

This object should have the same structure as ResponseMediaObject

Source:
See:

Message

Corresponds to the Response field in the Dialogflow console.

Properties:
Name Type Description
text Object

The text response.

This object should have the same structure as Text

image Object

The image response.

This object should have the same structure as Image

quickReplies Object

The quick replies response.

This object should have the same structure as QuickReplies

card Object

The card response.

This object should have the same structure as Card

payload Object

Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform.

This object should have the same structure as Struct

simpleResponses Object

The voice and text-only responses for Actions on Google.

This object should have the same structure as SimpleResponses

basicCard Object

The basic card response for Actions on Google.

This object should have the same structure as BasicCard

suggestions Object

The suggestion chips for Actions on Google.

This object should have the same structure as Suggestions

linkOutSuggestion Object

The link out suggestion chip for Actions on Google.

This object should have the same structure as LinkOutSuggestion

listSelect Object

The list card response for Actions on Google.

This object should have the same structure as ListSelect

carouselSelect Object

The carousel card response for Actions on Google.

This object should have the same structure as CarouselSelect

browseCarouselCard Object

Browse carousel card for Actions on Google.

This object should have the same structure as BrowseCarouselCard

tableCard Object

Table card for Actions on Google.

This object should have the same structure as TableCard

mediaContent Object

The media content card for Actions on Google.

This object should have the same structure as MediaContent

platform number

Optional. The platform that this message is intended for.

The number should be among the values of Platform

Source:
See:

OpenUriAction

Opens the given URI.

Properties:
Name Type Description
uri string

Required. The HTTP or HTTPS scheme URI.

Source:
See:

OpenUrlAction

Actions on Google action to open a given URL.

Properties:
Name Type Description
url string

Required. URL

urlTypeHint number

Optional. Specifies the type of viewer that is used when opening the URL. Defaults to opening via web browser.

The number should be among the values of UrlTypeHint

Source:
See:

OutputAudioConfig

Instructs the speech synthesizer on how to generate the output audio content.

Properties:
Name Type Description
audioEncoding number

Required. Audio encoding of the synthesized audio content.

The number should be among the values of OutputAudioEncoding

sampleRateHertz number

Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).

synthesizeSpeechConfig Object

Optional. Configuration of how speech should be synthesized.

This object should have the same structure as SynthesizeSpeechConfig

Source:
See:

Parameter

Represents intent parameters.

Properties:
Name Type Description
name string

The unique identifier of this parameter.

displayName string

Required. The name of the parameter.

value string

Optional. The definition of the parameter value. It can be:

  • a constant string,
  • a parameter value defined as $parameter_name,
  • an original parameter value defined as $parameter_name.original,
  • a parameter value from some context defined as #context_name.parameter_name.
defaultValue string

Optional. The default value to use when the value yields an empty result. Default values can be extracted from contexts by using the following syntax: #context_name.parameter_name.

entityTypeDisplayName string

Optional. The name of the entity type, prefixed with @, that describes values of the parameter. If the parameter is required, this must be provided.

mandatory boolean

Optional. Indicates whether the parameter is required. That is, whether the intent cannot be completed without collecting the parameter value.

prompts Array.<string>

Optional. The collection of prompts that the agent can present to the user in order to collect a value for the parameter.

isList boolean

Optional. Indicates whether the parameter represents a list of values.

Source:
See:

Part

Represents a part of a training phrase.

Properties:
Name Type Description
text string

Required. The text for this part.

entityType string

Optional. The entity type name prefixed with @. This field is required for annotated parts of the training phrase.

alias string

Optional. The parameter name for the value extracted from the annotated part of the example. This field is required for annotated parts of the training phrase.

userDefined boolean

Optional. Indicates whether the text was manually annotated. This field is set to true when the Dialogflow Console is used to manually annotate the part. When creating an annotated part with the API, you must set this to true.

Source:
See:

QueryInput

Represents the query input. It can contain either:

  1. An audio config which instructs the speech recognizer how to process the speech audio.

  2. A conversational query in the form of text.

  3. An event that specifies which intent to trigger.

Properties:
Name Type Description
audioConfig Object

Instructs the speech recognizer how to process the speech audio.

This object should have the same structure as InputAudioConfig

text Object

The natural language text to be processed.

This object should have the same structure as TextInput

event Object

The event to be processed.

This object should have the same structure as EventInput

Source:
See:
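
QueryInput is effectively a oneof: set exactly one of audioConfig, text, or event per request. A sketch of the two non-audio variants (text and language code values are illustrative):

```javascript
// A text query input: exactly one of the three fields is populated.
const textQueryInput = {
  text: { text: 'book a table for two', languageCode: 'en-US' },
};

// The same slot could instead carry an event trigger.
const eventQueryInput = {
  event: { name: 'welcome_event', languageCode: 'en-US' },
};
```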

QueryParameters

Represents the parameters of the conversational query.

Properties:
Name Type Description
timeZone string

Optional. The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used.

geoLocation Object

Optional. The geo location of this conversational query.

This object should have the same structure as LatLng

contexts Array.<Object>

Optional. The collection of contexts to be activated before this query is executed.

This object should have the same structure as Context

resetContexts boolean

Optional. Specifies whether to delete all contexts in the current session before the new ones are activated.

sessionEntityTypes Array.<Object>

Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query.

This object should have the same structure as SessionEntityType

payload Object

Optional. This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported.

This object should have the same structure as Struct

sentimentAnalysisRequestConfig Object

Optional. Configures the type of sentiment analysis to perform. If not provided, sentiment analysis is not performed.

This object should have the same structure as SentimentAnalysisRequestConfig

Source:
See:
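
A sketch combining several of the fields above: a time zone, a pre-activated context, and sentiment analysis enabled for query_text (the project, session, and context identifiers are illustrative):

```javascript
// QueryParameters activating a context before the query runs and
// requesting sentiment analysis on the query text.
const queryParams = {
  timeZone: 'America/New_York',
  contexts: [
    {
      name: 'projects/my-project/agent/sessions/my-session/contexts/ordering',
      lifespanCount: 5,
    },
  ],
  sentimentAnalysisRequestConfig: { analyzeQueryTextSentiment: true },
};
```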

QueryResult

Represents the result of conversational query or event processing.

Properties:
Name Type Description
queryText string

The original conversational query text:

  • If natural language text was provided as input, query_text contains a copy of the input.
  • If natural language speech audio was provided as input, query_text contains the speech recognition result. If the speech recognizer produced multiple alternatives, a particular one is picked.
  • If automatic spell correction is enabled, query_text will contain the corrected user input.
languageCode string

The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes.

speechRecognitionConfidence number

The Speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is not guaranteed to be accurate or set. In particular this field isn't set for StreamingDetectIntent since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult.

action string

The action name from the matched intent.

parameters Object

The collection of extracted parameters.

This object should have the same structure as Struct

allRequiredParamsPresent boolean

This field is set to:

  • false if the matched intent has required parameters and not all of the required parameter values have been collected.
  • true if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
fulfillmentText string

The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, fulfillment_messages should be preferred.

fulfillmentMessages Array.<Object>

The collection of rich messages to present to the user.

This object should have the same structure as Message

webhookSource string

If the query was fulfilled by a webhook call, this field is set to the value of the source field returned in the webhook response.

webhookPayload Object

If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response.

This object should have the same structure as Struct

outputContexts Array.<Object>

The collection of output contexts. If applicable, output_contexts.parameters contains entries with name <parameter name>.original containing the original parameter values before the query.

This object should have the same structure as Context

intent Object

The intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to: name, display_name, end_interaction and is_fallback.

This object should have the same structure as Intent

intentDetectionConfidence number

The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purpose only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are multiple knowledge_answers messages, this value is set to the greatest knowledgeAnswers.match_confidence value in the list.

diagnosticInfo Object

The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice.

This object should have the same structure as Struct

sentimentAnalysisResult Object

The sentiment analysis result, which depends on the sentiment_analysis_request_config specified in the request.

This object should have the same structure as SentimentAnalysisResult

Source:
See:

QuickReplies

The quick replies response message.

Properties:
Name Type Description
title string

Optional. The title of the collection of quick replies.

quickReplies Array.<string>

Optional. The collection of quick replies.

Source:
See:

ResponseMediaObject

Response media object for media content card.

Properties:
Name Type Description
name string

Required. Name of media card.

description string

Optional. Description of media card.

largeImage Object

Optional. Image to display above media content.

This object should have the same structure as Image

icon Object

Optional. Icon to display above media content.

This object should have the same structure as Image

contentUrl string

Required. The URL where the media is stored.

Source:
See:

RestoreAgentRequest

The request message for Agents.RestoreAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to restore is associated with. Format: projects/<Project ID>.

agentUri string

The URI to a Google Cloud Storage file containing the agent to restore. Note: The URI must start with "gs://".

agentContent Buffer

Zip compressed raw byte content for agent.

Source:
See:

SearchAgentsRequest

The request message for Agents.SearchAgents.

Properties:
Name Type Description
parent string

Required. The project to list agents from. Format: projects/<Project ID or '-'>.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

The next_page_token value returned from a previous list request.

Source:
See:

SearchAgentsResponse

The response message for Agents.SearchAgents.

Properties:
Name Type Description
agents Array.<Object>

The list of agents. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as Agent

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

SelectItemInfo

Additional info about the select item for when it is triggered in a dialog.

Properties:
Name Type Description
key string

Required. A unique key that will be sent back to the agent if this response is given.

synonyms Array.<string>

Optional. A list of synonyms that can also be used to trigger this item in dialog.

Source:
See:

Sentiment

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

Properties:
Name Type Description
score number

Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).

magnitude number

A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).

Source:
See:

SentimentAnalysisRequestConfig

Configures the types of sentiment analysis to perform.

Properties:
Name Type Description
analyzeQueryTextSentiment boolean

Optional. Instructs the service to perform sentiment analysis on query_text. If not provided, sentiment analysis is not performed on query_text.

Source:
See:
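
Sentiment analysis is requested per query. A sketch of QueryParameters carrying this config, so that the response's queryResult includes a sentimentAnalysisResult:

```javascript
// Sketch: queryParams enabling sentiment analysis on query_text.
const queryParams = {
  sentimentAnalysisRequestConfig: {
    analyzeQueryTextSentiment: true,
  },
};
```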

SentimentAnalysisResult

The result of sentiment analysis as configured by sentiment_analysis_request_config.

Properties:
Name Type Description
queryTextSentiment Object

The sentiment analysis result for query_text.

This object should have the same structure as Sentiment

Source:
See:

SessionEntityType

Represents a session entity type.

Extends or replaces a developer entity type at the user session level (we refer to the entity types defined at the agent level as "developer entity types").

Note: session entity types apply to all queries, regardless of the language.

Properties:
Name Type Description
name string

Required. The unique identifier of this session entity type. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>.

<Entity Type Display Name> must be the display name of an existing entity type in the same agent that will be overridden or supplemented.

entityOverrideMode number

Required. Indicates whether the additional data should override or supplement the developer entity type definition.

The number should be among the values of EntityOverrideMode

entities Array.<Object>

Required. The collection of entities associated with this session entity type.

This object should have the same structure as Entity

Source:
See:
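
A sketch of a SessionEntityType that supplements a hypothetical agent-level "fruit" entity type for one session (project and session IDs are placeholders; in the Node.js client, enum values may be passed by name):

```javascript
// Hypothetical identifiers for illustration.
const projectId = 'my-project-id';
const sessionId = 'abc-123';

// Supplements the agent-level "fruit" entity type for this session only.
const sessionEntityType = {
  name: `projects/${projectId}/agent/sessions/${sessionId}/entityTypes/fruit`,
  entityOverrideMode: 'ENTITY_OVERRIDE_MODE_SUPPLEMENT',
  entities: [
    { value: 'dragon fruit', synonyms: ['dragon fruit', 'pitaya'] },
  ],
};
```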

SetAgentRequest

The request message for Agents.SetAgent.

Properties:
Name Type Description
agent Object

Required. The agent to update.

This object should have the same structure as Agent

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

Source:
See:

SimpleResponse

The simple response message containing speech or text.

Properties:
Name Type Description
textToSpeech string

One of text_to_speech or ssml must be provided. The plain text of the speech output. Mutually exclusive with ssml.

ssml string

One of text_to_speech or ssml must be provided. Structured spoken response to the user in the SSML format. Mutually exclusive with text_to_speech.

displayText string

Optional. The text to display.

Source:
See:

SimpleResponses

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

Properties:
Name Type Description
simpleResponses Array.<Object>

Required. The list of simple responses.

This object should have the same structure as SimpleResponse

Source:
See:
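
A sketch of a SimpleResponses message carrying exactly one SimpleResponse, as this message type requires in fulfillment messages:

```javascript
// textToSpeech and ssml are mutually exclusive; only one is set here.
const simpleResponses = {
  simpleResponses: [
    {
      textToSpeech: 'Howdy! How can I help?',
      displayText: 'Howdy! How can I help you today?',
    },
  ],
};
```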

StreamingDetectIntentRequest

The top-level message sent by the client to the StreamingDetectIntent method.

Multiple request messages should be sent in order:

  1. The first message must contain StreamingDetectIntentRequest.session and StreamingDetectIntentRequest.query_input, plus optionally StreamingDetectIntentRequest.query_params. If the client wants to receive an audio response, it should also contain StreamingDetectIntentRequest.output_audio_config. The message must not contain StreamingDetectIntentRequest.input_audio.

  2. If StreamingDetectIntentRequest.query_input was set to StreamingDetectIntentRequest.query_input.audio_config, all subsequent messages must contain StreamingDetectIntentRequest.input_audio to continue with speech recognition. If you instead decide to detect an intent from text input after you have already started speech recognition, send a message with StreamingDetectIntentRequest.query_input.text.

    However, note that:

    • Dialogflow will bill you for the audio duration so far.
    • Dialogflow discards all Speech recognition results in favor of the input text.
    • Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.

Properties:
Name Type Description
session string

Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>. It's up to the API caller to choose an appropriate Session ID. It can be a random number or some type of user identifier (preferably hashed). The length of the session ID must not exceed 36 characters.

queryParams Object

Optional. The parameters of this query.

This object should have the same structure as QueryParameters

queryInput Object

Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.

This object should have the same structure as QueryInput

singleUtterance boolean

Optional. Deprecated; use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer detects a single spoken utterance in the input audio. Recognition ceases when it detects that the voice in the audio has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.

outputAudioConfig Object

Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.

This object should have the same structure as OutputAudioConfig

inputAudio Buffer

Optional. The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.

Source:
See:
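
The two-phase protocol above can be sketched as the messages a client would write to the stream (the session name and audio bytes are placeholders; the client call itself is omitted):

```javascript
// Hypothetical session name.
const session = 'projects/my-project-id/agent/sessions/abc-123';

// 1. First message: session plus query_input (an audio config), no audio.
const configRequest = {
  session,
  queryInput: {
    audioConfig: {
      audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
  },
};

// 2. Subsequent messages: input_audio only (<= 1 minute in total).
const audioRequest = {
  inputAudio: Buffer.from([0x00, 0x01]), // placeholder bytes
};
```

With the Node.js client these would typically be written, in order, to the duplex stream returned by the Sessions client's streamingDetectIntent method, followed by ending the stream to half-close it.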

StreamingDetectIntentResponse

The top-level message returned from the StreamingDetectIntent method.

Multiple response messages can be returned in order:

  1. If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.

  2. The next message contains response_id, query_result and optionally webhook_status if a WebHook was called.

Properties:
Name Type Description
responseId string

The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.

recognitionResult Object

The result of speech recognition.

This object should have the same structure as StreamingRecognitionResult

queryResult Object

The result of the conversational query or event processing.

This object should have the same structure as QueryResult

webhookStatus Object

Specifies the status of the webhook request.

This object should have the same structure as Status

outputAudio Buffer

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

outputAudioConfig Object

The config used by the speech synthesizer to generate the output audio.

This object should have the same structure as OutputAudioConfig

Source:
See:
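
The ordering above (recognition results first, then the query result) suggests dispatching on which field is populated. A minimal sketch:

```javascript
// Sketch: classify each streaming response message.
// Interim/final transcripts arrive first, then the query result.
function handleStreamingResponse(response) {
  if (response.recognitionResult) {
    return `transcript: ${response.recognitionResult.transcript}`;
  }
  if (response.queryResult) {
    return `intent: ${response.queryResult.intent.displayName}`;
  }
  return 'empty';
}
```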

StreamingRecognitionResult

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.

Example:

  1. transcript: "tube"

  2. transcript: "to be a"

  3. transcript: "to be"

  4. transcript: "to be or not to be" is_final: true

  5. transcript: " that's"

  6. transcript: " that is"

  7. message_type: END_OF_SINGLE_UTTERANCE

  8. transcript: " that is the question" is_final: true

Only two of the responses contain final results (#4 and #8 indicated by is_final: true). Concatenating these generates the full transcript: "to be or not to be that is the question".

In each response we populate:

  • for TRANSCRIPT: transcript and possibly is_final.

  • for END_OF_SINGLE_UTTERANCE: only message_type.

Properties:
Name Type Description
messageType number

Type of the result message.

The number should be among the values of MessageType

transcript string

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

isFinal boolean

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.

confidence number

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.

Source:
See:
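
The worked example above (final results #4 and #8) amounts to keeping only final hypotheses and concatenating their transcripts. A minimal sketch:

```javascript
// Keep only final hypotheses and concatenate them in order.
function fullTranscript(results) {
  return results
    .filter((r) => r.isFinal)
    .map((r) => r.transcript)
    .join('');
}
```

Applied to the example's results, this yields "to be or not to be that is the question".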

Suggestion

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Properties:
Name Type Description
title string

Required. The text shown in the suggestion chip.

Source:
See:

Suggestions

The collection of suggestions.

Properties:
Name Type Description
suggestions Array.<Object>

Required. The list of suggested replies.

This object should have the same structure as Suggestion

Source:
See:

SynthesizeSpeechConfig

Configuration of how speech should be synthesized.

Properties:
Name Type Description
speakingRate number

Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any value outside [0.25, 4.0] will return an error.

pitch number

Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.

volumeGainDb number

Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not to exceed +10 (dB) as there's usually no effective increase in loudness for any value greater than that.

effectsProfileId Array.<string>

Optional. An identifier which selects 'audio effects' profiles that are applied to the synthesized speech audio (post-synthesis). Effects are applied on top of each other in the order they are given.

voice Object

Optional. The desired voice of the synthesized audio.

This object should have the same structure as VoiceSelectionParams

Source:
See:
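
A sketch of an OutputAudioConfig embedding a SynthesizeSpeechConfig, with every value inside the documented ranges (enum values shown by name, as the Node.js client typically accepts):

```javascript
const outputAudioConfig = {
  audioEncoding: 'OUTPUT_AUDIO_ENCODING_LINEAR_16',
  synthesizeSpeechConfig: {
    speakingRate: 1.25, // [0.25, 4.0]
    pitch: -2.0,        // [-20.0, 20.0]
    volumeGainDb: 0.0,  // [-96.0, 16.0]; 0.0 plays at native amplitude
    voice: { ssmlGender: 'SSML_VOICE_GENDER_FEMALE' },
  },
};
```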

TableCard

Table card for Actions on Google.

Properties:
Name Type Description
title string

Required. Title of the card.

subtitle string

Optional. Subtitle to the title.

image Object

Optional. Image which should be displayed on the card.

This object should have the same structure as Image

columnProperties Array.<Object>

Optional. Display properties for the columns in this table.

This object should have the same structure as ColumnProperties

rows Array.<Object>

Optional. Rows in this table of data.

This object should have the same structure as TableCardRow

buttons Array.<Object>

Optional. List of buttons for the card.

This object should have the same structure as Button

Source:
See:
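
A sketch of a two-column TableCard; each row's cells align positionally with columnProperties (all values are illustrative):

```javascript
const tableCard = {
  title: 'Order summary',
  columnProperties: [
    { header: 'Item' },
    { header: 'Qty' },
  ],
  rows: [
    {
      cells: [{ text: 'Espresso' }, { text: '2' }],
      dividerAfter: true, // draw a divider below this row
    },
    {
      cells: [{ text: 'Croissant' }, { text: '1' }],
    },
  ],
};
```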

TableCardCell

Cell of TableCardRow.

Properties:
Name Type Description
text string

Required. Text in this cell.

Source:
See:

TableCardRow

Row of TableCard.

Properties:
Name Type Description
cells Array.<Object>

Optional. List of cells that make up this row.

This object should have the same structure as TableCardCell

dividerAfter boolean

Optional. Whether to add a visual divider after this row.

Source:
See:

Text

The text response message.

Properties:
Name Type Description
text Array.<string>

Optional. The collection of the agent's responses.

Source:
See:

TextInput

Represents the natural language text to be processed.

Properties:
Name Type Description
text string

Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters.

languageCode string

Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

Source:
See:
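
A sketch of a QueryInput wrapping a TextInput; the text must be UTF-8 and at most 256 characters:

```javascript
const queryInput = {
  text: {
    text: 'I want to book a flight',
    languageCode: 'en-US',
  },
};
```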

TrainAgentRequest

The request message for Agents.TrainAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to train is associated with. Format: projects/<Project ID>.

Source:
See:

TrainingPhrase

Represents an example that the agent is trained on.

Properties:
Name Type Description
name string

Output only. The unique identifier of this training phrase.

type number

Required. The type of the training phrase.

The number should be among the values of Type

parts Array.<Object>

Required. The ordered list of training phrase parts. The parts are concatenated in order to form the training phrase.

Note: The API does not automatically annotate training phrases like the Dialogflow Console does.

Note: Do not forget to include whitespace at part boundaries, so the training phrase is well formatted when the parts are concatenated.

If the training phrase does not need to be annotated with parameters, you just need a single part with only the Part.text field set.

If you want to annotate the training phrase, you must create multiple parts, where the fields of each part are populated in one of two ways:

  • Part.text is set to a part of the phrase that has no parameters.
  • Part.text is set to a part of the phrase that you want to annotate, and the entity_type, alias, and user_defined fields are all set.

This object should have the same structure as Part

timesAddedCount number

Optional. Indicates how many times this example was added to the intent. Each time a developer adds an existing sample by editing an intent or training, this counter is increased.

Source:
See:
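
The part rules above can be sketched as an annotated training phrase. Note the trailing space in the first part, per the whitespace note; the entity type and alias shown are illustrative values:

```javascript
const trainingPhrase = {
  type: 'EXAMPLE',
  parts: [
    { text: 'book a flight to ' }, // unannotated part; note trailing space
    {
      text: 'Paris',               // annotated part
      entityType: '@sys.geo-city',
      alias: 'city',
      userDefined: true,
    },
  ],
};

// Concatenating the parts in order yields the full phrase.
const phrase = trainingPhrase.parts.map((p) => p.text).join('');
```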

UpdateContextRequest

The request message for Contexts.UpdateContext.

Properties:
Name Type Description
context Object

Required. The context to update.

This object should have the same structure as Context

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

Source:
See:

UpdateEntityTypeRequest

The request message for EntityTypes.UpdateEntityType.

Properties:
Name Type Description
entityType Object

Required. The entity type to update.

This object should have the same structure as EntityType

languageCode string

Optional. The language of entity synonyms defined in entity_type. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

Source:
See:

UpdateIntentRequest

The request message for Intents.UpdateIntent.

Properties:
Name Type Description
intent Object

Required. The intent to update.

This object should have the same structure as Intent

languageCode string

Optional. The language of training phrases, parameters and rich messages defined in intent. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

Source:
See:
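
A sketch of an UpdateIntentRequest that changes only an intent's display name; FieldMask paths use snake_case field names (the intent name is a hypothetical placeholder):

```javascript
const updateIntentRequest = {
  intent: {
    name: 'projects/my-project-id/agent/intents/intent-uuid', // hypothetical
    displayName: 'book.flight',
  },
  updateMask: { paths: ['display_name'] }, // only this field is updated
  intentView: 'INTENT_VIEW_FULL',
};
```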

UpdateSessionEntityTypeRequest

The request message for SessionEntityTypes.UpdateSessionEntityType.

Properties:
Name Type Description
sessionEntityType Object

Required. The entity type to update. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>.

This object should have the same structure as SessionEntityType

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

Source:
See:

VoiceSelectionParams

Description of which voice to use for speech synthesis.

Properties:
Name Type Description
name string

Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and ssml_gender.

ssmlGender number

Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.

The number should be among the values of SsmlVoiceGender

Source:
See: