v2beta1

google.cloud.dialogflow.v2beta1

Members

(static) ApiVersion :number

API version for the agent.

Properties:
Name Type Description
API_VERSION_UNSPECIFIED number

Not specified.

API_VERSION_V1 number

Legacy V1 API.

API_VERSION_V2 number

V2 API.

API_VERSION_V2_BETA_1 number

V2beta1 API.

(static, constant) AudioEncoding :number

Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.

Properties:
Name Type Description
AUDIO_ENCODING_UNSPECIFIED number

Not specified.

AUDIO_ENCODING_LINEAR_16 number

Uncompressed 16-bit signed little-endian samples (Linear PCM).

AUDIO_ENCODING_FLAC number

FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in STREAMINFO are supported.

AUDIO_ENCODING_MULAW number

8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.

AUDIO_ENCODING_AMR number

Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000.

AUDIO_ENCODING_AMR_WB number

Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000.

AUDIO_ENCODING_OGG_OPUS number

Opus encoded audio frames in Ogg container (OggOpus). sample_rate_hertz must be 16000.

AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE number

Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sample_rate_hertz must be 16000.

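As an illustration, the encodings above are set on the audio config of a detect-intent request. The sketch below uses the string enum names accepted by the JSON mapping; the session path and the (empty) audio payload are placeholders, not real values.

```javascript
// Sketch of a detect-intent request body using AUDIO_ENCODING_LINEAR_16.
const detectIntentRequest = {
  session: 'projects/my-project/agent/sessions/my-session-id',
  queryInput: {
    audioConfig: {
      audioEncoding: 'AUDIO_ENCODING_LINEAR_16', // uncompressed 16-bit PCM
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
  },
  // Raw audio bytes. Note that with AMR or AMR_WB, sampleRateHertz above
  // would have to be 8000 or 16000 respectively, per the table above.
  inputAudio: Buffer.from([]),
};
```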
(static) AutoExpansionMode :number

Represents different entity type expansion modes. Automated expansion allows an agent to recognize values that have not been explicitly listed in the entity (for example, new kinds of shopping list items).

Properties:
Name Type Description
AUTO_EXPANSION_MODE_UNSPECIFIED number

Auto expansion disabled for the entity.

AUTO_EXPANSION_MODE_DEFAULT number

Allows an agent to recognize values that have not been explicitly listed in the entity.

(static) CardOrientation :number

Orientation of the card.

Properties:
Name Type Description
CARD_ORIENTATION_UNSPECIFIED number

Not specified.

HORIZONTAL number

Horizontal layout.

VERTICAL number

Vertical layout.

(static) CardWidth :number

The width of the cards in the carousel.

Properties:
Name Type Description
CARD_WIDTH_UNSPECIFIED number

Not specified.

SMALL number

120 DP. Note that tall media cannot be used.

MEDIUM number

232 DP.

(static) EntityOverrideMode :number

The types of modifications for a session entity type.

Properties:
Name Type Description
ENTITY_OVERRIDE_MODE_UNSPECIFIED number

Not specified. This value should never be used.

ENTITY_OVERRIDE_MODE_OVERRIDE number

The collection of session entities overrides the collection of entities in the corresponding developer entity type.

ENTITY_OVERRIDE_MODE_SUPPLEMENT number

The collection of session entities extends the collection of entities in the corresponding developer entity type.

Note: Even in this override mode calls to ListSessionEntityTypes, GetSessionEntityType, CreateSessionEntityType and UpdateSessionEntityType only return the additional entities added in this session entity type. If you want to get the supplemented list, please call EntityTypes.GetEntityType on the developer entity type and merge.

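For example, a session entity type that supplements a developer entity type might look like the sketch below. The entity type name (`fruit`) and the added entries are hypothetical placeholders.

```javascript
// Sketch of a session entity type that supplements the developer entity
// type "fruit" with extra entries visible only in this session.
const sessionEntityType = {
  name: 'projects/my-project/agent/sessions/my-session-id/entityTypes/fruit',
  entityOverrideMode: 'ENTITY_OVERRIDE_MODE_SUPPLEMENT',
  entities: [
    { value: 'dragon fruit', synonyms: ['dragon fruit', 'pitaya'] },
  ],
};
```

Per the note above, Get/List calls on this session entity type would return only the added entries, not the merged collection.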
(static) Height :number

Media height.

Properties:
Name Type Description
HEIGHT_UNSPECIFIED number

Not specified.

SHORT number

112 DP.

MEDIUM number

168 DP.

TALL number

264 DP. Not available for rich card carousels when the card width is set to small.

(static) HorizontalAlignment :number

Text alignments within a cell.

Properties:
Name Type Description
HORIZONTAL_ALIGNMENT_UNSPECIFIED number

Text is aligned to the leading edge of the column.

LEADING number

Text is aligned to the leading edge of the column.

CENTER number

Text is centered in the column.

TRAILING number

Text is aligned to the trailing edge of the column.

(static) ImageDisplayOptions :number

Image display options for Actions on Google. This should be used when the image's aspect ratio does not match the image container's aspect ratio.

Properties:
Name Type Description
IMAGE_DISPLAY_OPTIONS_UNSPECIFIED number

Fill the gaps between the image and the image container with gray bars.

GRAY number

Fill the gaps between the image and the image container with gray bars.

WHITE number

Fill the gaps between the image and the image container with white bars.

CROPPED number

Image is scaled such that the image width and height match or exceed the container dimensions. This may crop the top and bottom of the image if the scaled image height is greater than the container height, or crop the left and right of the image if the scaled image width is greater than the container width. This is similar to "Zoom Mode" on a widescreen TV when playing a 4:3 video.

BLURRED_BACKGROUND number

Pad the gaps between image and image frame with a blurred copy of the same image.

(static, constant) IntentView :number

Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.

Properties:
Name Type Description
INTENT_VIEW_UNSPECIFIED number

Training phrases field is not populated in the response.

INTENT_VIEW_FULL number

All fields are populated.

(static) Kind :number

Represents kinds of entities.

Properties:
Name Type Description
KIND_UNSPECIFIED number

Not specified. This value should never be used.

KIND_MAP number

Map entity types allow mapping of a group of synonyms to a canonical value.

KIND_LIST number

List entity types contain a set of entries that do not map to canonical values. However, list entity types can contain references to other entity types (with or without aliases).

KIND_REGEXP number

Regexp entity types allow specifying regular expressions in entry values.

(static) KnowledgeType :number

The knowledge type of document content.

Properties:
Name Type Description
KNOWLEDGE_TYPE_UNSPECIFIED number

The type is unspecified or arbitrary.

FAQ number

The document content contains question and answer pairs as either HTML or CSV. Typical FAQ HTML formats are parsed accurately, but unusual formats may fail to be parsed.

CSV must have questions in the first column and answers in the second, with no header. Because of this explicit format, they are always parsed accurately.

EXTRACTIVE_QA number

Documents for which unstructured text is extracted and used for question answering.

(static) MatchConfidenceLevel :number

Represents the system's confidence that this knowledge answer is a good match for this conversational query.

Properties:
Name Type Description
MATCH_CONFIDENCE_LEVEL_UNSPECIFIED number

Not specified.

LOW number

Indicates that the confidence is low.

MEDIUM number

Indicates that the confidence is medium.

HIGH number

Indicates that the confidence is high.

(static) MatchMode :number

Match mode determines how intents are detected from user queries.

Properties:
Name Type Description
MATCH_MODE_UNSPECIFIED number

Not specified.

MATCH_MODE_HYBRID number

Best for agents with a small number of examples in intents and/or wide use of template syntax and composite entities.

MATCH_MODE_ML_ONLY number

Can be used for agents with a large number of examples in intents, especially the ones using @sys.any or very large developer entities.

(static) MessageType :number

Type of the response message.

Properties:
Name Type Description
MESSAGE_TYPE_UNSPECIFIED number

Not specified. Should never be used.

TRANSCRIPT number

Message contains a (possibly partial) transcript.

END_OF_SINGLE_UTTERANCE number

Event indicates that the server has detected the end of the user's speech utterance and expects no additional speech. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This message is only sent if single_utterance was set to true, and is not used otherwise.

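In a streaming client, this typically translates into stopping audio capture as soon as END_OF_SINGLE_UTTERANCE arrives. A minimal sketch of that dispatch is shown below; `stopAudio` stands in for whatever halts the capture pipeline in a real client.

```javascript
// Sketch: react to streaming recognition messages. Stops sending audio
// once the server reports the end of the utterance, and returns the
// (possibly partial) transcript for TRANSCRIPT messages.
function handleStreamingResponse(response, stopAudio) {
  const result = response.recognitionResult;
  if (!result) return null;
  switch (result.messageType) {
    case 'END_OF_SINGLE_UTTERANCE':
      stopAudio(); // half-close and wait for any remaining results
      return 'stopped';
    case 'TRANSCRIPT':
      return result.transcript;
    default:
      return null;
  }
}
```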
(static, constant) OutputAudioEncoding :number

Audio encoding of the output audio format in Text-To-Speech.

Properties:
Name Type Description
OUTPUT_AUDIO_ENCODING_UNSPECIFIED number

Not specified.

OUTPUT_AUDIO_ENCODING_LINEAR_16 number

Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header.

OUTPUT_AUDIO_ENCODING_MP3 number

MP3 audio at 32kbps.

OUTPUT_AUDIO_ENCODING_OGG_OPUS number

Opus encoded audio wrapped in an Ogg container. The result is a file that can be played natively on Android and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate.

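These values go in the output audio config of a detect-intent request when Text-to-Speech output is wanted. A hedged sketch, with a plausible placeholder sample rate:

```javascript
// Sketch: request synthesized speech as Ogg/Opus, which plays natively
// on Android and in most browsers per the description above.
const outputAudioConfig = {
  audioEncoding: 'OUTPUT_AUDIO_ENCODING_OGG_OPUS',
  sampleRateHertz: 24000, // placeholder; pick a rate the player supports
  synthesizeSpeechConfig: {
    voice: { ssmlGender: 'SSML_VOICE_GENDER_FEMALE' },
  },
};
```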
(static) Platform :number

Represents different platforms that a rich message can be intended for.

Properties:
Name Type Description
PLATFORM_UNSPECIFIED number

Not specified.

FACEBOOK number

Facebook.

SLACK number

Slack.

TELEGRAM number

Telegram.

KIK number

Kik.

SKYPE number

Skype.

LINE number

Line.

VIBER number

Viber.

ACTIONS_ON_GOOGLE number

Actions on Google. When using Actions on Google, you can choose one of the specific Intent.Message types that mention support for Actions on Google, or you can use the advanced Intent.Message.payload field. The payload field provides access to AoG features not available in the specific message types. If using the Intent.Message.payload field, it should have a structure similar to the JSON message shown here. For more information, see Actions on Google Webhook Format.

{
  "expectUserResponse": true,
  "isSsml": false,
  "noInputPrompts": [],
  "richResponse": {
    "items": [
      {
        "simpleResponse": {
          "displayText": "hi",
          "textToSpeech": "hello"
        }
      }
    ],
    "suggestions": [
      {
        "title": "Say this"
      },
      {
        "title": "or this"
      }
    ]
  },
  "systemIntent": {
    "data": {
      "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
      "listSelect": {
        "items": [
          {
            "optionInfo": {
              "key": "key1",
              "synonyms": [
                "key one"
              ]
            },
            "title": "must not be empty, but unique"
          },
          {
            "optionInfo": {
              "key": "key2",
              "synonyms": [
                "key two"
              ]
            },
            "title": "must not be empty, but unique"
          }
        ]
      }
    },
    "intent": "actions.intent.OPTION"
  }
}
TELEPHONY number

Telephony Gateway.

GOOGLE_HANGOUTS number

Google Hangouts.

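A platform-specific payload such as the Actions on Google example above is attached to an intent message roughly as follows. This is a sketch (the payload is trimmed to the simple-response portion of the example):

```javascript
// Sketch: an intent message targeted at Actions on Google via the
// advanced payload field, mirroring the example above.
const message = {
  platform: 'ACTIONS_ON_GOOGLE',
  payload: {
    expectUserResponse: true,
    richResponse: {
      items: [
        { simpleResponse: { textToSpeech: 'hello', displayText: 'hi' } },
      ],
    },
  },
};
```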
(static) ResponseMediaType :number

Format of response media type.

Properties:
Name Type Description
RESPONSE_MEDIA_TYPE_UNSPECIFIED number

Unspecified.

AUDIO number

Response media type is audio.

(static) Severity :number

Represents a level of severity.

Properties:
Name Type Description
SEVERITY_UNSPECIFIED number

Not specified. This value should never be used.

INFO number

The agent doesn't follow Dialogflow best practices.

WARNING number

The agent may not behave as expected.

ERROR number

The agent may experience partial failures.

CRITICAL number

The agent may completely fail.

(static, constant) SpeechModelVariant :number

Variant of the specified Speech model to use.

See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.

Properties:
Name Type Description
SPEECH_MODEL_VARIANT_UNSPECIFIED number

No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE.

USE_BEST_AVAILABLE number

Use the best available variant of the Speech model that the caller is eligible for.

Please see the Dialogflow docs for how to make your project eligible for enhanced models.

USE_STANDARD number

Use standard model variant even if an enhanced model is available. See the Cloud Speech documentation for details about enhanced models.

USE_ENHANCED number

Use an enhanced model variant:

  • If an enhanced variant does not exist for the given model and request language, Dialogflow falls back to the standard variant.

    The Cloud Speech documentation describes which models have enhanced variants.

  • If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the Dialogflow docs for how to make your project eligible.

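For instance, a telephony client might request the enhanced variant of the "phone_call" model. A sketch of the audio config (the model name follows the Cloud Speech convention mentioned above; the fallback behavior is as described for USE_ENHANCED):

```javascript
// Sketch: ask for the enhanced "phone_call" model variant. If no enhanced
// variant exists for this model and language, Dialogflow falls back to
// the standard variant.
const audioConfig = {
  audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
  sampleRateHertz: 8000, // typical for telephony audio
  languageCode: 'en-US',
  model: 'phone_call',
  modelVariant: 'USE_ENHANCED',
};
```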
(static, constant) SsmlVoiceGender :number

Gender of the voice as described in SSML voice element.

Properties:
Name Type Description
SSML_VOICE_GENDER_UNSPECIFIED number

An unspecified gender, which means that the client doesn't care which gender the selected voice will have.

SSML_VOICE_GENDER_MALE number

A male voice.

SSML_VOICE_GENDER_FEMALE number

A female voice.

SSML_VOICE_GENDER_NEUTRAL number

A gender-neutral voice.

(static) State :number

States of the operation.

Properties:
Name Type Description
STATE_UNSPECIFIED number

State unspecified.

PENDING number

The operation has been created.

RUNNING number

The operation is currently running.

DONE number

The operation is done, either cancelled or completed.

(static) ThumbnailImageAlignment :number

Thumbnail preview alignment for standalone cards with horizontal layout.

Properties:
Name Type Description
THUMBNAIL_IMAGE_ALIGNMENT_UNSPECIFIED number

Not specified.

LEFT number

Thumbnail preview is left-aligned.

RIGHT number

Thumbnail preview is right-aligned.

(static) Tier :number

Represents the agent tier.

Properties:
Name Type Description
TIER_UNSPECIFIED number

Not specified. This value should never be used.

TIER_STANDARD number

Standard tier.

TIER_ENTERPRISE number

Enterprise tier (Essentials).

TIER_ENTERPRISE_PLUS number

Enterprise tier (Plus).

(static) Type :number

Represents different types of training phrases.

Properties:
Name Type Description
TYPE_UNSPECIFIED number

Not specified. This value should never be used.

EXAMPLE number

Examples do not contain @-prefixed entity type names, but example parts can be annotated with entity types.

TEMPLATE number

Templates are not annotated with entity types, but they can contain @-prefixed entity type names as substrings.

(static) UrlTypeHint :number

Type of the URI.

Properties:
Name Type Description
URL_TYPE_HINT_UNSPECIFIED number

Unspecified.

AMP_ACTION number

URL would be an AMP action.

AMP_CONTENT number

URL that points directly to AMP content, or to a canonical URL which refers to AMP content via <link rel="amphtml">.

(static) WebhookState :number

Represents the different states that webhooks can be in.

Properties:
Name Type Description
WEBHOOK_STATE_UNSPECIFIED number

Webhook is disabled in the agent and in the intent.

WEBHOOK_STATE_ENABLED number

Webhook is enabled in the agent and in the intent.

WEBHOOK_STATE_ENABLED_FOR_SLOT_FILLING number

Webhook is enabled in the agent and in the intent. Also, each slot filling prompt is forwarded to the webhook.

Type Definitions

Agent

Represents a conversational agent.

Properties:
Name Type Description
parent string

Required. The project of this agent. Format: projects/<Project ID>.

displayName string

Required. The name of this agent.

defaultLanguageCode string

Required. The default language of the agent as a language tag. See Language Support for a list of the currently supported language codes. This field cannot be set by the Update method.

supportedLanguageCodes Array.<string>

Optional. The list of all languages supported by this agent (except for the default_language_code).

timeZone string

Required. The time zone of this agent from the time zone database, e.g., America/New_York, Europe/Paris.

description string

Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.

avatarUri string

Optional. The URI of the agent's avatar. Avatars are used throughout the Dialogflow console and in the self-hosted Web Demo integration.

enableLogging boolean

Optional. Determines whether this agent should log conversation queries.

matchMode number

Optional. Determines how intents are detected from user queries.

The number should be among the values of MatchMode

classificationThreshold number

Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.

apiVersion number

Optional. API version displayed in Dialogflow console. If not specified, V2 API is assumed. Clients are free to query different service endpoints for different API versions. However, bot connectors and webhook calls will follow the specified API version.

The number should be among the values of ApiVersion

tier number

Optional. The agent tier. If not specified, TIER_STANDARD is assumed.

The number should be among the values of Tier

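Putting the fields above together, an Agent resource might look like the sketch below. Project and display names are placeholders, and the enum fields are shown with their string names as in the JSON mapping (the client also accepts the numeric values listed in this reference).

```javascript
// Sketch of an Agent resource as described above.
const agent = {
  parent: 'projects/my-project',
  displayName: 'RoomReservation',
  defaultLanguageCode: 'en',
  timeZone: 'America/New_York',
  matchMode: 'MATCH_MODE_ML_ONLY',
  // Scores below this threshold trigger a fallback intent (or no intent
  // if none is defined); 0.0 means "use the default of 0.3".
  classificationThreshold: 0.4,
  apiVersion: 'API_VERSION_V2_BETA_1',
  tier: 'TIER_STANDARD',
};
```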
Answer

An answer from Knowledge Connector.

Properties:
Name Type Description
source string

Indicates which Knowledge Document this answer was extracted from. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

faqQuestion string

The corresponding FAQ question if the answer was extracted from a FAQ Document, empty otherwise.

answer string

The piece of text from the source knowledge base document that answers this conversational query.

matchConfidenceLevel number

The system's confidence level that this knowledge answer is a good match for this conversational query. NOTE: The confidence level for a given <query, answer> pair may change without notice, as it depends on models that are constantly being improved. However, it will change less frequently than the confidence score below, and should be preferred for referencing the quality of an answer.

The number should be among the values of MatchConfidenceLevel

matchConfidence number

The system's confidence score that this Knowledge answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). Note: The confidence score is likely to vary somewhat (possibly even for identical requests), as the underlying model is under constant improvement. It may be deprecated in the future. We recommend using match_confidence_level which should be generally more stable.

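Since the field descriptions above recommend the confidence level over the raw score, a client filtering knowledge answers might do so by level, as in this sketch:

```javascript
// Sketch: keep only answers the system marks as HIGH confidence, using
// matchConfidenceLevel (more stable) rather than matchConfidence.
function highConfidenceAnswers(answers) {
  return answers.filter((a) => a.matchConfidenceLevel === 'HIGH');
}
```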
BasicCard

The basic card message. Useful for displaying information.

Properties:
Name Type Description
title string

Optional. The title of the card.

subtitle string

Optional. The subtitle of the card.

formattedText string

Required, unless image is present. The body text of the card.

image Object

Optional. The image for the card.

This object should have the same structure as Image

buttons Array.<Object>

Optional. The collection of card buttons.

This object should have the same structure as Button

BatchCreateEntitiesRequest

The request message for EntityTypes.BatchCreateEntities.

Properties:
Name Type Description
parent string

Required. The name of the entity type to create entities in. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

entities Array.<Object>

Required. The entities to create.

This object should have the same structure as Entity

languageCode string

Optional. The language of entity synonyms defined in entities. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

BatchDeleteEntitiesRequest

The request message for EntityTypes.BatchDeleteEntities.

Properties:
Name Type Description
parent string

Required. The name of the entity type to delete entries for. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

entityValues Array.<string>

Required. The canonical values of the entities to delete. Note that these are not fully-qualified names, i.e. they don't start with projects/<Project ID>.

languageCode string

Optional. The language of entity synonyms defined in entities. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

BatchDeleteEntityTypesRequest

The request message for EntityTypes.BatchDeleteEntityTypes.

Properties:
Name Type Description
parent string

Required. The name of the agent to delete all entity types for. Format: projects/<Project ID>/agent.

entityTypeNames Array.<string>

Required. The names of the entity types to delete. All names must point to the same agent as parent.

BatchDeleteIntentsRequest

The request message for Intents.BatchDeleteIntents.

Properties:
Name Type Description
parent string

Required. The name of the agent to delete intents from. Format: projects/<Project ID>/agent.

intents Array.<Object>

Required. The collection of intents to delete. Only the intent name must be filled in.

This object should have the same structure as Intent

BatchUpdateEntitiesRequest

The request message for EntityTypes.BatchUpdateEntities.

Properties:
Name Type Description
parent string

Required. The name of the entity type to update or create entities in. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

entities Array.<Object>

Required. The entities to update or create.

This object should have the same structure as Entity

languageCode string

Optional. The language of entity synonyms defined in entities. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

BatchUpdateEntityTypesRequest

The request message for EntityTypes.BatchUpdateEntityTypes.

Properties:
Name Type Description
parent string

Required. The name of the agent to update or create entity types in. Format: projects/<Project ID>/agent.

entityTypeBatchUri string

The URI to a Google Cloud Storage file containing entity types to update or create. The file format can either be a serialized proto (of EntityBatch type) or a JSON object. Note: The URI must start with "gs://".

entityTypeBatchInline Object

The collection of entity types to update or create.

This object should have the same structure as EntityTypeBatch

languageCode string

Optional. The language of entity synonyms defined in entity_types. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

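Exactly one of entityTypeBatchUri and entityTypeBatchInline should be set, since they are alternative sources for the same batch. A sketch using the inline form (entity type and entries are placeholders):

```javascript
// Sketch of a BatchUpdateEntityTypes request with an inline batch.
const batchUpdateRequest = {
  parent: 'projects/my-project/agent',
  entityTypeBatchInline: {
    entityTypes: [
      {
        displayName: 'size',
        kind: 'KIND_MAP', // synonyms map to a canonical value
        entities: [{ value: 'small', synonyms: ['small', 'little'] }],
      },
    ],
  },
  languageCode: 'en',
  updateMask: { paths: ['entities'] },
};
```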
BatchUpdateEntityTypesResponse

The response message for EntityTypes.BatchUpdateEntityTypes.

Properties:
Name Type Description
entityTypes Array.<Object>

The collection of updated or created entity types.

This object should have the same structure as EntityType

BatchUpdateIntentsRequest

The request message for Intents.BatchUpdateIntents.

Properties:
Name Type Description
parent string

Required. The name of the agent to update or create intents in. Format: projects/<Project ID>/agent.

intentBatchUri string

The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://".

intentBatchInline Object

The collection of intents to update or create.

This object should have the same structure as IntentBatch

languageCode string

Optional. The language of training phrases, parameters and rich messages defined in intents. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

BatchUpdateIntentsResponse

The response message for Intents.BatchUpdateIntents.

Properties:
Name Type Description
intents Array.<Object>

The collection of updated or created intents.

This object should have the same structure as Intent

BrowseCarouselCard

Browse Carousel Card for Actions on Google. https://developers.google.com/actions/assistant/responses#browsing_carousel

Properties:
Name Type Description
items Array.<Object>

Required. List of items in the Browse Carousel Card. Minimum of two items, maximum of ten.

This object should have the same structure as BrowseCarouselCardItem

imageDisplayOptions number

Optional. Settings for displaying the image. Applies to every image in items.

The number should be among the values of ImageDisplayOptions

BrowseCarouselCardItem

Browsing carousel tile.

Properties:
Name Type Description
openUriAction Object

Required. Action to present to the user.

This object should have the same structure as OpenUrlAction

title string

Required. Title of the carousel item. Maximum of two lines of text.

description string

Optional. Description of the carousel item. Maximum of four lines of text.

image Object

Optional. Hero image for the carousel item.

This object should have the same structure as Image

footer string

Optional. Text that appears at the bottom of the Browse Carousel Card. Maximum of one line of text.

Button

Optional. Contains information about a button.

Properties:
Name Type Description
text string

Optional. The text to show on the button.

postback string

Optional. The text to send back to the Dialogflow API or a URI to open.

Button

The button object that appears at the bottom of a card.

Properties:
Name Type Description
title string

Required. The title of the button.

openUriAction Object

Required. Action to take when a user taps on the button.

This object should have the same structure as OpenUriAction

Card

The card response message.

Properties:
Name Type Description
title string

Optional. The title of the card.

subtitle string

Optional. The subtitle of the card.

imageUri string

Optional. The public URI to an image file for the card.

buttons Array.<Object>

Optional. The collection of card buttons.

This object should have the same structure as Button

CarouselSelect

The card for presenting a carousel of options to select from.

Properties:
Name Type Description
items Array.<Object>

Required. Carousel items.

This object should have the same structure as Item

ColumnProperties

Column properties for TableCard.

Properties:
Name Type Description
header string

Required. Column heading.

horizontalAlignment number

Optional. Defines text alignment for all cells in this column.

The number should be among the values of HorizontalAlignment

Context

Represents a context.

Properties:
Name Type Description
name string

Required. The unique identifier of the context. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>.

The Context ID is always converted to lowercase, may only contain characters in a-zA-Z0-9_-% and may be at most 250 bytes long.

If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

lifespanCount number

Optional. The number of conversational query requests after which the context expires. If set to 0 (the default) the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.

parameters Object

Optional. The collection of parameters associated with this context. Refer to this doc for syntax.

This object should have the same structure as Struct

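A context as described above might be represented like this. The context ID and parameter are placeholders; note that parameters use the protobuf Struct encoding:

```javascript
// Sketch of a context attached to a session. The service lowercases the
// context ID; this context expires after 5 queries or 20 idle minutes.
const context = {
  name: 'projects/my-project/agent/sessions/my-session-id/contexts/room-picked',
  lifespanCount: 5,
  parameters: {
    fields: {
      room: { stringValue: 'conference-a' }, // Struct-encoded parameter
    },
  },
};
```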
CreateContextRequest

The request message for Contexts.CreateContext.

Properties:
Name Type Description
parent string

Required. The session to create a context for. Format: projects/<Project ID>/agent/sessions/<Session ID> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

context Object

Required. The context to create.

This object should have the same structure as Context

CreateDocumentRequest

Request message for Documents.CreateDocument.

Properties:
Name Type Description
parent string

Required. The knowledge base to create a document for. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>.

document Object

Required. The document to create.

This object should have the same structure as Document

CreateEntityTypeRequest

The request message for EntityTypes.CreateEntityType.

Properties:
Name Type Description
parent string

Required. The agent to create an entity type for. Format: projects/<Project ID>/agent.

entityType Object

Required. The entity type to create.

This object should have the same structure as EntityType

languageCode string

Optional. The language of entity synonyms defined in entity_type. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

CreateIntentRequest

The request message for Intents.CreateIntent.

Properties:
Name Type Description
parent string

Required. The agent to create an intent for. Format: projects/<Project ID>/agent.

intent Object

Required. The intent to create.

This object should have the same structure as Intent

languageCode string

Optional. The language of training phrases, parameters and rich messages defined in intent. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

CreateKnowledgeBaseRequest

Request message for KnowledgeBases.CreateKnowledgeBase.

Properties:
Name Type Description
parent string

Required. The project to create a knowledge base for. Format: projects/<Project ID>.

knowledgeBase Object

Required. The knowledge base to create.

This object should have the same structure as KnowledgeBase

Source:
See:

CreateSessionEntityTypeRequest

The request message for SessionEntityTypes.CreateSessionEntityType.

Properties:
Name Type Description
parent string

Required. The session to create a session entity type for. Format: projects/<Project ID>/agent/sessions/<Session ID> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

sessionEntityType Object

Required. The session entity type to create.

This object should have the same structure as SessionEntityType

Source:
See:

DeleteAgentRequest

The request message for Agents.DeleteAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to delete is associated with. Format: projects/<Project ID>.

Source:
See:

DeleteAllContextsRequest

The request message for Contexts.DeleteAllContexts.

Properties:
Name Type Description
parent string

Required. The name of the session to delete all contexts from. Format: projects/<Project ID>/agent/sessions/<Session ID> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Source:
See:

DeleteContextRequest

The request message for Contexts.DeleteContext.

Properties:
Name Type Description
name string

Required. The name of the context to delete. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Source:
See:

DeleteDocumentRequest

Request message for Documents.DeleteDocument.

Properties:
Name Type Description
name string

The name of the document to delete. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

Source:
See:

DeleteEntityTypeRequest

The request message for EntityTypes.DeleteEntityType.

Properties:
Name Type Description
name string

Required. The name of the entity type to delete. Format: projects/<Project ID>/agent/entityTypes/<EntityType ID>.

Source:
See:

DeleteIntentRequest

The request message for Intents.DeleteIntent.

Properties:
Name Type Description
name string

Required. The name of the intent to delete. If this intent has direct or indirect followup intents, we also delete them.

Format: projects/<Project ID>/agent/intents/<Intent ID>.

Source:
See:

DeleteKnowledgeBaseRequest

Request message for KnowledgeBases.DeleteKnowledgeBase.

Properties:
Name Type Description
name string

Required. The name of the knowledge base to delete. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>.

force boolean

Optional. Force deletes the knowledge base. When set to true, any documents in the knowledge base are also deleted.

Source:
See:

DeleteSessionEntityTypeRequest

The request message for SessionEntityTypes.DeleteSessionEntityType.

Properties:
Name Type Description
name string

Required. The name of the entity type to delete. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Source:
See:

DetectIntentRequest

The request to detect a user's intent.

Properties:
Name Type Description
session string

Required. The name of the session this query is sent to. Format: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user. It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters.

queryParams Object

Optional. The parameters of this query.

This object should have the same structure as QueryParameters

queryInput Object

Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.

This object should have the same structure as QueryInput

outputAudioConfig Object

Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.

This object should have the same structure as OutputAudioConfig

inputAudio Buffer

Optional. The natural language speech audio to be processed. This field should be populated if and only if query_input is set to an input audio config. A single request can contain up to 1 minute of speech audio data.

Source:
See:
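The request above can be assembled as a plain object. The following sketch shows a text-based query with the optional fields omitted; the project and session IDs are placeholders, and `sessionsClient` stands in for a v2beta1 SessionsClient instance:

```javascript
// Sketch of a text DetectIntentRequest built from the fields documented above.
// Exactly one queryInput variant is set (here: a text query).
const detectIntentRequest = {
  session: 'projects/my-project/agent/sessions/session-1',
  queryInput: {
    text: {
      text: 'I want to order scallions',
      languageCode: 'en-US',
    },
  },
  // queryParams, outputAudioConfig, and inputAudio are optional and omitted.
};

// With the Node.js client this would be passed to something like:
//   const [response] = await sessionsClient.detectIntent(detectIntentRequest);
```
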

DetectIntentResponse

The message returned from the DetectIntent method.

Properties:
Name Type Description
responseId string

The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.

queryResult Object

The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.

This object should have the same structure as QueryResult

alternativeQueryResults Array.<Object>

If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing QueryResult.intent_detection_confidence. If Knowledge Connectors are disabled, this field will be empty until multiple responses for regular intents are supported, at which point those additional results will be surfaced here.

This object should have the same structure as QueryResult

webhookStatus Object

Specifies the status of the webhook request.

This object should have the same structure as Status

outputAudio Buffer

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

outputAudioConfig Object

The config used by the speech synthesizer to generate the output audio.

This object should have the same structure as OutputAudioConfig

Source:
See:

Document

A document resource.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

Properties:
Name Type Description
name string

The document resource name. The name must be empty when creating a document. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

displayName string

Required. The display name of the document. The name must be 1024 bytes or less; otherwise, the creation request fails.

mimeType string

Required. The MIME type of this document.

knowledgeTypes Array.<number>

Required. The knowledge type of document content.

The number should be among the values of KnowledgeType

contentUri string

The URI where the file content is located.

For documents stored in Google Cloud Storage, these URIs must have the form gs://<bucket-name>/<object-name>.

NOTE: External URLs must correspond to public webpages, i.e., they must be indexed by Google Search. In particular, URLs for showing documents in Google Cloud Storage (i.e. the URL in your browser) are not supported. Instead use the gs:// format URI described above.

content string

The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types. Note: This field is in the process of being deprecated, please use raw_content instead.

rawContent Buffer

The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types.

Source:
See:
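Two Document sketches may help contrast the `contentUri` and `rawContent` fields. Display names, bucket, and object names are placeholders; enum values are written as names here, though the wire format uses numbers:

```javascript
// A document backed by a Cloud Storage object: the URI must use the gs://
// form, not the browser URL for the object.
const gcsDocument = {
  displayName: 'FAQ',
  mimeType: 'text/html',
  knowledgeTypes: ['FAQ'],
  contentUri: 'gs://my-bucket/faq.html',
};

// A document carrying its bytes inline via rawContent (only permitted for
// EXTRACTIVE_QA and FAQ knowledge types).
const inlineDocument = {
  displayName: 'Returns policy',
  mimeType: 'text/plain',
  knowledgeTypes: ['EXTRACTIVE_QA'],
  rawContent: Buffer.from('All returns are accepted within 30 days.'),
};
```
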

Entity

An entity entry for an associated entity type.

Properties:
Name Type Description
value string

Required. The primary value associated with this entity entry. For example, if the entity type is vegetable, the value could be scallions.

For KIND_MAP entity types:

  • A canonical value to be used in place of synonyms.

For KIND_LIST entity types:

  • A string that can contain references to other entity types (with or without aliases).
synonyms Array.<string>

Required. A collection of value synonyms. For example, if the entity type is vegetable, and value is scallions, a synonym could be green onions.

For KIND_LIST entity types:

  • This collection must contain exactly one synonym equal to value.
Source:
See:
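The vegetable example above can be written out as entity entries. For KIND_MAP, each canonical value carries its synonyms; for KIND_LIST, the synonyms array must contain exactly one entry equal to the value:

```javascript
// KIND_MAP entries: canonical value plus synonyms (example values only).
const mapEntities = [
  { value: 'scallions', synonyms: ['scallions', 'green onions', 'spring onions'] },
  { value: 'bell pepper', synonyms: ['bell pepper', 'sweet pepper', 'capsicum'] },
];

// KIND_LIST entry: the single synonym must equal the value itself.
const listEntity = { value: '@vegetable:veg', synonyms: ['@vegetable:veg'] };
```
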

EntityType

Represents an entity type. Entity types serve as a tool for extracting parameter values from natural language queries.

Properties:
Name Type Description
name string

The unique identifier of the entity type. Required for EntityTypes.UpdateEntityType and EntityTypes.BatchUpdateEntityTypes methods. Format: projects/<Project ID>/agent/entityTypes/<Entity Type ID>.

displayName string

Required. The name of the entity type.

kind number

Required. Indicates the kind of entity type.

The number should be among the values of Kind

autoExpansionMode number

Optional. Indicates whether the entity type can be automatically expanded.

The number should be among the values of AutoExpansionMode

entities Array.<Object>

Optional. The collection of entity entries associated with the entity type.

This object should have the same structure as Entity

enableFuzzyExtraction boolean

Optional. Enables fuzzy entity extraction during classification.

Source:
See:

EntityTypeBatch

This message is a wrapper around a collection of entity types.

Properties:
Name Type Description
entityTypes Array.<Object>

A collection of entity types.

This object should have the same structure as EntityType

Source:
See:

EventInput

Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".

Properties:
Name Type Description
name string

Required. The unique identifier of the event.

parameters Object

Optional. The collection of parameters associated with the event.

This object should have the same structure as Struct

languageCode string

Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

Source:
See:
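The welcome_event example from the description translates directly into an EventInput object; the parameters field takes the plain-object (Struct-like) shape:

```javascript
// EventInput matching the welcome_event example above. Note that
// languageCode is required even though no natural-language text is sent.
const eventInput = {
  name: 'welcome_event',
  parameters: { name: 'Sam' },
  languageCode: 'en-US',
};
```

This object would be set as the `event` variant of a QueryInput when triggering an intent by event name instead of text.
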

ExportAgentRequest

The request message for Agents.ExportAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to export is associated with. Format: projects/<Project ID>.

agentUri string

Optional. The Google Cloud Storage URI to export the agent to. The format of this URI must be gs://<bucket-name>/<object-name>. If left unspecified, the serialized agent is returned inline.

Source:
See:

ExportAgentResponse

The response message for Agents.ExportAgent.

Properties:
Name Type Description
agentUri string

The URI to a file containing the exported agent. This field is populated only if agent_uri is specified in ExportAgentRequest.

agentContent Buffer

Zip compressed raw byte content for agent.

Source:
See:

FollowupIntentInfo

Represents a single followup intent in the chain.

Properties:
Name Type Description
followupIntentName string

The unique identifier of the followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

parentFollowupIntentName string

The unique identifier of the followup intent's parent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

Source:
See:

GcsSource

Google Cloud Storage location for a single input.

Properties:
Name Type Description
uri string

Required. The Google Cloud Storage URI for the input. A URI is of the form: gs://bucket/object-prefix-or-name. Whether a prefix or name is used depends on the use case.

Source:
See:

GetAgentRequest

The request message for Agents.GetAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to fetch is associated with. Format: projects/<Project ID>.

Source:
See:

GetContextRequest

The request message for Contexts.GetContext.

Properties:
Name Type Description
name string

Required. The name of the context. Format: projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Source:
See:

GetDocumentRequest

Request message for Documents.GetDocument.

Properties:
Name Type Description
name string

Required. The name of the document to retrieve. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

Source:
See:

GetEntityTypeRequest

The request message for EntityTypes.GetEntityType.

Properties:
Name Type Description
name string

Required. The name of the entity type. Format: projects/<Project ID>/agent/entityTypes/<EntityType ID>.

languageCode string

Optional. The language to retrieve entity synonyms for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

Source:
See:

GetIntentRequest

The request message for Intents.GetIntent.

Properties:
Name Type Description
name string

Required. The name of the intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

languageCode string

Optional. The language to retrieve training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

Source:
See:

GetKnowledgeBaseRequest

Request message for KnowledgeBases.GetKnowledgeBase.

Properties:
Name Type Description
name string

Required. The name of the knowledge base to retrieve. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>.

Source:
See:

GetSessionEntityTypeRequest

The request message for SessionEntityTypes.GetSessionEntityType.

Properties:
Name Type Description
name string

Required. The name of the session entity type. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Source:
See:

GetValidationResultRequest

The request message for Agents.GetValidationResult.

Properties:
Name Type Description
parent string

Required. The project that the agent is associated with. Format: projects/<Project ID>.

languageCode string

Optional. The language for which you want a validation result. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

Source:
See:

Image

The image response message.

Properties:
Name Type Description
imageUri string

Optional. The public URI to an image file.

accessibilityText string

A text description of the image to be used for accessibility, e.g., screen readers. Required if image_uri is set for CarouselSelect.

Source:
See:

ImportAgentRequest

The request message for Agents.ImportAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to import is associated with. Format: projects/<Project ID>.

agentUri string

The URI to a Google Cloud Storage file containing the agent to import. Note: The URI must start with "gs://".

agentContent Buffer

Zip compressed raw byte content for agent.

Source:
See:

InputAudioConfig

Instructs the speech recognizer on how to process the audio content.

Properties:
Name Type Description
audioEncoding number

Required. Audio encoding of the audio content to process.

The number should be among the values of AudioEncoding

sampleRateHertz number

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

languageCode string

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

enableWordInfo boolean

Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

phraseHints Array.<string>

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

speechContexts Array.<Object>

Optional. Context information to assist speech recognition.

See the Cloud Speech documentation for more details.

This object should have the same structure as SpeechContext

model string

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.

modelVariant number

Optional. Which variant of the Speech model to use.

The number should be among the values of SpeechModelVariant

singleUtterance boolean

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.

Source:
See:
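A minimal InputAudioConfig for uncompressed 16 kHz PCM might look as follows. The phrase hints and flag values are illustrative only:

```javascript
// InputAudioConfig sketch: LINEAR16 audio at 16 kHz with phrase hints that
// bias recognition toward domain vocabulary. The enum value is written as a
// name here; the wire format uses a number.
const inputAudioConfig = {
  audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
  sampleRateHertz: 16000,
  languageCode: 'en-US',
  phraseHints: ['scallions', 'bell pepper'],
  singleUtterance: true, // streaming only: stop after one detected utterance
};
```
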

Intent

Represents an intent. Intents convert a number of user expressions or patterns into an action. An action is an extraction of a user command or sentence semantics.

Properties:
Name Type Description
name string

The unique identifier of this intent. Required for Intents.UpdateIntent and Intents.BatchUpdateIntents methods. Format: projects/<Project ID>/agent/intents/<Intent ID>.

displayName string

Required. The name of this intent.

webhookState number

Optional. Indicates whether webhooks are enabled for the intent.

The number should be among the values of WebhookState

priority number

The priority of this intent. Higher numbers represent higher priorities.

  • If the supplied value is unspecified or 0, the service translates the value to 500,000, which corresponds to the Normal priority in the console.
  • If the supplied value is negative, the intent is ignored in runtime detect intent requests.
isFallback boolean

Optional. Indicates whether this is a fallback intent.

mlEnabled boolean

Optional. Indicates whether Machine Learning is enabled for the intent. Note: If ml_enabled setting is set to false, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off. DEPRECATED! Please use ml_disabled field instead. NOTE: If both ml_enabled and ml_disabled are either not set or false, then the default value is determined as follows:

  • Before April 15th, 2018 the default is: ml_enabled = false / ml_disabled = true.
  • After April 15th, 2018 the default is: ml_enabled = true / ml_disabled = false.
mlDisabled boolean

Optional. Indicates whether Machine Learning is disabled for the intent. Note: If ml_disabled setting is set to true, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off.

endInteraction boolean

Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false.

inputContextNames Array.<string>

Optional. The list of context names required for this intent to be triggered. Format: projects/<Project ID>/agent/sessions/-/contexts/<Context ID>.

events Array.<string>

Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent.

trainingPhrases Array.<Object>

Optional. The collection of examples that the agent is trained on.

This object should have the same structure as TrainingPhrase

action string

Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.

outputContexts Array.<Object>

Optional. The collection of contexts that are activated when the intent is matched. Context messages in this collection should not set the parameters field. Setting the lifespan_count to 0 will reset the context when the intent is matched. Format: projects/<Project ID>/agent/sessions/-/contexts/<Context ID>.

This object should have the same structure as Context

resetContexts boolean

Optional. Indicates whether to delete all contexts in the current session when this intent is matched.

parameters Array.<Object>

Optional. The collection of parameters associated with the intent.

This object should have the same structure as Parameter

messages Array.<Object>

Optional. The collection of rich messages corresponding to the Response field in the Dialogflow console.

This object should have the same structure as Message

defaultResponsePlatforms Array.<number>

Optional. The list of platforms for which the first responses will be copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).

The number should be among the values of Platform

rootFollowupIntentName string

Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output.

Format: projects/<Project ID>/agent/intents/<Intent ID>.

parentFollowupIntentName string

Read-only after creation. The unique identifier of the parent intent in the chain of followup intents. You can set this field when creating an intent, for example with CreateIntent or BatchUpdateIntents, in order to make this intent a followup intent.

It identifies the parent followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

followupIntentInfo Array.<Object>

Read-only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.

This object should have the same structure as FollowupIntentInfo

Source:
See:
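Pulling several of the fields above together, a minimal Intent sketch could look like this. All names and phrases are placeholders; training phrase parts follow the TrainingPhrase structure referenced above:

```javascript
// Minimal Intent sketch: display name, training phrases, an action, and an
// output context kept active for five conversational turns.
const intent = {
  displayName: 'order.vegetable',
  priority: 500000, // Normal priority; 0/unspecified also maps to 500000
  trainingPhrases: [
    { type: 'EXAMPLE', parts: [{ text: 'I want to order scallions' }] },
    { type: 'EXAMPLE', parts: [{ text: 'add green onions to my order' }] },
  ],
  action: 'order.vegetable', // action names must not contain whitespace
  outputContexts: [
    { name: 'projects/my-project/agent/sessions/-/contexts/order', lifespanCount: 5 },
  ],
};
```
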

IntentBatch

This message is a wrapper around a collection of intents.

Properties:
Name Type Description
intents Array.<Object>

A collection of intents.

This object should have the same structure as Intent

Source:
See:

Item

An item in the list.

Properties:
Name Type Description
info Object

Required. Additional information about this option.

This object should have the same structure as SelectItemInfo

title string

Required. The title of the list item.

description string

Optional. The main text describing the item.

image Object

Optional. The image to display.

This object should have the same structure as Image

Source:
See:

Item

An item in the carousel.

Properties:
Name Type Description
info Object

Required. Additional info about the option item.

This object should have the same structure as SelectItemInfo

title string

Required. Title of the carousel item.

description string

Optional. The body text of the card.

image Object

Optional. The image to display.

This object should have the same structure as Image

Source:
See:

KnowledgeAnswers

Represents the result of querying a Knowledge base.

Properties:
Name Type Description
answers Array.<Object>

A list of answers from Knowledge Connector.

This object should have the same structure as Answer

Source:
See:

KnowledgeBase

Represents a knowledge base resource.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Properties:
Name Type Description
name string

The knowledge base resource name. The name must be empty when creating a knowledge base. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>.

displayName string

Required. The display name of the knowledge base. The name must be 1024 bytes or less; otherwise, the creation request fails.

languageCode string

Language which represents the KnowledgeBase. When the KnowledgeBase is created/updated, this is populated for all non en-us languages. If not populated, the default language en-us applies.

Source:
See:

KnowledgeOperationMetadata

Metadata in google::longrunning::Operation for Knowledge operations.

Properties:
Name Type Description
state number

Required. The current state of this operation.

The number should be among the values of State

Source:
See:

LinkOutSuggestion

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

Properties:
Name Type Description
destinationName string

Required. The name of the app or site this chip is linking to.

uri string

Required. The URI of the app or site to open when the user taps the suggestion chip.

Source:
See:

ListContextsRequest

The request message for Contexts.ListContexts.

Properties:
Name Type Description
parent string

Required. The session to list all contexts from. Format: projects/<Project ID>/agent/sessions/<Session ID> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:
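The pageSize/pageToken pattern above (shared by all the List* requests in this reference) can be driven with a simple loop. `fetchPage` below is a stand-in for a client call such as listContexts, not a real API:

```javascript
// Generic page-token loop over a List* request/response pair: keep passing
// the previous response's nextPageToken until it comes back empty.
async function listAll(fetchPage) {
  const items = [];
  let pageToken;
  do {
    const response = await fetchPage({ pageSize: 100, pageToken });
    items.push(...(response.contexts || []));
    pageToken = response.nextPageToken; // empty when there are no more results
  } while (pageToken);
  return items;
}
```
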

ListContextsResponse

The response message for Contexts.ListContexts.

Properties:
Name Type Description
contexts Array.<Object>

The list of contexts. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as Context

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

ListDocumentsRequest

Request message for Documents.ListDocuments.

Properties:
Name Type Description
parent string

Required. The knowledge base to list all documents for. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>.

pageSize number

Optional. The maximum number of items to return in a single page. By default 10 and at most 100.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListDocumentsResponse

Response message for Documents.ListDocuments.

Properties:
Name Type Description
documents Array.<Object>

The list of documents.

This object should have the same structure as Document

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

ListEntityTypesRequest

The request message for EntityTypes.ListEntityTypes.

Properties:
Name Type Description
parent string

Required. The agent to list all entity types from. Format: projects/<Project ID>/agent.

languageCode string

Optional. The language to list entity synonyms for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListEntityTypesResponse

The response message for EntityTypes.ListEntityTypes.

Properties:
Name Type Description
entityTypes Array.<Object>

The list of agent entity types. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as EntityType

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

ListIntentsRequest

The request message for Intents.ListIntents.

Properties:
Name Type Description
parent string

Required. The agent to list all intents from. Format: projects/<Project ID>/agent.

languageCode string

Optional. The language to list training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListIntentsResponse

The response message for Intents.ListIntents.

Properties:
Name Type Description
intents Array.<Object>

The list of agent intents. There will be a maximum number of items returned based on the page_size field in the request.

This object should have the same structure as Intent

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

ListKnowledgeBasesRequest

Request message for KnowledgeBases.ListKnowledgeBases.

Properties:
Name Type Description
parent string

Required. The project to list all knowledge bases for. Format: projects/<Project ID>.

pageSize number

Optional. The maximum number of items to return in a single page. By default 10 and at most 100.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListKnowledgeBasesResponse

Response message for KnowledgeBases.ListKnowledgeBases.

Properties:
Name Type Description
knowledgeBases Array.<Object>

The list of knowledge bases.

This object should have the same structure as KnowledgeBase

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

ListSelect

The card for presenting a list of options to select from.

Properties:
Name Type Description
title string

Optional. The overall title of the list.

items Array.<Object>

Required. List items.

This object should have the same structure as Item

subtitle string

Optional. Subtitle of the list.

Source:
See:

ListSessionEntityTypesRequest

The request message for SessionEntityTypes.ListSessionEntityTypes.

Properties:
Name Type Description
parent string

Required. The session to list all session entity types from. Format: projects/<Project ID>/agent/sessions/<Session ID> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/ sessions/<Session ID>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

ListSessionEntityTypesResponse

The response message for SessionEntityTypes.ListSessionEntityTypes.

Properties:
Name Type Description
sessionEntityTypes Array.<Object>

The list of session entity types. The number of items returned is capped by the page_size field in the request.

This object should have the same structure as SessionEntityType

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

MediaContent

The media content card for Actions on Google.

Properties:
Name Type Description
mediaType number

Optional. The type of the media content (e.g., "audio").

The number should be among the values of ResponseMediaType

mediaObjects Array.<Object>

Required. List of media objects.

This object should have the same structure as ResponseMediaObject

Source:
See:

Message

Corresponds to the Response field in the Dialogflow console.

Properties:
Name Type Description
text Object

Returns a text response.

This object should have the same structure as Text

image Object

Displays an image.

This object should have the same structure as Image

quickReplies Object

Displays quick replies.

This object should have the same structure as QuickReplies

card Object

Displays a card.

This object should have the same structure as Card

payload Object

Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform.

This object should have the same structure as Struct

simpleResponses Object

Returns a voice or text-only response for Actions on Google.

This object should have the same structure as SimpleResponses

basicCard Object

Displays a basic card for Actions on Google.

This object should have the same structure as BasicCard

suggestions Object

Displays suggestion chips for Actions on Google.

This object should have the same structure as Suggestions

linkOutSuggestion Object

Displays a link out suggestion chip for Actions on Google.

This object should have the same structure as LinkOutSuggestion

listSelect Object

Displays a list card for Actions on Google.

This object should have the same structure as ListSelect

carouselSelect Object

Displays a carousel card for Actions on Google.

This object should have the same structure as CarouselSelect

telephonyPlayAudio Object

Plays audio from a file in Telephony Gateway.

This object should have the same structure as TelephonyPlayAudio

telephonySynthesizeSpeech Object

Synthesizes speech in Telephony Gateway.

This object should have the same structure as TelephonySynthesizeSpeech

telephonyTransferCall Object

Transfers the call in Telephony Gateway.

This object should have the same structure as TelephonyTransferCall

rbmText Object

Rich Business Messaging (RBM) text response.

RBM allows businesses to send enriched and branded versions of SMS. See https://jibe.google.com/business-messaging.

This object should have the same structure as RbmText

rbmStandaloneRichCard Object

Standalone Rich Business Messaging (RBM) rich card response.

This object should have the same structure as RbmStandaloneCard

rbmCarouselRichCard Object

Rich Business Messaging (RBM) carousel rich card response.

This object should have the same structure as RbmCarouselCard

browseCarouselCard Object

Browse carousel card for Actions on Google.

This object should have the same structure as BrowseCarouselCard

tableCard Object

Table card for Actions on Google.

This object should have the same structure as TableCard

mediaContent Object

The media content card for Actions on Google.

This object should have the same structure as MediaContent

platform number

Optional. The platform that this message is intended for.

The number should be among the values of Platform

Source:
See:

OpenUriAction

Opens the given URI.

Properties:
Name Type Description
uri string

Required. The HTTP or HTTPS scheme URI.

Source:
See:

OpenUrlAction

Actions on Google action to open a given url.

Properties:
Name Type Description
url string

Required. URL

urlTypeHint number

Optional. Specifies the type of viewer that is used when opening the URL. Defaults to opening via web browser.

The number should be among the values of UrlTypeHint

Source:
See:

OutputAudioConfig

Instructs the speech synthesizer how to generate the output audio content.

Properties:
Name Type Description
audioEncoding number

Required. Audio encoding of the synthesized audio content.

The number should be among the values of OutputAudioEncoding

sampleRateHertz number

Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).

synthesizeSpeechConfig Object

Optional. Configuration of how speech should be synthesized.

This object should have the same structure as SynthesizeSpeechConfig

Source:
See:

Parameter

Represents intent parameters.

Properties:
Name Type Description
name string

The unique identifier of this parameter.

displayName string

Required. The name of the parameter.

value string

Optional. The definition of the parameter value. It can be:

  • a constant string,
  • a parameter value defined as $parameter_name,
  • an original parameter value defined as $parameter_name.original,
  • a parameter value from some context defined as #context_name.parameter_name.
defaultValue string

Optional. The default value to use when the value yields an empty result. Default values can be extracted from contexts by using the following syntax: #context_name.parameter_name.

entityTypeDisplayName string

Optional. The name of the entity type, prefixed with @, that describes values of the parameter. If the parameter is required, this must be provided.

mandatory boolean

Optional. Indicates whether the parameter is required. That is, whether the intent cannot be completed without collecting the parameter value.

prompts Array.<string>

Optional. The collection of prompts that the agent can present to the user in order to collect a value for the parameter.

isList boolean

Optional. Indicates whether the parameter represents a list of values.

Source:
See:
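The value syntaxes above ($parameter_name, $parameter_name.original, #context_name.parameter_name) can be combined into a single parameter definition. The field values below are illustrative assumptions:

```javascript
// Illustrative intent parameter using the documented value syntax.
const parameter = {
  displayName: 'room',                      // required
  value: '$room',                           // the value extracted for this parameter
  defaultValue: '#booking-context.room',    // fallback extracted from a context
  entityTypeDisplayName: '@room',           // entity type name, prefixed with @
  mandatory: true,                          // intent cannot complete without it
  prompts: ['Which room would you like?'],  // shown until a value is collected
  isList: false,
};
```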

Part

Represents a part of a training phrase.

Properties:
Name Type Description
text string

Required. The text for this part.

entityType string

Optional. The entity type name prefixed with @. This field is required for annotated parts of the training phrase.

alias string

Optional. The parameter name for the value extracted from the annotated part of the example. This field is required for annotated parts of the training phrase.

userDefined boolean

Optional. Indicates whether the text was manually annotated. This field is set to true when the Dialogflow Console is used to manually annotate the part. When creating an annotated part with the API, you must set this to true.

Source:
See:

QueryInput

Represents the query input. It can contain either:

  1. An audio config which instructs the speech recognizer how to process the speech audio.

  2. A conversational query in the form of text.

  3. An event that specifies which intent to trigger.

Properties:
Name Type Description
audioConfig Object

Instructs the speech recognizer how to process the speech audio.

This object should have the same structure as InputAudioConfig

text Object

The natural language text to be processed.

This object should have the same structure as TextInput

event Object

The event to be processed.

This object should have the same structure as EventInput

Source:
See:
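QueryInput is effectively a one-of: exactly one of audioConfig, text, or event should be set per request. A sketch with hypothetical values:

```javascript
// A text query and an event trigger, each setting exactly one field.
const textInput = {
  text: { text: 'book a room', languageCode: 'en-US' },
};

const eventInput = {
  event: { name: 'WELCOME', languageCode: 'en-US' },
};

// Minimal helper enforcing the one-of constraint.
function isValidQueryInput(queryInput) {
  const set = ['audioConfig', 'text', 'event'].filter((f) => f in queryInput);
  return set.length === 1;
}
```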

QueryParameters

Represents the parameters of the conversational query.

Properties:
Name Type Description
timeZone string

Optional. The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used.

geoLocation Object

Optional. The geo location of this conversational query.

This object should have the same structure as LatLng

contexts Array.<Object>

Optional. The collection of contexts to be activated before this query is executed.

This object should have the same structure as Context

resetContexts boolean

Optional. Specifies whether to delete all contexts in the current session before the new ones are activated.

sessionEntityTypes Array.<Object>

Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query.

This object should have the same structure as SessionEntityType

payload Object

Optional. This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported.

This object should have the same structure as Struct

knowledgeBaseNames Array.<string>

Optional. KnowledgeBases to get alternative results from. If not set, the KnowledgeBases enabled in the agent (through the UI) will be used. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>.

sentimentAnalysisRequestConfig Object

Optional. Configures the type of sentiment analysis to perform. If not provided, sentiment analysis is not performed. Note: Sentiment Analysis is only currently available for Enterprise Edition agents.

This object should have the same structure as SentimentAnalysisRequestConfig

webhookHeaders Object.<string, string>

Optional. This field can be used to pass HTTP headers for a webhook call. These headers will be sent to the webhook along with the headers that have been configured through the Dialogflow web console. The headers defined within this field will overwrite the headers configured through the Dialogflow console if there is a conflict. Header names are case-insensitive. Google-specified headers are not allowed, including "Host", "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc.

Source:
See:
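The fields above compose into a single QueryParameters object. The context name, payload, and header values below are illustrative assumptions:

```javascript
// Sketch of a QueryParameters object; every field shown is optional.
const queryParams = {
  timeZone: 'America/New_York',
  contexts: [
    {
      // Context to activate before the query runs (hypothetical session path).
      name: 'projects/my-project/agent/sessions/my-session/contexts/booking',
      lifespanCount: 5,
    },
  ],
  resetContexts: false,
  payload: { source: 'my-kiosk-app' },      // custom data passed to the webhook
  sentimentAnalysisRequestConfig: {
    analyzeQueryTextSentiment: true,        // opt in to sentiment analysis
  },
  webhookHeaders: { 'X-My-Header': 'abc' }, // Google-specified headers are rejected
};
```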

QueryResult

Represents the result of conversational query or event processing.

Properties:
Name Type Description
queryText string

The original conversational query text:

  • If natural language text was provided as input, query_text contains a copy of the input.
  • If natural language speech audio was provided as input, query_text contains the speech recognition result. If speech recognizer produced multiple alternatives, a particular one is picked.
  • If automatic spell correction is enabled, query_text will contain the corrected user input.
languageCode string

The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes.

speechRecognitionConfidence number

The Speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is not guaranteed to be accurate or set. In particular this field isn't set for StreamingDetectIntent since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult.

action string

The action name from the matched intent.

parameters Object

The collection of extracted parameters.

This object should have the same structure as Struct

allRequiredParamsPresent boolean

This field is set to:

  • false if the matched intent has required parameters and not all of the required parameter values have been collected.
  • true if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
fulfillmentText string

The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, fulfillment_messages should be preferred.

fulfillmentMessages Array.<Object>

The collection of rich messages to present to the user.

This object should have the same structure as Message

webhookSource string

If the query was fulfilled by a webhook call, this field is set to the value of the source field returned in the webhook response.

webhookPayload Object

If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response.

This object should have the same structure as Struct

outputContexts Array.<Object>

The collection of output contexts. If applicable, output_contexts.parameters contains entries with name <parameter name>.original containing the original parameter values before the query.

This object should have the same structure as Context

intent Object

The intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to: name, display_name, end_interaction and is_fallback.

This object should have the same structure as Intent

intentDetectionConfidence number

The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purpose only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are multiple knowledge_answers messages, this value is set to the greatest knowledgeAnswers.match_confidence value in the list.

diagnosticInfo Object

The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice.

This object should have the same structure as Struct

sentimentAnalysisResult Object

The sentiment analysis result, which depends on the sentiment_analysis_request_config specified in the request.

This object should have the same structure as SentimentAnalysisResult

knowledgeAnswers Object

The result from Knowledge Connector (if any), ordered by decreasing KnowledgeAnswers.match_confidence.

This object should have the same structure as KnowledgeAnswers

Source:
See:
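Since fulfillment_text is a legacy field, a consumer of QueryResult would typically read fulfillmentMessages first and fall back to fulfillmentText. A minimal sketch, with an illustrative result object:

```javascript
// Prefer rich fulfillmentMessages; fall back to the legacy fulfillmentText.
function getResponseTexts(queryResult) {
  const messages = queryResult.fulfillmentMessages || [];
  const texts = messages
    .filter((m) => m.text && Array.isArray(m.text.text))
    .flatMap((m) => m.text.text);
  return texts.length > 0 ? texts : [queryResult.fulfillmentText || ''];
}

// Illustrative result, shaped like the fields documented above.
const result = {
  queryText: 'book a room',
  allRequiredParamsPresent: true,
  fulfillmentText: 'Legacy answer',
  fulfillmentMessages: [{ text: { text: ['Sure, which day?'] } }],
};
```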

QuickReplies

The quick replies response message.

Properties:
Name Type Description
title string

Optional. The title of the collection of quick replies.

quickReplies Array.<string>

Optional. The collection of quick replies.

Source:
See:

RbmCardContent

Rich Business Messaging (RBM) Card content

Properties:
Name Type Description
title string

Optional. Title of the card (at most 200 bytes).

At least one of the title, description or media must be set.

description string

Optional. Description of the card (at most 2000 bytes).

At least one of the title, description or media must be set.

media Object

Optional. Media (image, GIF, or video) to include in the card. At least one of the title, description or media must be set.

This object should have the same structure as RbmMedia

suggestions Array.<Object>

Optional. List of suggestions to include in the card.

This object should have the same structure as RbmSuggestion

Source:
See:

RbmCarouselCard

Carousel Rich Business Messaging (RBM) rich card.

Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions.

For more details about RBM rich cards, please see: https://developers.google.com/rcs-business-messaging/rbm/guides/build/send-messages#rich-cards. If you want to show a single card with more control over the layout, please use RbmStandaloneCard instead.

Properties:
Name Type Description
cardWidth number

Required. The width of the cards in the carousel.

The number should be among the values of CardWidth

cardContents Array.<Object>

Required. The cards in the carousel. A carousel must have at least 2 cards and at most 10.

This object should have the same structure as RbmCardContent

Source:
See:
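The cardContents constraint above (at least 2 and at most 10 cards) can be checked before sending. A sketch with hypothetical card content:

```javascript
// Minimal validation of the documented carousel size constraint.
function isValidRbmCarousel(card) {
  const contents = card.cardContents || [];
  return contents.length >= 2 && contents.length <= 10;
}

const carousel = {
  cardWidth: 1, // a CardWidth enum value; the numeric mapping is assumed here
  cardContents: [
    { title: 'Option A', suggestions: [] },
    { title: 'Option B', suggestions: [] },
  ],
};
```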

RbmMedia

Rich Business Messaging (RBM) media displayed in cards. The following media types are currently supported:

Image Types

image/jpeg, image/jpg, image/gif, image/png

Video Types

video/h263, video/m4v, video/mp4, video/mpeg, video/mpeg4, video/webm

Properties:
Name Type Description
fileUri string

Required. Publicly reachable URI of the file. The RBM platform determines the MIME type of the file from the content-type field in the HTTP headers when the platform fetches the file. The content-type field must be present and accurate in the HTTP response from the URL.

thumbnailUri string

Optional. Publicly reachable URI of the thumbnail. If you don't provide a thumbnail URI, the RBM platform displays a blank placeholder thumbnail until the user's device downloads the file. Depending on the user's settings, the file may not download automatically and may require the user to tap a download button.

height number

Required for cards with vertical orientation. The height of the media within a rich card with a vertical layout. (https://goo.gl/NeFCjz). For a standalone card with horizontal layout, height is not customizable, and this field is ignored.

The number should be among the values of Height

Source:
See:

RbmStandaloneCard

Standalone Rich Business Messaging (RBM) rich card.

Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions.

For more details about RBM rich cards, please see: https://developers.google.com/rcs-business-messaging/rbm/guides/build/send-messages#rich-cards. You can group multiple rich cards into one using RbmCarouselCard but carousel cards will give you less control over the card layout.

Properties:
Name Type Description
cardOrientation number

Required. Orientation of the card.

The number should be among the values of CardOrientation

thumbnailImageAlignment number

Required if orientation is horizontal. Image preview alignment for standalone cards with horizontal layout.

The number should be among the values of ThumbnailImageAlignment

cardContent Object

Required. Card content.

This object should have the same structure as RbmCardContent

Source:
See:

RbmSuggestedAction

Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.

Properties:
Name Type Description
text string

Text to display alongside the action.

postbackData string

Opaque payload that Dialogflow receives in a user event when the user taps the suggested action. This data will also be forwarded to the webhook to allow performing custom business logic.

dial Object

Suggested client-side action: dial a phone number.

This object should have the same structure as RbmSuggestedActionDial

openUrl Object

Suggested client-side action: open a URI on the device.

This object should have the same structure as RbmSuggestedActionOpenUri

shareLocation Object

Suggested client-side action: share the user's location.

This object should have the same structure as RbmSuggestedActionShareLocation

Source:
See:

RbmSuggestedActionDial

Opens the user's default dialer app with the specified phone number but does not dial automatically (https://goo.gl/ergbB2).

Properties:
Name Type Description
phoneNumber string

Required. The phone number to fill in the default dialer app. This field should be in E.164 format. An example of a correctly formatted phone number: +15556767888.

Source:
See:

RbmSuggestedActionOpenUri

Opens the user's default web browser app to the specified URI (https://goo.gl/6GLJD2). If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.

Properties:
Name Type Description
uri string

Required. The URI to open on the user's device.

Source:
See:

RbmSuggestedActionShareLocation

Opens the device's location chooser so the user can pick a location to send back to the agent (https://goo.gl/GXotJW).

Source:
See:

RbmSuggestedReply

Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.

Properties:
Name Type Description
text string

Suggested reply text.

postbackData string

Opaque payload that Dialogflow receives in a user event when the user taps the suggested reply. This data will also be forwarded to the webhook to allow performing custom business logic.

Source:
See:

RbmSuggestion

Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select or click a predefined response or perform an action (like opening a web URI).

Properties:
Name Type Description
reply Object

Predefined reply for the user to select instead of typing.

This object should have the same structure as RbmSuggestedReply

action Object

Predefined client-side action that the user can choose.

This object should have the same structure as RbmSuggestedAction

Source:
See:

RbmText

Rich Business Messaging (RBM) text response with suggestions.

Properties:
Name Type Description
text string

Required. Text sent and displayed to the user.

rbmSuggestion Array.<Object>

Optional. One or more suggestions to show to the user.

This object should have the same structure as RbmSuggestion

Source:
See:

ReloadDocumentRequest

Request message for Documents.ReloadDocument.

Properties:
Name Type Description
name string

The name of the document to reload. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>

gcsSource Object

The path to the Google Cloud Storage source file used to reload the document content.

This object should have the same structure as GcsSource

Source:
See:

ResponseMediaObject

Response media object for media content card.

Properties:
Name Type Description
name string

Required. Name of media card.

description string

Optional. Description of media card.

largeImage Object

Optional. Image to display above media content.

This object should have the same structure as Image

icon Object

Optional. Icon to display above media content.

This object should have the same structure as Image

contentUrl string

Required. URL where the media is stored.

Source:
See:

RestoreAgentRequest

The request message for Agents.RestoreAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to restore is associated with. Format: projects/<Project ID>.

agentUri string

The URI to a Google Cloud Storage file containing the agent to restore. Note: The URI must start with "gs://".

agentContent Buffer

Zip-compressed raw byte content for the agent.

Source:
See:

SearchAgentsRequest

The request message for Agents.SearchAgents.

Properties:
Name Type Description
parent string

Required. The project to list agents from. Format: projects/<Project ID or '-'>.

pageSize number

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

pageToken string

Optional. The next_page_token value returned from a previous list request.

Source:
See:

SearchAgentsResponse

The response message for Agents.SearchAgents.

Properties:
Name Type Description
agents Array.<Object>

The list of agents. The number of items returned is capped by the page_size field in the request.

This object should have the same structure as Agent

nextPageToken string

Token to retrieve the next page of results, or empty if there are no more results in the list.

Source:
See:

SelectItemInfo

Additional info about the select item for when it is triggered in a dialog.

Properties:
Name Type Description
key string

Required. A unique key that will be sent back to the agent if this response is given.

synonyms Array.<string>

Optional. A list of synonyms that can also be used to trigger this item in dialog.

Source:
See:

Sentiment

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

Properties:
Name Type Description
score number

Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).

magnitude number

A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).

Source:
See:

SentimentAnalysisRequestConfig

Configures the types of sentiment analysis to perform.

Properties:
Name Type Description
analyzeQueryTextSentiment boolean

Optional. Instructs the service to perform sentiment analysis on query_text. If not provided, sentiment analysis is not performed on query_text.

Source:
See:

SentimentAnalysisResult

The result of sentiment analysis as configured by sentiment_analysis_request_config.

Properties:
Name Type Description
queryTextSentiment Object

The sentiment analysis result for query_text.

This object should have the same structure as Sentiment

Source:
See:

SessionEntityType

Represents a session entity type.

Extends or replaces a developer entity type at the user session level (we refer to the entity types defined at the agent level as "developer entity types").

Note: session entity types apply to all queries, regardless of the language.

Properties:
Name Type Description
name string

Required. The unique identifier of this session entity type. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

<Entity Type Display Name> must be the display name of an existing entity type in the same agent that will be overridden or supplemented.

entityOverrideMode number

Required. Indicates whether the additional data should override or supplement the developer entity type definition.

The number should be among the values of EntityOverrideMode

entities Array.<Object>

Required. The collection of entities associated with this session entity type.

This object should have the same structure as Entity

Source:
See:

SetAgentRequest

The request message for Agents.SetAgent.

Properties:
Name Type Description
agent Object

Required. The agent to update.

This object should have the same structure as Agent

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

Source:
See:

SimpleResponse

The simple response message containing speech or text.

Properties:
Name Type Description
textToSpeech string

One of text_to_speech or ssml must be provided. The plain text of the speech output. Mutually exclusive with ssml.

ssml string

One of text_to_speech or ssml must be provided. Structured spoken response to the user in the SSML format. Mutually exclusive with text_to_speech.

displayText string

Optional. The text to display.

Source:
See:
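The mutual-exclusivity rule above (exactly one of textToSpeech or ssml) can be expressed as a small validator. The response values below are illustrative:

```javascript
// Exactly one of textToSpeech or ssml must be provided.
function isValidSimpleResponse(r) {
  const hasText = typeof r.textToSpeech === 'string';
  const hasSsml = typeof r.ssml === 'string';
  return hasText !== hasSsml; // true only when exactly one is set
}

const spoken = { textToSpeech: 'Hello!', displayText: 'Hello!' };
const ssmlResponse = { ssml: '<speak>Hello!</speak>' };
```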

SimpleResponses

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

Properties:
Name Type Description
simpleResponses Array.<Object>

Required. The list of simple responses.

This object should have the same structure as SimpleResponse

Source:
See:

SpeechContext

Hints for the speech recognizer to help with recognition in a specific conversation state.

Properties:
Name Type Description
phrases Array.<string>

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

This list can be used to:

  • improve accuracy for words and phrases you expect the user to say, e.g. typical commands for your Dialogflow agent
  • add additional words to the speech recognizer vocabulary
  • ...

See the Cloud Speech documentation for usage limits.

boost number

Optional. Boost for this context compared to other contexts:

  • If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar sounding phrases.
  • If the boost is unspecified or non-positive, Dialogflow will not apply any boost.

Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search.

Source:
See:

SpeechWordInfo

Information for a word recognized by the speech recognizer.

Properties:
Name Type Description
word string

The word this info is for.

startOffset Object

Time offset relative to the beginning of the audio that corresponds to the start of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.

This object should have the same structure as Duration

endOffset Object

Time offset relative to the beginning of the audio that corresponds to the end of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.

This object should have the same structure as Duration

confidence number

The Speech confidence between 0.0 and 1.0 for this word. A higher number indicates an estimated greater likelihood that the recognized word is correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is not guaranteed to be fully stable over time for the same audio input. Users should also not rely on it to always be provided.

Source:
See:

StreamingDetectIntentRequest

The top-level message sent by the client to the StreamingDetectIntent method.

Multiple request messages should be sent in order:

  1. The first message must contain StreamingDetectIntentRequest.session and StreamingDetectIntentRequest.query_input, plus optionally StreamingDetectIntentRequest.query_params. If the client wants to receive an audio response, it should also contain StreamingDetectIntentRequest.output_audio_config. The message must not contain StreamingDetectIntentRequest.input_audio.

  2. If StreamingDetectIntentRequest.query_input was set to StreamingDetectIntentRequest.query_input.audio_config, all subsequent messages must contain StreamingDetectIntentRequest.input_audio to continue with speech recognition. If you decide instead to detect an intent from text input after you have already started speech recognition, please send a message with StreamingDetectIntentRequest.query_input.text.

    However, note that:

    • Dialogflow will bill you for the audio duration so far.
    • Dialogflow discards all Speech recognition results in favor of the input text.
    • Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
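The ordering rules above can be sketched with plain request objects (a client-side illustration only; the project and session IDs are placeholders, and the chunk size is an arbitrary assumption):

```javascript
// Sketch of the StreamingDetectIntent message ordering described above.
// The session path follows the `session` property format documented below.
const sessionPath = 'projects/my-project/agent/sessions/session-123';

// 1. The first message carries session + queryInput (and optionally
//    outputAudioConfig), but never inputAudio.
const firstMessage = {
  session: sessionPath,
  queryInput: {
    audioConfig: {
      audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
  },
};

// 2. All subsequent messages carry only inputAudio chunks.
const audioChunks = [Buffer.alloc(3200), Buffer.alloc(3200)];
const followUpMessages = audioChunks.map((chunk) => ({ inputAudio: chunk }));

// After writing all messages, the client half-closes the stream
// (e.g. `stream.end()` on a Node.js duplex stream).
```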

Properties:
Name Type Description
session string

Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we assume the default "-" user. It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters.

queryParams Object

Optional. The parameters of this query.

This object should have the same structure as QueryParameters

queryInput Object

Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.

This object should have the same structure as QueryInput
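The three alternatives correspond to three mutually exclusive fields of QueryInput. A minimal illustration (field names follow the JSON/protobuf mapping; the concrete values are placeholders):

```javascript
// Exactly one of audioConfig / text / event should be set per QueryInput.

// 1. Audio config: instructs the speech recognizer how to process audio.
const audioQuery = {
  audioConfig: {
    audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
    sampleRateHertz: 16000,
    languageCode: 'en-US',
  },
};

// 2. Text: a conversational query (max 256 characters, per TextInput).
const textQuery = {
  text: { text: 'book a table for two', languageCode: 'en-US' },
};

// 3. Event: triggers a specific intent by event name.
const eventQuery = {
  event: { name: 'WELCOME', languageCode: 'en-US' },
};
```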

singleUtterance boolean

DEPRECATED. Please use InputAudioConfig.single_utterance instead. Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.

outputAudioConfig Object

Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.

This object should have the same structure as OutputAudioConfig

inputAudio Buffer

Optional. The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.


StreamingDetectIntentResponse

The top-level message returned from the StreamingDetectIntent method.

Multiple response messages can be returned in order:

  1. If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.

  2. The next message contains response_id, query_result, alternative_query_results and optionally webhook_status if a WebHook was called.

  3. If output_audio_config was specified in the request or agent-level speech synthesizer is configured, all subsequent messages contain output_audio and output_audio_config.

Properties:
Name Type Description
responseId string

The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.

recognitionResult Object

The result of speech recognition.

This object should have the same structure as StreamingRecognitionResult

queryResult Object

The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.

This object should have the same structure as QueryResult

alternativeQueryResults Array.<Object>

If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing QueryResult.intent_detection_confidence. If Knowledge Connectors are disabled, this field will be empty until multiple responses for regular intents are supported, at which point those additional results will be surfaced here.

This object should have the same structure as QueryResult

webhookStatus Object

Specifies the status of the webhook request.

This object should have the same structure as Status

outputAudio Buffer

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

outputAudioConfig Object

The config used by the speech synthesizer to generate the output audio.

This object should have the same structure as OutputAudioConfig


StreamingRecognitionResult

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.

Example:

  1. transcript: "tube"

  2. transcript: "to be a"

  3. transcript: "to be"

  4. transcript: "to be or not to be" is_final: true

  5. transcript: " that's"

  6. transcript: " that is"

  7. message_type: END_OF_SINGLE_UTTERANCE

  8. transcript: " that is the question" is_final: true

Only two of the responses contain final results (#4 and #8 indicated by is_final: true). Concatenating these generates the full transcript: "to be or not to be that is the question".
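Reassembling the full transcript therefore means keeping only the results flagged is_final and concatenating their transcripts, in order (a sketch over plain objects shaped like StreamingRecognitionResult):

```javascript
// Interim and final results as they might arrive from the stream,
// mirroring the numbered example above.
const results = [
  { messageType: 'TRANSCRIPT', transcript: 'tube', isFinal: false },
  { messageType: 'TRANSCRIPT', transcript: 'to be a', isFinal: false },
  { messageType: 'TRANSCRIPT', transcript: 'to be', isFinal: false },
  { messageType: 'TRANSCRIPT', transcript: 'to be or not to be', isFinal: true },
  { messageType: 'TRANSCRIPT', transcript: " that's", isFinal: false },
  { messageType: 'TRANSCRIPT', transcript: ' that is', isFinal: false },
  { messageType: 'END_OF_SINGLE_UTTERANCE' },
  { messageType: 'TRANSCRIPT', transcript: ' that is the question', isFinal: true },
];

// Only final TRANSCRIPT results contribute to the full transcript.
const fullTranscript = results
  .filter((r) => r.messageType === 'TRANSCRIPT' && r.isFinal)
  .map((r) => r.transcript)
  .join('');
// fullTranscript === 'to be or not to be that is the question'
```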

In each response we populate:

  • for TRANSCRIPT: transcript and possibly is_final.

  • for END_OF_SINGLE_UTTERANCE: only message_type.

Properties:
Name Type Description
messageType number

Type of the result message.

The number should be among the values of MessageType

transcript string

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

isFinal boolean

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.

confidence number

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.

stability number

An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result:

  • If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for TRANSCRIPT results with is_final = false.
  • Otherwise, the value is in (0.0, 1.0] where 0.0 means completely unstable and 1.0 means completely stable.
speechWordInfo Array.<Object>

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

This object should have the same structure as SpeechWordInfo

speechEndOffset Object

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

This object should have the same structure as Duration


Suggestion

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Properties:
Name Type Description
title string

Required. The text shown in the suggestion chip.


Suggestions

The collection of suggestions.

Properties:
Name Type Description
suggestions Array.<Object>

Required. The list of suggested replies.

This object should have the same structure as Suggestion


SynthesizeSpeechConfig

Configuration of how speech should be synthesized.

Properties:
Name Type Description
speakingRate number

Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other value outside [0.25, 4.0] will return an error.

pitch number

Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.

volumeGainDb number

Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not to exceed +10 (dB) as there's usually no effective increase in loudness for any value greater than that.
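The "approximately half" and "approximately twice" figures follow from the standard conversion from decibels to an amplitude ratio, 10^(dB/20) (a quick check, not part of the API):

```javascript
// Amplitude ratio for a volume gain given in dB: ratio = 10 ** (dB / 20).
const amplitudeRatio = (db) => 10 ** (db / 20);

const halved = amplitudeRatio(-6.0);  // ≈ 0.501, about half amplitude
const doubled = amplitudeRatio(6.0);  // ≈ 1.995, about twice amplitude
const unity = amplitudeRatio(0.0);    // exactly 1, normal native amplitude
```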

effectsProfileId Array.<string>

Optional. An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given.

voice Object

Optional. The desired voice of the synthesized audio.

This object should have the same structure as VoiceSelectionParams


TableCard

Table card for Actions on Google.

Properties:
Name Type Description
title string

Required. Title of the card.

subtitle string

Optional. Subtitle to the title.

image Object

Optional. Image which should be displayed on the card.

This object should have the same structure as Image

columnProperties Array.<Object>

Optional. Display properties for the columns in this table.

This object should have the same structure as ColumnProperties

rows Array.<Object>

Optional. Rows in this table of data.

This object should have the same structure as TableCardRow

buttons Array.<Object>

Optional. List of buttons for the card.

This object should have the same structure as Button


TableCardCell

Cell of TableCardRow.

Properties:
Name Type Description
text string

Required. Text in this cell.


TableCardRow

Row of TableCard.

Properties:
Name Type Description
cells Array.<Object>

Optional. List of cells that make up this row.

This object should have the same structure as TableCardCell

dividerAfter boolean

Optional. Whether to add a visual divider after this row.


TelephonyPlayAudio

Plays audio from a file in Telephony Gateway.

Properties:
Name Type Description
audioUri string

Required. URI to a Google Cloud Storage object containing the audio to play, e.g., "gs://bucket/object". The object must contain a single channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz.

This object must be readable by the service-<Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com service account, where <Project Number> is the number of the Telephony Gateway project (usually the same as the Dialogflow agent project). If the Google Cloud Storage bucket is in the Telephony Gateway project, this permission is added by default when enabling the Dialogflow V2 API.

For audio from other sources, consider using the TelephonySynthesizeSpeech message with SSML.


TelephonySynthesizeSpeech

Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway.

Telephony Gateway takes the synthesizer settings from DetectIntentResponse.output_audio_config which can either be set at request-level or can come from the agent-level synthesizer config.

Properties:
Name Type Description
text string

The raw text to be synthesized.

ssml string

The SSML to be synthesized. For more information, see SSML.


TelephonyTransferCall

Transfers the call in Telephony Gateway.

Properties:
Name Type Description
phoneNumber string

Required. The phone number to transfer the call to in E.164 format.

We currently only allow transferring to US numbers (+1xxxyyyzzzz).


Text

The text response message.

Properties:
Name Type Description
text Array.<string>

Optional. The collection of the agent's responses.


TextInput

Represents the natural language text to be processed.

Properties:
Name Type Description
text string

Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters.

languageCode string

Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.


TrainAgentRequest

The request message for Agents.TrainAgent.

Properties:
Name Type Description
parent string

Required. The project that the agent to train is associated with. Format: projects/<Project ID>.


TrainingPhrase

Represents an example that the agent is trained on.

Properties:
Name Type Description
name string

Output only. The unique identifier of this training phrase.

type number

Required. The type of the training phrase.

The number should be among the values of Type

parts Array.<Object>

Required. The ordered list of training phrase parts. The parts are concatenated in order to form the training phrase.

Note: The API does not automatically annotate training phrases like the Dialogflow Console does.

Note: Do not forget to include whitespace at part boundaries, so the training phrase is well formatted when the parts are concatenated.

If the training phrase does not need to be annotated with parameters, you just need a single part with only the Part.text field set.

If you want to annotate the training phrase, you must create multiple parts, where the fields of each part are populated in one of two ways:

  • Part.text is set to a part of the phrase that has no parameters.
  • Part.text is set to a part of the phrase that you want to annotate, and the entity_type, alias, and user_defined fields are all set.

This object should have the same structure as Part
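For example, the phrase "book a table for two" with "two" annotated as a number could be expressed as parts like these (the entity type, alias, and values are illustrative, not prescribed by the API):

```javascript
// Annotated training phrase: a plain-text part followed by an annotated
// part. Note the trailing whitespace at the part boundary.
const trainingPhrase = {
  type: 'EXAMPLE',
  parts: [
    { text: 'book a table for ' },   // no parameters: only Part.text is set
    {
      text: 'two',                   // annotated part: entity_type, alias,
      entityType: '@sys.number',     // and user_defined are all set
      alias: 'guests',
      userDefined: true,
    },
  ],
};

// Concatenating the parts in order reconstructs the phrase.
const phrase = trainingPhrase.parts.map((p) => p.text).join('');
// phrase === 'book a table for two'
```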

timesAddedCount number

Optional. Indicates how many times this example was added to the intent. Each time a developer adds an existing sample by editing an intent or training, this counter is increased.


UpdateContextRequest

The request message for Contexts.UpdateContext.

Properties:
Name Type Description
context Object

Required. The context to update.

This object should have the same structure as Context

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask
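In the JSON mapping, a FieldMask is a list of `paths`; leaving updateMask unset updates all fields, while setting it restricts the update to the named fields. A hypothetical partial update of a context's lifespan (names and values are placeholders):

```javascript
// Update only lifespanCount on an existing context, leaving its
// parameters untouched. Path strings use the proto field names.
const request = {
  context: {
    name: 'projects/my-project/agent/sessions/session-123/contexts/booking',
    lifespanCount: 5,
  },
  updateMask: { paths: ['lifespan_count'] },
};
```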


UpdateDocumentRequest

Request message for Documents.UpdateDocument.

Properties:
Name Type Description
document Object

Required. The document to update.

This object should have the same structure as Document

updateMask Object

Optional. Not specified means update all. Currently, only display_name can be updated; an InvalidArgument error will be returned if you attempt to update other fields.

This object should have the same structure as FieldMask


UpdateEntityTypeRequest

The request message for EntityTypes.UpdateEntityType.

Properties:
Name Type Description
entityType Object

Required. The entity type to update.

This object should have the same structure as EntityType

languageCode string

Optional. The language of entity synonyms defined in entity_type. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask


UpdateIntentRequest

The request message for Intents.UpdateIntent.

Properties:
Name Type Description
intent Object

Required. The intent to update.

This object should have the same structure as Intent

languageCode string

Optional. The language of training phrases, parameters and rich messages defined in intent. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask

intentView number

Optional. The resource view to apply to the returned intent.

The number should be among the values of IntentView


UpdateKnowledgeBaseRequest

Request message for KnowledgeBases.UpdateKnowledgeBase.

Properties:
Name Type Description
knowledgeBase Object

Required. The knowledge base to update.

This object should have the same structure as KnowledgeBase

updateMask Object

Optional. Not specified means update all. Currently, only display_name can be updated; an InvalidArgument error will be returned if you attempt to update other fields.

This object should have the same structure as FieldMask


UpdateSessionEntityTypeRequest

The request message for SessionEntityTypes.UpdateSessionEntityType.

Properties:
Name Type Description
sessionEntityType Object

Required. The entity type to update. Format: projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

This object should have the same structure as SessionEntityType

updateMask Object

Optional. The mask to control which fields get updated.

This object should have the same structure as FieldMask


ValidationError

Represents a single validation error.

Properties:
Name Type Description
severity number

The severity of the error.

The number should be among the values of Severity

entries Array.<string>

The names of the entries that the error is associated with. Format:

  • "projects/<Project ID>/agent", if the error is associated with the entire agent.
  • "projects/<Project ID>/agent/intents/<Intent ID>", if the error is associated with certain intents.
  • "projects/<Project ID>/agent/intents/<Intent ID>/trainingPhrases/<Training Phrase ID>", if the error is associated with certain intent training phrases.
  • "projects/<Project ID>/agent/intents/<Intent ID>/parameters/<Parameter ID>", if the error is associated with certain intent parameters.
  • "projects/<Project ID>/agent/entities/<Entity ID>", if the error is associated with certain entities.
errorMessage string

The detailed error message.


ValidationResult

Represents the output of agent validation.

Properties:
Name Type Description
validationErrors Array.<Object>

Contains all validation errors.

This object should have the same structure as ValidationError


VoiceSelectionParams

Description of which voice to use for speech synthesis.

Properties:
Name Type Description
name string

Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and ssml_gender.

ssmlGender number

Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.

The number should be among the values of SsmlVoiceGender
