# Copyright 2024 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
name: 'Flow'
description: |
Flows represent the conversation flows when you build your chatbot agent.
references:
guides:
'Official Documentation': 'https://cloud.google.com/dialogflow/cx/docs'
api: 'https://cloud.google.com/dialogflow/cx/docs/reference/rest/v3/projects.locations.agents.flows'
docs:
id_format: '{{parent}}/flows/{{name}}'
base_url: '{{parent}}/flows'
update_verb: 'PATCH'
update_mask: true
import_format:
- '{{parent}}/flows/{{name}}'
timeouts:
insert_minutes: 40
update_minutes: 40
delete_minutes: 20
custom_code:
pre_create: 'templates/terraform/pre_create/dialogflowcx_set_location_skip_default_obj.go.tmpl'
pre_read: 'templates/terraform/pre_create/dialogflow_set_location.go.tmpl'
pre_update: 'templates/terraform/pre_create/dialogflow_set_location.go.tmpl'
pre_delete: 'templates/terraform/pre_delete/dialogflowcx_set_location_skip_default_obj.go.tmpl'
custom_import: 'templates/terraform/custom_import/dialogflowcx_flow.go.tmpl'
exclude_sweeper: true
examples:
- name: 'dialogflowcx_flow_basic'
primary_resource_id: 'basic_flow'
vars:
agent_name: 'dialogflowcx-agent'
ignore_read_extra:
- 'advanced_settings.0.logging_settings'
- name: 'dialogflowcx_flow_full'
primary_resource_id: 'basic_flow'
vars:
agent_name: 'dialogflowcx-agent'
bucket_name: 'dialogflowcx-bucket'
- name: 'dialogflowcx_flow_default_start_flow'
primary_resource_id: 'default_start_flow'
vars:
agent_name: 'dialogflowcx-agent'
exclude_docs: true
virtual_fields:
- name: 'is_default_start_flow'
description: |
Marks this as the [Default Start Flow](https://cloud.google.com/dialogflow/cx/docs/concept/flow#start) for an agent. When you create an agent, the Default Start Flow is created automatically.
The Default Start Flow cannot be deleted; deleting the `google_dialogflow_cx_flow` resource does nothing to the underlying GCP resources.
~> Avoid having multiple `google_dialogflow_cx_flow` resources linked to the same agent with `is_default_start_flow = true` because they will compete to control a single Default Start Flow resource in GCP.
type: Boolean
immutable: true
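# A minimal Terraform sketch, assuming a hypothetical agent, of adopting the agent's
# auto-created Default Start Flow through the virtual field above. Applying it updates
# the existing flow in place; destroying it leaves the underlying GCP flow untouched.
#
#   resource "google_dialogflow_cx_agent" "agent" {
#     display_name          = "dialogflowcx-agent"   # illustrative name
#     location              = "global"
#     default_language_code = "en"
#     time_zone             = "America/New_York"
#   }
#
#   resource "google_dialogflow_cx_flow" "default_start_flow" {
#     parent                = google_dialogflow_cx_agent.agent.id
#     display_name          = "Default Start Flow"
#     is_default_start_flow = true
#   }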
parameters:
- name: 'parent'
type: String
description: |
The agent to create a flow for.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>.
url_param_only: true
immutable: true
- name: 'languageCode'
type: String
description: |
The language of the following fields in flow:
Flow.event_handlers.trigger_fulfillment.messages
Flow.event_handlers.trigger_fulfillment.conditional_cases
Flow.transition_routes.trigger_fulfillment.messages
Flow.transition_routes.trigger_fulfillment.conditional_cases
If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.
immutable: true
properties:
- name: 'name'
type: String
description: |
The unique identifier of the flow.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>.
output: true
custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.tmpl'
- name: 'displayName'
type: String
description: |
The human-readable name of the flow.
required: true
- name: 'description'
type: String
description: |
The description of the flow. The maximum length is 500 characters. If exceeded, the request is rejected.
validation:
function: 'validation.StringLenBetween(0, 500)'
- name: 'transitionRoutes'
type: Array
description: |
A flow's transition routes serve two purposes:
They are responsible for matching the user's first utterances in the flow.
They are inherited by every page's [transition routes][Page.transition_routes] and can support use cases such as the user saying "help" or "can I talk to a human?", which can be handled in a common way regardless of the current page. Transition routes defined in the page have higher priority than those defined in the flow.
TransitionRoutes are evaluated in the following order:
TransitionRoutes with intent specified.
TransitionRoutes with only condition specified.
TransitionRoutes with intent specified are inherited by pages in the flow.
item_type:
type: NestedObject
properties:
- name: 'name'
type: String
description: |
The unique identifier of this transition route.
output: true
- name: 'intent'
type: String
description: |
The unique identifier of an Intent.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/intents/<Intent ID>. Indicates that the transition can only happen when the given intent is matched. At least one of intent or condition must be specified. When both intent and condition are specified, the transition can only happen when both are fulfilled.
- name: 'condition'
type: String
description: |
The condition to evaluate against form parameters or session parameters.
At least one of intent or condition must be specified. When both intent and condition are specified, the transition can only happen when both are fulfilled.
- name: 'triggerFulfillment'
type: NestedObject
description: |
The fulfillment to call when the condition is satisfied. At least one of triggerFulfillment and target must be specified. When both are defined, triggerFulfillment is executed first.
properties:
- name: 'messages'
type: Array
description: |
The list of rich message responses to present to the user.
item_type:
type: NestedObject
properties:
- name: 'channel'
type: String
description: |
The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only the response associated with that channel will be returned.
- name: 'text'
type: NestedObject
description: |
The text response message.
properties:
- name: 'text'
type: Array
description: |
A collection of text responses.
item_type:
type: String
- name: 'allowPlaybackInterruption'
type: Boolean
description: |
Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request.
output: true
# This can be an arbitrary json blob, so we use a string instead of a NestedObject.
- name: 'payload'
type: String
description: |
A custom, platform-specific payload.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_schema.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'conversationSuccess'
type: NestedObject
description: |
Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about.
Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess.
You may set this, for example:
* In the entryFulfillment of a Page if entering the page indicates that the conversation succeeded.
* In a webhook response when you determine that you handled the customer issue.
properties:
# This can be an arbitrary json blob, so we use a string instead of a NestedObject.
- name: 'metadata'
type: String
description: |
Custom metadata. Dialogflow doesn't impose any structure on this.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_schema.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'outputAudioText'
type: NestedObject
description: |
A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.
properties:
- name: 'allowPlaybackInterruption'
type: Boolean
description: |
Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request.
output: true
- name: 'text'
type: String
description: |
The raw text to be synthesized.
- name: 'ssml'
type: String
description: |
The SSML text to be synthesized. For more information, see SSML.
- name: 'liveAgentHandoff'
type: NestedObject
description: |
Indicates that the conversation should be handed off to a live agent.
Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures.
You may set this, for example:
* In the entryFulfillment of a Page if entering the page indicates something went extremely wrong in the conversation.
* In a webhook response when you determine that the customer issue can only be handled by a human.
properties:
# This can be an arbitrary json blob, so we use a string instead of a NestedObject.
- name: 'metadata'
type: String
description: |
Custom metadata. Dialogflow doesn't impose any structure on this.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_schema.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'playAudio'
type: NestedObject
description: |
Specifies an audio clip to be played by the client as part of the response.
properties:
- name: 'audioUri'
type: String
description: |
URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it.
required: true
- name: 'allowPlaybackInterruption'
type: Boolean
description: |
Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request.
output: true
- name: 'telephonyTransferCall'
type: NestedObject
description: |
Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.
properties:
- name: 'phoneNumber'
type: String
description: |
Transfer the call to a phone number in E.164 format.
required: true
- name: 'webhook'
type: String
description: |
The webhook to call. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/webhooks/<Webhook ID>.
- name: 'returnPartialResponses'
type: Boolean
description: |
Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes the webhook. Warning: 1) This flag only affects streaming APIs. Responses are still queued and returned once in non-streaming APIs. 2) The flag can be enabled in any fulfillment, but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks.
- name: 'tag'
type: String
description: |
The tag used by the webhook to identify which fulfillment is being called. This field is required if webhook is specified.
- name: 'setParameterActions'
type: Array
description: |
Set parameter values before executing the webhook.
item_type:
type: NestedObject
properties:
- name: 'parameter'
type: String
description: |
Display name of the parameter.
- name: 'value'
type: String
description: |
The new JSON-encoded value of the parameter. A null value clears the parameter.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_value.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'conditionalCases'
type: Array
description: |
Conditional cases for this fulfillment.
item_type:
type: NestedObject
properties:
# This object has a recursive schema so we use a string instead of a NestedObject
- name: 'cases'
type: String
description: |
A JSON encoded list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected; all the rest are ignored.
See [Case](https://cloud.google.com/dialogflow/cx/docs/reference/rest/v3/Fulfillment#case) for the schema.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_value.tmpl'
validation:
function: 'validation.StringIsJSON'
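# A hedged example of the JSON expected in `cases`, mirroring the Case schema from the
# REST reference linked above (a condition plus caseContent entries); the condition and
# message text here are illustrative:
#
#   conditional_cases {
#     cases = jsonencode([
#       {
#         condition   = "$session.params.user-age >= 18"
#         caseContent = [
#           { message = { text = { text = ["Welcome!"] } } }
#         ]
#       }
#     ])
#   }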
- name: 'targetPage'
type: String
description: |
The target page to transition to.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/pages/<Page ID>.
- name: 'targetFlow'
type: String
description: |
The target flow to transition to.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>.
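# A hedged HCL sketch of a single transition route as configured through the generated
# google_dialogflow_cx_flow resource; the intent and page resources referenced below are
# hypothetical and only illustrate the expected reference format:
#
#   transition_routes {
#     intent = google_dialogflow_cx_intent.greeting.id
#     trigger_fulfillment {
#       messages {
#         text {
#           text = ["Hello! How can I help you today?"]
#         }
#       }
#       set_parameter_actions {
#         parameter = "greeted"
#         value     = jsonencode(true)
#       }
#     }
#     target_page = google_dialogflow_cx_page.welcome.id
#   }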
- name: 'eventHandlers'
type: Array
description: |
A flow's event handlers serve two purposes:
They are responsible for handling events (e.g. no match, webhook errors) in the flow.
They are inherited by every page's [event handlers][Page.event_handlers], which can be used to handle common events regardless of the current page. Event handlers defined in the page have higher priority than those defined in the flow.
Unlike transitionRoutes, these handlers are evaluated on a first-match basis. The first one that matches the event gets executed; the rest are ignored.
default_from_api: true
item_type:
type: NestedObject
properties:
- name: 'name'
type: String
description: |
The unique identifier of this event handler.
output: true
- name: 'event'
type: String
description: |
The name of the event to handle.
- name: 'triggerFulfillment'
type: NestedObject
description: |
The fulfillment to call when the event occurs. Handling webhook errors with a fulfillment that itself calls a webhook could cause an infinite loop; it is invalid to specify such a fulfillment for a handler handling webhook errors.
properties:
- name: 'messages'
type: Array
description: |
The list of rich message responses to present to the user.
item_type:
type: NestedObject
properties:
- name: 'channel'
type: String
description: |
The channel which the response is associated with. Clients can specify the channel via QueryParameters.channel, and only the response associated with that channel will be returned.
- name: 'text'
type: NestedObject
description: |
The text response message.
properties:
- name: 'text'
type: Array
description: |
A collection of text responses.
item_type:
type: String
- name: 'allowPlaybackInterruption'
type: Boolean
description: |
Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request.
output: true
- name: 'payload'
type: String
description: |
A custom, platform-specific payload.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_schema.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'conversationSuccess'
type: NestedObject
description: |
Indicates that the conversation succeeded, i.e., the bot handled the issue that the customer talked to it about.
Dialogflow only uses this to determine which conversations should be counted as successful and doesn't process the metadata in this message in any way. Note that Dialogflow also considers conversations that get to the conversation end page as successful even if they don't return ConversationSuccess.
You may set this, for example:
* In the entryFulfillment of a Page if entering the page indicates that the conversation succeeded.
* In a webhook response when you determine that you handled the customer issue.
properties:
- name: 'metadata'
type: String
description: |
Custom metadata. Dialogflow doesn't impose any structure on this.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_schema.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'outputAudioText'
type: NestedObject
description: |
A text or ssml response that is preferentially used for TTS output audio synthesis, as described in the comment on the ResponseMessage message.
properties:
- name: 'allowPlaybackInterruption'
type: Boolean
description: |
Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request.
output: true
- name: 'text'
type: String
description: |
The raw text to be synthesized.
- name: 'ssml'
type: String
description: |
The SSML text to be synthesized. For more information, see SSML.
- name: 'liveAgentHandoff'
type: NestedObject
description: |
Indicates that the conversation should be handed off to a live agent.
Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures.
You may set this, for example:
* In the entryFulfillment of a Page if entering the page indicates something went extremely wrong in the conversation.
* In a webhook response when you determine that the customer issue can only be handled by a human.
properties:
- name: 'metadata'
type: String
description: |
Custom metadata. Dialogflow doesn't impose any structure on this.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_schema.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'playAudio'
type: NestedObject
description: |
Specifies an audio clip to be played by the client as part of the response.
properties:
- name: 'audioUri'
type: String
description: |
URI of the audio clip. Dialogflow does not impose any validation on this value. It is specific to the client that reads it.
required: true
- name: 'allowPlaybackInterruption'
type: Boolean
description: |
Whether the playback of this message can be interrupted by the end user's speech and the client can then start the next Dialogflow request.
output: true
- name: 'telephonyTransferCall'
type: NestedObject
description: |
Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.
properties:
- name: 'phoneNumber'
type: String
description: |
Transfer the call to a phone number in E.164 format.
required: true
- name: 'webhook'
type: String
description: |
The webhook to call. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/webhooks/<Webhook ID>.
- name: 'returnPartialResponses'
type: Boolean
description: |
Whether Dialogflow should return currently queued fulfillment response messages in streaming APIs. If a webhook is specified, it happens before Dialogflow invokes the webhook. Warning: 1) This flag only affects streaming APIs. Responses are still queued and returned once in non-streaming APIs. 2) The flag can be enabled in any fulfillment, but only the first 3 partial responses will be returned. You may only want to apply it to fulfillments that have slow webhooks.
- name: 'tag'
type: String
description: |
The tag used by the webhook to identify which fulfillment is being called. This field is required if webhook is specified.
- name: 'setParameterActions'
type: Array
description: |
Set parameter values before executing the webhook.
item_type:
type: NestedObject
properties:
- name: 'parameter'
type: String
description: |
Display name of the parameter.
- name: 'value'
type: String
description: |
The new JSON-encoded value of the parameter. A null value clears the parameter.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_value.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'conditionalCases'
type: Array
description: |
Conditional cases for this fulfillment.
item_type:
type: NestedObject
properties:
- name: 'cases'
type: String
description: |
A JSON encoded list of cascading if-else conditions. Cases are mutually exclusive. The first one with a matching condition is selected; all the rest are ignored.
See [Case](https://cloud.google.com/dialogflow/cx/docs/reference/rest/v3/Fulfillment#case) for the schema.
state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }'
custom_flatten: 'templates/terraform/custom_flatten/json_schema.tmpl'
custom_expand: 'templates/terraform/custom_expand/json_value.tmpl'
validation:
function: 'validation.StringIsJSON'
- name: 'targetPage'
type: String
description: |
The target page to transition to.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/pages/<Page ID>.
- name: 'targetFlow'
type: String
description: |
The target flow to transition to.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>.
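# A hedged sketch of a flow-level event handler; "sys.no-match-default" is one of the
# built-in events handled by the Default Start Flow and is shown here for illustration:
#
#   event_handlers {
#     event = "sys.no-match-default"
#     trigger_fulfillment {
#       messages {
#         text {
#           text = ["Sorry, I didn't catch that. Could you say it again?"]
#         }
#       }
#     }
#   }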
- name: 'transitionRouteGroups'
type: Array
description: |
A flow's transition route groups serve two purposes:
They are responsible for matching the user's first utterances in the flow.
They are inherited by every page's [transition route groups][Page.transition_route_groups]. Transition route groups defined in the page have higher priority than those defined in the flow.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/transitionRouteGroups/<TransitionRouteGroup ID>.
item_type:
type: String
- name: 'nluSettings'
type: NestedObject
description: |
NLU related settings of the flow.
properties:
- name: 'modelType'
type: Enum
description: |
Indicates the type of NLU model.
* MODEL_TYPE_STANDARD: Use standard NLU model.
* MODEL_TYPE_ADVANCED: Use advanced NLU model.
enum_values:
- 'MODEL_TYPE_STANDARD'
- 'MODEL_TYPE_ADVANCED'
- name: 'classificationThreshold'
type: Double
description: |
To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold.
If the returned score value is less than the threshold value, then a no-match event will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.
- name: 'modelTrainingMode'
type: Enum
description: |
Indicates NLU model training mode.
* MODEL_TRAINING_MODE_AUTOMATIC: NLU model training is automatically triggered when a flow gets modified. User can also manually trigger model training in this mode.
* MODEL_TRAINING_MODE_MANUAL: User needs to manually trigger NLU model training. Best for large flows whose models take a long time to train.
enum_values:
- 'MODEL_TRAINING_MODE_AUTOMATIC'
- 'MODEL_TRAINING_MODE_MANUAL'
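# A hedged sketch of the corresponding nlu_settings block; raising classification_threshold
# trades recall for precision, and 0.0 falls back to the 0.3 default described above:
#
#   nlu_settings {
#     model_type               = "MODEL_TYPE_STANDARD"
#     model_training_mode      = "MODEL_TRAINING_MODE_AUTOMATIC"
#     classification_threshold = 0.3
#   }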
- name: 'advancedSettings'
type: NestedObject
description: |
Hierarchical advanced settings for this flow. The settings exposed at the lower level override the settings exposed at the higher level.
Hierarchy: Agent->Flow->Page->Fulfillment/Parameter.
properties:
- name: 'audioExportGcsDestination'
type: NestedObject
description: |
If present, incoming audio is exported by Dialogflow to the configured Google Cloud Storage destination. Exposed at the following levels:
* Agent level
* Flow level
properties:
- name: 'uri'
type: String
description: |
The Google Cloud Storage URI for the exported objects. Whether it is a full object name or just a prefix, its usage depends on the Dialogflow operation.
Format: gs://bucket/object-name-or-prefix
- name: 'speechSettings'
type: NestedObject
description: |
Settings for speech to text detection. Exposed at the following levels:
* Agent level
* Flow level
* Page level
* Parameter level
properties:
- name: 'endpointerSensitivity'
type: Integer
description: |
Sensitivity of the speech model that detects the end of speech. Scale from 0 to 100.
- name: 'noSpeechTimeout'
type: String
description: |
Timeout before detecting no speech.
A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
- name: 'useTimeoutBasedEndpointing'
type: Boolean
description: |
Use timeout based endpointing, interpreting endpointer sensitivity as seconds of timeout value.
- name: 'models'
type: KeyValuePairs
description: |
Mapping from language to Speech-to-Text model. The mapped Speech-to-Text model will be selected for requests from its corresponding language. For more information, see [Speech models](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models).
An object containing a list of **"key": value** pairs. Example: **{ "name": "wrench", "mass": "1.3kg", "count": "3" }**.
- name: 'dtmfSettings'
type: NestedObject
description: |
Define behaviors for DTMF (dual tone multi frequency). DTMF settings do not override each other; DTMF settings set at different levels define DTMF detections running in parallel. Exposed at the following levels:
* Agent level
* Flow level
* Page level
* Parameter level
properties:
- name: 'enabled'
type: Boolean
description: |
If true, incoming audio is processed for DTMF (dual tone multi frequency) events. For example, if the caller presses a button on their telephone keypad and DTMF processing is enabled, Dialogflow will detect the event (e.g. a "3" was pressed) in the incoming audio and pass the event to the bot to drive business logic (e.g. when 3 is pressed, return the account balance).
- name: 'maxDigits'
type: Integer
description: |
Max length of DTMF digits.
- name: 'finishDigit'
type: String
description: |
The digit that terminates a DTMF digit sequence.
- name: 'loggingSettings'
type: NestedObject
ignore_read: true
# Ignore read because the API does not return loggingSettings; it only accepts them in the create/update API calls
description: |
Settings for logging: Dialogflow History, Contact Center messages, StackDriver logs, and speech logging. Exposed at the following levels:
* Agent level
properties:
- name: 'enableStackdriverLogging'
type: Boolean
description: |
Enables Google Cloud Logging.
- name: 'enableInteractionLogging'
type: Boolean
description: |
Enables Dialogflow interaction logging.
- name: 'enableConsentBasedRedaction'
type: Boolean
description: |
Enables consent-based end-user input redaction. If true, a pre-defined session parameter **$session.params.conversation-redaction** will be used to determine whether the utterance should be redacted.
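# A hedged end-to-end sketch of the advanced_settings block; because loggingSettings is not
# returned on read (see ignore_read above), the dialogflowcx_flow_basic example lists it
# under ignore_read_extra for import verification. Bucket and speech model names below are
# illustrative:
#
#   advanced_settings {
#     audio_export_gcs_destination {
#       uri = "gs://dialogflowcx-bucket/prefix-"
#     }
#     speech_settings {
#       endpointer_sensitivity        = 30
#       no_speech_timeout             = "3.5s"
#       use_timeout_based_endpointing = true
#       models = {
#         "en-us" = "telephony"
#       }
#     }
#     dtmf_settings {
#       enabled      = true
#       max_digits   = 1
#       finish_digit = "#"
#     }
#     logging_settings {
#       enable_stackdriver_logging     = true
#       enable_interaction_logging     = true
#       enable_consent_based_redaction = true
#     }
#   }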