AWS Machine Learning Blog

Announcing Visual Conversation Builder for Amazon Lex

Amazon Lex is a service for building conversational interfaces using voice and text. Amazon Lex provides high-quality speech recognition and language understanding capabilities. With Amazon Lex, you can add sophisticated, natural language bots to new and existing applications. Amazon Lex reduces multi-platform development efforts, allowing you to easily publish your speech or text chatbots to mobile devices and multiple chat services, like Facebook Messenger, Slack, Kik, or Twilio SMS.

Today, we added a Visual Conversation Builder (VCB) to Amazon Lex—a drag-and-drop conversation builder that provides an easy-to-use interface to visualize and build conversation flows. Users can connect conversation blocks and iterate on conversation designs in a no-code environment. There are three main benefits of the VCB:

  • It’s easier to collaborate
  • It simplifies conversational design and testing
  • It reduces code complexity

In this post, we introduce the VCB, show how to use it, and share customer success stories.

Overview of the Visual Conversation Builder

In addition to the already available menu-based editor and Amazon Lex APIs, the visual builder gives a single view of an entire conversation flow, simplifying bot design and reducing dependency on development teams. Conversational designers, UX designers, and product managers—anyone with an interest in building a conversation on Amazon Lex—can utilize the builder.

Designers and developers can now collaborate and build conversations easily in the VCB without coding the business logic behind the conversation. The visual builder helps accelerate time to market for Amazon Lex-based solutions by providing better collaboration, easier iterations of the conversation design, and reduced code complexity.

With the visual builder, it’s now possible to view the entire conversation flow of an intent at a glance and get visual feedback as changes are made. Changes to your design are instantly reflected in the view, and any effects on dependencies or branching logic are immediately apparent to the designer. You can use the visual builder to make any changes to the intent, such as adding utterances, slots, prompts, or responses. Each block type has its own settings that you can configure to tailor the flow of the conversation.

Previously, complex branching of conversations required implementation of AWS Lambda—a serverless, event-driven compute service—to achieve the desired conversation paths. The visual builder reduces the need for Lambda integrations: designers can branch conversations without writing Lambda code, as shown in the following example. This helps decouple conversation design activities from Lambda business logic and integrations. You can still use the existing intent editor in conjunction with the visual builder, or switch between them at any time when creating and modifying intents.

The VCB is a no-code method of designing complex conversations. For example, you can now add a confirmation prompt in an intent and branch based on a Yes or No response to different paths in the flow without code. Where future Lambda business logic is needed, conversation designers can add placeholder blocks into the flow so developers know what needs to be addressed through code. Code hook blocks with no Lambda functions attached automatically take the Success pathway so testing of the flow can continue until the business logic is completed and implemented. In addition to branching, the visual builder offers designers the ability to go to another intent as part of the conversation flow.
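For comparison, the kind of branching that previously required a dialog code hook might have looked like the following Lambda sketch. The handler, slot names, and branching rule here are illustrative only and are not part of the Order Flowers sample; the event and response shapes follow the Amazon Lex V2 Lambda interface.

def lambda_handler(event, context):
    """Illustrative Amazon Lex V2 dialog code hook.

    Branching like this is what the Visual Conversation Builder now lets
    designers express without writing Lambda code.
    """
    intent = event["sessionState"]["intent"]
    slots = intent["slots"]

    # Hypothetical rule: if the customer asked for roses, elicit the pickup
    # date next with a custom message; otherwise hand control back to the bot.
    flower = slots.get("FlowerType")
    if flower and flower["value"]["interpretedValue"].lower() == "roses":
        return {
            "sessionState": {
                "dialogAction": {"type": "ElicitSlot", "slotToElicit": "PickupDate"},
                "intent": intent,
            },
            "messages": [{
                "contentType": "PlainText",
                "content": "Roses are popular today. What day would you like to pick them up?",
            }],
        }

    # Delegate lets Amazon Lex continue with the bot's configured flow.
    return {"sessionState": {"dialogAction": {"type": "Delegate"}, "intent": intent}}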

Upon saving, the VCB automatically scans the conversation flow to detect any errors. In addition, it auto-detects missing failure paths and can automatically add them to the flow, as shown in the following example.

Using the Visual Conversation Builder

You can access the VCB via the Amazon Lex console by navigating to a bot and editing an existing intent or creating a new one. On the intent page, you can now switch between the visual builder interface and the traditional intent editor, as shown in the following screenshot.

For an existing intent, the visual builder displays the conversation flow graphically on the canvas. For a new intent, you start with a blank canvas: drag the components you want onto the canvas and connect them to create the conversation flow.

The visual builder has three main components: blocks, ports, and edges. Let’s look at how these are used together to create a conversation from beginning to end within an intent.

The basic building unit of a conversation flow is called a block. The top menu of the visual builder contains all the blocks you are able to use. To add a block to a conversation flow, drag it from the top menu onto the flow.

Each block has a specific functionality to handle different use cases of a conversation. The currently available block types are as follows:

  • Start – The root or first block of the conversation flow that can also be configured to send an initial response
  • Get slot value – Tries to elicit a value for a single slot
  • Condition – Can contain up to four custom branches (with conditions) and one default branch
  • Dialog code hook – Handles invocation of the dialog Lambda function and includes bot responses based on dialog Lambda functions succeeding, failing, or timing out
  • Confirmation – Queries the customer prior to fulfillment of the intent and includes bot responses based on the customer saying yes or no to the confirmation prompt
  • Fulfillment – Handles fulfillment of the intent and can be configured to invoke Lambda functions and respond with messages if fulfillment succeeds or fails
  • Closing response – Allows the bot to respond with a message before ending the conversation
  • Wait for user input – Captures input from the customer and switches to another intent based on the utterance
  • End conversation – Indicates the end of the conversation flow

Take the Order Flowers bot as an example. The OrderFlowers intent, when viewed in the visual builder, uses five blocks: a Start block, three Get slot value blocks, and a Confirmation block.
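Those blocks correspond to intent settings that also exist in the Amazon Lex model-building APIs, for example through boto3’s lexv2-models client. The following is a minimal, hedged sketch of that mapping rather than a complete intent definition: the bot, intent, and slot IDs are placeholders, and the request shapes are abbreviated, so check them against the current API reference.

import boto3

lex_models = boto3.client("lexv2-models")

# Placeholder identifiers -- substitute your own bot, intent, and slot IDs.
BOT_ID, INTENT_ID = "BOTID12345", "INTENTID1234"
SLOT_IDS = {
    "FlowerType": "SLOTID00001",
    "PickupDate": "SLOTID00002",
    "PickupTime": "SLOTID00003",
}

# In practice, call describe_intent first and modify the returned
# configuration, because UpdateIntent replaces the intent's settings.
lex_models.update_intent(
    botId=BOT_ID,
    botVersion="DRAFT",
    localeId="en_US",
    intentId=INTENT_ID,
    intentName="OrderFlowers",
    # Rough analogue of chaining the three Get slot value blocks in order.
    slotPriorities=[
        {"priority": 1, "slotId": SLOT_IDS["FlowerType"]},
        {"priority": 2, "slotId": SLOT_IDS["PickupDate"]},
        {"priority": 3, "slotId": SLOT_IDS["PickupTime"]},
    ],
    # Rough analogue of the Confirmation block: a yes/no prompt before fulfillment.
    intentConfirmationSetting={
        "promptSpecification": {
            "messageGroups": [{"message": {"plainTextMessage": {
                "value": "You'd like {FlowerType} picked up at {PickupTime} on {PickupDate}, right?"
            }}}],
            "maxRetries": 2,
        },
        "declinationResponse": {
            "messageGroups": [{"message": {"plainTextMessage": {
                "value": "Okay, I will not place your order."
            }}}],
        },
    },
)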

Each block can contain one or more ports, which are used to connect one block to another. Blocks contain an input port and one or more output ports based on the desired paths for states such as success, timeout, and error.

The connection between the output port of one block and the input port of another block is referred to as an edge.

In the OrderFlowers intent, when the conversation starts, the Start output port is connected to the Get slot value: FlowerType input port using an edge. Each Get slot value block is connected using ports and edges to create a sequence in the conversation flow, which ensures the intent has all the slot values it needs to place the order.

Notice that currently there is no edge connected to the failure output port of these blocks, but the builder will automatically add these if you choose Save intent and then choose Confirm in the pop-up Auto add block and edges for failure paths. The visual builder then adds an End conversation block and a Go to intent block, connecting the failure and error output ports to Go to intent and connecting the Yes/No ports of the Confirmation block to End conversation.

After the builder adds the blocks and edges, the intent is saved and the conversation flow can be built and tested.

Let’s add a Welcome intent to the bot using the visual builder. From the OrderFlowers intent visual builder, choose Back to intents list in the navigation pane. On the Intents page, choose Add intent followed by Add empty intent. In the Intent name field, enter Welcome and choose Add.

Switch to the Visual builder tab and you will see an empty intent, with only the Start block currently on the canvas. To start, add some utterances to this intent so that the bot will be able to direct users to the Welcome intent. Choose the edit button of the Start block and scroll down to Sample utterances. Add the following utterances to this intent and then close the block:

  • Can you help me?
  • Hi
  • Hello
  • I need help

Now let’s add a response for the bot to give when it hits this intent. Because the Welcome intent won’t be processing any logic, we can drag a Closing response block into the canvas to add this message. After you add the block, choose the edit icon on the block and enter the following response:

Hi! I am the Order Flowers Bot. How can I help you today?
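As an aside, if you also manage bots with the Amazon Lex model-building APIs, these two steps (the sample utterances and this closing response) could be scripted roughly as follows. This is a minimal sketch: the bot ID is a placeholder and the request shapes are abbreviated, so verify them against the current API documentation.

import boto3

lex_models = boto3.client("lexv2-models")

# Placeholder identifiers for the bot being edited.
BOT_ID, LOCALE = "BOTID12345", "en_US"

# Create the Welcome intent with the same utterances and closing response
# configured in the visual builder above.
lex_models.create_intent(
    botId=BOT_ID,
    botVersion="DRAFT",
    localeId=LOCALE,
    intentName="Welcome",
    sampleUtterances=[
        {"utterance": "Can you help me?"},
        {"utterance": "Hi"},
        {"utterance": "Hello"},
        {"utterance": "I need help"},
    ],
    intentClosingSetting={
        "closingResponse": {
            "messageGroups": [{"message": {"plainTextMessage": {
                "value": "Hi! I am the Order Flowers Bot. How can I help you today?"
            }}}],
        },
    },
)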

The canvas should now have two blocks, but they aren’t connected to each other. We can connect the ports of these two blocks using an edge.

To connect the two ports, simply click and drag from the No response output port of the Start block to the input port of the Closing response block.

At this point, you can complete the conversation flow in two different ways:

  • First, you can manually add the End conversation block and connect it to the Closing response block.
  • Alternatively, choose Save intent and then choose Confirm to have the builder create this block and connection for you.

After the intent is saved, choose Build and wait for the build to complete, then choose Test.

The bot will now properly greet the customer if an utterance matches this newly created intent.
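If you prefer to script this build-and-test step, a rough boto3 equivalent might look like the following sketch. The bot ID is a placeholder, and the test alias ID shown is an assumption to confirm for your own bot in the Amazon Lex console.

import time

import boto3

# Placeholder bot ID; TSTALIASID is assumed to be the built-in test alias.
BOT_ID, TEST_ALIAS_ID, LOCALE = "BOTID12345", "TSTALIASID", "en_US"

lex_models = boto3.client("lexv2-models")
lex_runtime = boto3.client("lexv2-runtime")

# Build the DRAFT version of the locale, then poll until it finishes.
lex_models.build_bot_locale(botId=BOT_ID, botVersion="DRAFT", localeId=LOCALE)
while True:
    status = lex_models.describe_bot_locale(
        botId=BOT_ID, botVersion="DRAFT", localeId=LOCALE
    )["botLocaleStatus"]
    if status in ("Built", "Failed"):
        break
    time.sleep(5)

# Send a test utterance that should match the new Welcome intent.
response = lex_runtime.recognize_text(
    botId=BOT_ID,
    botAliasId=TEST_ALIAS_ID,
    localeId=LOCALE,
    sessionId="vcb-smoke-test",
    text="Hello",
)
for message in response.get("messages", []):
    print(message["content"])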

Customer stories

NeuraFlash is an Advanced AWS Partner with over 40 collective years of experience in the voice and automation space. With a dedicated team of Conversational Experience Designers, Speech Scientists, and AWS developers, NeuraFlash helps customers take advantage of the power of Amazon Lex in their contact centers.

“One of our key focus areas is helping customers leverage AI capabilities for developing conversational interfaces. These interfaces often require specialized bot configuration skills to build effective flows. With the Visual Conversation Builder, our designers can quickly and easily build conversational interfaces, allowing them to experiment at a faster rate and deliver quality products for our customers without requiring developer skills. The drag-and-drop UI and the visual conversation flow is a game-changer for reinventing the contact center experience.”

The SmartBots ML-powered platform lies at the core of the design, prototyping, testing, validation, and deployment of AI-driven chatbots. This platform supports the development of custom enterprise bots that can easily integrate with any application—even an enterprise’s custom application ecosystem.

“The Visual Conversation Builder’s easy-to-use drag-and-drop interface enables us to easily onboard Amazon Lex, and build complex conversational experiences for our customers’ contact centers. With this new functionality, we can improve Interactive Voice Response (IVR) systems faster and with minimal effort. Implementing new technology can be difficult with a steep learning curve, but we found that the drag-and-drop features were easy to understand, allowing us to realize value immediately.”

Conclusion

The Visual Conversation Builder for Amazon Lex is now generally available, for free, in all AWS Regions where Amazon Lex V2 operates.

Additionally, on August 17, 2022, Amazon Lex V2 released a change to the way conversations are managed with the user. This change gives you more control over the path that the user takes through the conversation. For more information, see Understanding conversation flow management. Note that bots created before August 17, 2022, do not support the VCB for creating conversation flows.

To learn more, see Amazon Lex FAQs and the Amazon Lex V2 Developer Guide. Please send feedback to AWS re:Post for Amazon Lex or through your usual AWS support contacts.


About the authors

Thomas Rindfuss is a Sr. Solutions Architect on the Amazon Lex team. He invents, develops, prototypes, and evangelizes new technical features and solutions for Language AI services that improve the customer experience and ease adoption.

Austin Johnson is a Solutions Architect at AWS, helping customers on their cloud journey. He is passionate about building and utilizing conversational AI platforms to add sophisticated, natural language interfaces to customers’ applications.