Building a Conversational Chatbot for Slack using Rasa and Python - Part 1

A guide to creating a chatbot with the Rasa stack and Python.

It’s time to go beyond ‘Sorry, I didn’t get that’ type bots and build AI assistants that scale using machine learning: Rasa.

Conversational AI systems are becoming an indispensable part of the human ecosystem.

Well-known examples of conversational AI include Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana.

Conversational chatbots have come a long way from their rule-based predecessors, and almost every tech company today employs one or more chatty assistants.

A conversational chatbot understands the context of the conversation, can handle any user goal gracefully, and helps accomplish it as best as possible.

This doesn’t always mean that the bot will be able to answer all questions but it can handle the conversation well.

Objective

In this article, we are going to build a chatbot called ‘Robo’, capable of checking in on people’s mood and taking the necessary actions to cheer them up.

We will then deploy it to Slack.

It will be a fully functional Slackbot capable of listening and responding to your requests.

The demo screenshot below should motivate you enough to build one of your own.

The source for this article is a wonderful talk that Tom Bocklisch, Head of Engineering @ Rasa, gave at PyData Berlin.

Requirements

We mainly require the installation of the Rasa Stack and a language model.

The language model is going to be used to parse incoming text messages and extract the necessary information.

We will be working with the spaCy language model.

Rasa Stack

Rasa Stack is a set of open source machine learning tools for developers to create contextual AI assistants and chatbots.

It is the leading open source machine learning toolkit that lets developers expand bots beyond answering simple questions with minimal training data.

The bots are based on a machine learning model trained on example conversations.

It consists of two frameworks:

Rasa NLU: a library for natural language understanding with intent classification and entity extraction.

This helps the chatbot to understand what the user is saying.

Rasa Core: a chatbot framework with machine learning-based dialogue management that predicts the next best action based on the input from NLU, the conversation history, and the training data.

Rasa has great documentation including some interactive examples to easily grasp the subject.

Installations

We will be using a Jupyter notebook for running the code.

However, I would recommend using Google’s Colaboratory since it doesn’t require any setup and the code is executed in a virtual machine dedicated to your account.

The only catch is that virtual machines are recycled when idle for a while, and have a maximum lifetime enforced by the system.

Primary Installations

You’ll need Rasa NLU, Rasa Core and a spaCy language model.

#Rasa NLU
python -m pip install rasa_nlu[spacy]
(https://rasa.com/docs/nlu/installation/)

#Rasa Core
python -m pip install -U rasa_core==0.9.6
(https://rasa.com/docs/core/installation/)

#Language Model
python -m spacy download en_core_web_md
python -m spacy link en_core_web_md en --force;

The Rasa Core version used in this article is not the latest one.

Working with the latest one was throwing errors while executing the code so I had to work with the older version.

I am currently working to get the code running on the latest version.

We will install the other dependencies as per their requirement.

1. Teaching the bot to understand user inputs using Rasa NLU

NLU deals with teaching a chatbot how to understand user inputs.

Here is an example of a conversation that we would like to have with our bot.

To be able to achieve this, we are going to build a Rasa NLU model and feed in the training data which the user has to prepare.

The model will then convert the data into a structured format consisting of entities and intents.

1. Preparing the NLU Training Data

The training data consists of a list of messages that one expects to receive from the user.

This data is annotated with the intent and entities that Rasa NLU should learn to extract.

Let’s understand the concepts of intent and entities with an example.

Intent: The intent describes what the message is about. For instance, for a weather prediction bot, the sentence “What’s the weather like tomorrow?” has a request_weather intent.

Entity: Pieces of information which help a chatbot understand what specifically a user is asking about by recognising the structured data in the sentence.

For instance, in a restaurant-search request like “Show me a Mexican restaurant in the centre of town”, cuisine and location are both extracted entities.
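In Rasa NLU’s markdown training format, entities are annotated inline, with the entity value in square brackets and the entity name in parentheses. The snippet below is a small illustrative sketch: the inform intent and animal entity mirror what the bot’s stories use later, but these particular example sentences are assumptions rather than an excerpt from the article’s data set.

## intent:inform
- I would like to see a picture of a [dog](animal)
- show me a [cat](animal)
- a [bird](animal) would cheer me up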

Below is an extract from the training data.

You can also add some spelling mistakes or slang, since that will give a flavour of the spoken language to the bot.

For the entire training data, refer to the notebook.

nlu_md = """## intent:greet- hey- hello there- hi- hello there## intent:goodbye- cu- good by- cee you later## intent:mood_unhappy- my day was horrible- I am sad- I don't feel very well- I am disappointed%store nlu_md > nlu.

mdThe training data will be written to nlu.

md file and stored in the same directory as your notebook.

Training data is usually stored in a markdown file.

2. Defining the NLU Model Configuration

Rasa NLU has a number of different components, which together make a pipeline.

Once the training data is ready, we can feed it to the NLU model pipeline.

All the components that are listed in the pipeline will be trained one after another.

You can read more about the pipelines here.

This configuration file contains the settings that will be used by the NLU model. It is important for training because it provides several of the key parameters used while building the model.
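The configuration file itself isn’t reproduced in this extract. For the older Rasa NLU release used here, with the spaCy backend, a minimal config.yml might look like the sketch below; the spacy_sklearn pipeline shorthand is an assumption based on the defaults of that era rather than the exact file from the notebook.

config = """
language: "en"

pipeline: "spacy_sklearn"
"""

%store config > config.yml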

3. Training the NLU Model

It’s time to train our model to recognise user inputs so that when you send a message like “hello” to your bot, it will recognise this as a greet intent and when you send ‘bye’, it recognises it as a goodbye intent.

The trained model files will be stored at the path ‘./models/nlu/current’.
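The training cell itself is not shown in this extract. A minimal sketch for the older rasa_nlu API used here might look as follows; the exact module paths and persist() arguments are assumptions that can differ slightly between releases, so treat it as a guide rather than a drop-in cell.

from rasa_nlu.training_data import load_data
from rasa_nlu.model import Trainer
from rasa_nlu import config

# load the annotated examples from nlu.md
training_data = load_data('nlu.md')

# build a trainer from the pipeline defined in config.yml
trainer = Trainer(config.load("config.yml"))

# train the model and keep an interpreter we can query directly
interpreter = trainer.train(training_data)

# persist the trained model under ./models/nlu
model_directory = trainer.persist('./models/nlu', fixed_model_name="current")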

4. Evaluating the NLU Model

It is time to test how our model performs.

Let’s pass in some random messages.

import json

# small helper to make dict dumps a bit prettier
def pprint(o):
    print(json.dumps(o, indent=2))

pprint(interpreter.parse("I am unhappy"))

Our model has performed well.

Let us now evaluate it on a test data set.

However, for our purpose, let’s evaluate it on the data at hand, i.e. nlu.md.

from rasa_nlu.evaluate import run_evaluation

run_evaluation("nlu.md", model_directory)

We get an intent confusion matrix along with various evaluation results.

We have successfully created a basic bot that can understand natural language but cannot yet hold a dialogue.

It’s time to add the dialogue capabilities to our bot.

2. Teaching the bot to respond using Rasa Core

Our bot is now capable of understanding what the user is saying, i.e. what our mood is like: is it happy or sad?

Now the next task would be to make the Bot respond to messages.

In our case, it would be to fetch an image of a dog, cat or a bird, depending upon the user’s choice, to cheer them up.

We will teach ‘robo’ to make responses by training a dialogue management model using Rasa Core.

1. Writing Stories

The training data for dialogue management models is called stories.

A story consists of an actual piece of conversation that takes place between a user and the bot.

The user’s inputs are expressed as intents (and the corresponding entities), and the chatbot’s responses are expressed as actions.

Let’s see what a typical story looks like. This is just an excerpt; for the full data, refer to the notebook.

stories_md = """## happy path * greet – utter_greet* mood_great – utter_happy* mood_affirm – utter_happy* mood_affirm – utter_goodbye ## sad path * greet – utter_greet * mood_unhappy – utter_ask_picture* inform{"animal":"dog"} – action_retrieve_image – utter_did_that_help* mood_affirm – utter_happy"""%store stories_md > stories.

mdThe format of a typical story is as follows:## denotes the start of a story and you can give it a name like happy path, sad path etc.

* denotes the messages sent by the user in the form of intents.

– denotes the action taken by the bot.

2. Defining a Domain

The domain is like the universe in which the bot lives and operates.

This includes what user inputs it should expect to get, what actions it should be able to predict, how to respond and what information to store.

The domain consists of five key parts: intents, entities, slots, actions and templates.

We are aware of the first two, let’s understand the others.

slots: slots are like placeholders for values that enable the bot to keep track of the conversation.

actions: things our bot would say or do.

templates: template strings for the things that the bot would say.

We define the domain in the form of a domain.yml file.

Here is an example domain for our bot:
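The full domain file isn’t reproduced in this extract. The sketch below is a hedged reconstruction consistent with the intents, actions and slot mentioned in this article; the template wording is illustrative, and it uses group as both the entity and slot name even though the story excerpt labels the entity animal (in practice the names need to line up so the slot gets filled).

domain_yml = """
intents:
- greet
- goodbye
- mood_affirm
- mood_great
- mood_unhappy
- inform

entities:
- group

slots:
  group:
    type: text

actions:
- utter_greet
- utter_did_that_help
- utter_happy
- utter_goodbye
- utter_ask_picture
- utter_unclear
# depending on the rasa_core version, custom actions may need to be listed
# by their Python class path instead (e.g. __main__.ActionRetrieveImage)
- action_retrieve_image

templates:
  utter_greet:
  - text: "Hey! How are you?"
  utter_did_that_help:
  - text: "Did that help you?"
  utter_happy:
  - text: "Great, carry on!"
  utter_goodbye:
  - text: "Bye"
  utter_unclear:
  - text: "Sorry, I didn't get that."
  utter_ask_picture:
  - text: "To cheer you up, I can show you a picture of a dog, a cat or a bird. Which do you choose?"
"""

%store domain_yml > domain.yml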

3. Custom Actions

Since we want our bot to make an API call to retrieve photographs of a dog, cat or a bird, depending on which was specified by the user, we need to create a custom action.

The bot will know which type of picture it should retrieve by reading the value of the slot group.
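The custom action code is not included in this extract. A minimal sketch of such an action against the old rasa_core Action class is shown below; the import path varies between releases, and the image API endpoint is a hypothetical placeholder, not the service used in the article.

import requests
from rasa_core.actions import Action

class ActionRetrieveImage(Action):
    """Fetches a picture of the animal the user asked for."""

    def name(self):
        # this name is what the stories and the domain refer to
        return 'action_retrieve_image'

    def run(self, dispatcher, tracker, domain):
        # read which kind of picture the user asked for from the slot
        group = tracker.get_slot('group')

        # hypothetical image API; swap in whichever service you prefer
        r = requests.get('https://example.com/api/{}/image/random'.format(group))
        image_url = r.json().get('url') if r.ok else None

        if image_url:
            dispatcher.utter_message("Here is something to cheer you up: {}".format(image_url))
        else:
            dispatcher.utter_message("I could not fetch a picture right now, sorry!")
        return []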

4. Training a Dialogue Model

Finally, we will train the dialogue management model, specifying the policies that should be used to train it.

For our example, we will implement a neural network in Keras which learns to predict which action to take next.

The main component of the model is a recurrent neural network (an LSTM), which maps the raw dialogue history directly to a distribution over system actions.

Sometimes you want to fall back to a fallback action like saying “Sorry, I didn’t understand that”.

To do this, add the FallbackPolicy to your policy ensemble.
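The training cell for the dialogue model isn’t shown in this extract. A sketch along the lines of the older rasa_core API might look like the following; the import paths, the agent.train() signature and the utter_unclear fallback action name are assumptions that shifted between rasa_core releases, so check the version you have installed.

from rasa_core.agent import Agent
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy
from rasa_core.policies.fallback import FallbackPolicy

# fall back to a default utterance when neither NLU nor Core is confident enough
fallback = FallbackPolicy(fallback_action_name="utter_unclear",
                          core_threshold=0.2,
                          nlu_threshold=0.1)

# the Keras (LSTM) policy learns from the stories; the memoization policy
# simply replays training conversations it has already seen
agent = Agent('domain.yml',
              policies=[MemoizationPolicy(), KerasPolicy(), fallback])

# train on the stories we wrote earlier and save the dialogue model
training_data = agent.load_data('stories.md')
agent.train(training_data)
agent.persist('models/dialogue')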

The fitted model will be saved at the path ‘models/dialogue’.

4. Time to chat

It’s time to chat with our bot. Execute the following code and start chatting.

import warnings
import ruamel.yaml

warnings.simplefilter('ignore', ruamel.yaml.error.UnsafeLoaderWarning)

from rasa_core.agent import Agent
from rasa_core.interpreter import NaturalLanguageInterpreter

interpreter = NaturalLanguageInterpreter.create(model_directory)
agent = Agent.load('models/dialogue', interpreter=interpreter)

print("Your bot is ready to talk! Type your messages here or send 'stop'")
while True:
    a = input()
    if a == 'stop':
        break
    responses = agent.handle_text(a)
    for response in responses:
        print(response["text"])

The Notebook

You can either access the notebook from GitHub or have a look below:

Conclusion

We conclude Part 1 here.

We created a chatbot that is capable of listening to the user’s input and responding contextually, and we utilised the capabilities of Rasa NLU and Rasa Core to build it with minimal training data. In Part 2, we will use the model we created to deploy the bot on Slack.

Rasa makes it really easy for users to experiment with chatbots and create them without any hassle.

So it’s time you start creating a bot for your use case with Rasa.
