Build it Yourself — Chatbot API with Keras/TensorFlow Model

Step-by-step solution with source code to build a simple chatbot on top of a Keras/TensorFlow model.

Andrejus Baranovskis · Apr 24

(Cover image source: Pixabay)

Building your own chatbot (or assistant, which is just the new trendy term for chatbot) is not as complex as you may think.

Various chatbot platforms use classification models to recognize user intent.

While you obviously get a strong head start when building a chatbot on top of an existing platform, it never hurts to study the underlying concepts and try to build one yourself.

Why not use a similar model yourself?

The main challenges of a chatbot implementation are:

- Classifying user input to recognize the intent (this can be solved with Machine Learning; I'm using Keras with a TensorFlow backend).
- Keeping context. This part is pure programming, and there is nothing much ML-related here. I'm using Node.js backend logic to track conversation context (while in context, we typically don't run intent classification; user input is treated as an answer to the chatbot's question).

Complete source code for this article, with README instructions, is available on my GitHub repo (open source).

These are the Python libraries used in the implementation. The Keras deep learning library is used to build the classification model; Keras runs training on top of the TensorFlow backend.

The Lancaster stemming library is used to collapse distinct word forms:

```python
import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()

# things we need for TensorFlow
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import SGD
import pandas as pd
import pickle
import random
```

Chatbot intents and patterns to learn are defined in a plain JSON file.

There is no need to have a huge vocabulary.

Our goal is to build a chatbot for a specific domain.

A classification model can be created for a small vocabulary too; it will be able to recognize the set of patterns provided for training (a sample of the chatbot training data is sketched below). Before we can start training the classification model, we need to build the vocabulary first.
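The exact file contents are not shown in this excerpt, so here is a hypothetical sketch of the structure the training code below expects. The 'intents', 'patterns' and 'tag' fields are the ones the code reads; the 'responses' field and the pattern texts are my assumptions, and the full file with all 9 intents is in the GitHub repo:

```python
# Hypothetical illustration of the intents structure (field names match those
# used by the training code below; 'responses' and the example texts are assumed).
# In the real implementation this data is loaded from a plain JSON file,
# e.g. intents = json.load(open('intents.json'))  # filename assumed
intents = {
    "intents": [
        {
            "tag": "greeting",
            "patterns": ["Hi", "Hello", "Good day", "How are you"],
            "responses": ["Hello, thanks for visiting"]
        },
        {
            "tag": "blood_pressure_search",
            "patterns": ["Load blood pressure for patient",
                         "Show blood pressure results for patient"],
            "responses": ["Patient ID?"]
        }
    ]
}
```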

Patterns are processed to build a vocabulary.

Each word is stemmed to produce a generic root; this helps to cover more combinations of user input:

```python
words = []
classes = []
documents = []
ignore_words = ['?']

# loop through each sentence in our intents patterns
for intent in intents['intents']:
    for pattern in intent['patterns']:
        # tokenize each word in the sentence
        w = nltk.word_tokenize(pattern)
        # add to our words list
        words.extend(w)
        # add to documents in our corpus
        documents.append((w, intent['tag']))
        # add to our classes list
        if intent['tag'] not in classes:
            classes.append(intent['tag'])

# stem and lower each word and remove duplicates
words = [stemmer.stem(w.lower()) for w in words if w not in ignore_words]
words = sorted(list(set(words)))

# sort classes
classes = sorted(list(set(classes)))

# documents = combination between patterns and intents
print(len(documents), "documents")
# classes = intents
print(len(classes), "classes", classes)
# words = all words, vocabulary
print(len(words), "unique stemmed words", words)
```

This is the output of vocabulary creation.

There are 9 intents (classes) and 82 vocabulary words:

```
45 documents
9 classes ['adverse_drug', 'blood_pressure', 'blood_pressure_search', 'goodbye', 'greeting', 'hospital_search', 'options', 'pharmacy_search', 'thanks']
82 unique stemmed words ["'s", ',', 'a', 'advers', 'al', 'anyon', 'ar', 'awesom', 'be', 'behavy', 'blood', 'by', 'bye', 'can', 'caus', 'chat', 'check', 'could', 'dat', 'day', 'detail', 'do', 'dont', 'drug', 'entry', 'find', 'for', 'giv', 'good', 'goodby', 'hav', 'hello', 'help', 'hi', 'hist', 'hospit', 'how', 'i', 'id', 'is', 'lat', 'list', 'load', 'loc', 'log', 'look', 'lookup', 'man', 'me', 'mod', 'nearby', 'next', 'nic', 'of', 'off', 'op', 'paty', 'pharm', 'press', 'provid', 'react', 'rel', 'result', 'search', 'see', 'show', 'suit', 'support', 'task', 'thank', 'that', 'ther', 'til', 'tim', 'to', 'transf', 'up', 'want', 'what', 'which', 'with', 'you']
```

Training cannot run directly on this vocabulary of words; words are meaningless to the machine.

We need to translate each sentence into a bag of words: an array containing 0s and 1s.
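As a toy illustration (not from the article), here is how one stemmed pattern maps onto a small stemmed vocabulary:

```python
# Toy illustration of the bag-of-words encoding used below (values are made up).
vocabulary = ['blood', 'hello', 'load', 'press', 'thank']   # stemmed vocabulary
pattern    = ['load', 'blood', 'press']                     # stemmed user pattern
bag = [1 if w in pattern else 0 for w in vocabulary]
print(bag)  # -> [1, 0, 1, 1, 0]
```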

The array length will be equal to the vocabulary size, and 1 is set at each position where a word from the current pattern is found:

```python
# create our training data
training = []
# create an empty array for our output
output_empty = [0] * len(classes)

# training set, bag of words for each sentence
for doc in documents:
    # initialize our bag of words
    bag = []
    # list of tokenized words for the pattern
    pattern_words = doc[0]
    # stem each word - create base word, in attempt to represent related words
    pattern_words = [stemmer.stem(word.lower()) for word in pattern_words]
    # create our bag of words array with 1, if word match found in current pattern
    for w in words:
        bag.append(1) if w in pattern_words else bag.append(0)
    # output is a '0' for each tag and '1' for current tag (for each pattern)
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1
    training.append([bag, output_row])

# shuffle our features and turn into np.array
random.shuffle(training)
training = np.array(training)

# create train and test lists. X - patterns, Y - intents
train_x = list(training[:,0])
train_y = list(training[:,1])
```

Training data: X is the pattern converted into an array [0, 1, 0, 1, …, 0], and Y is the intent converted into an array [1, 0, 0, 0, …, 0] (there is a single 1 in each intents array).

The model is built with Keras, using three layers. According to my experiments, three layers provide good results (but it all depends on the training data). The classification output will be a multiclass array, which helps to identify the encoded intent.

Softmax activation is used to produce the multiclass classification output (the result is a 0/1 array such as [1, 0, 0, …, 0], which identifies the encoded intent):

```python
# Create model - 3 layers. First layer 128 neurons, second layer 64 neurons,
# and the 3rd output layer contains a number of neurons equal to the number
# of intents, to predict the output intent with softmax
model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))
```

Compile the Keras model with the SGD optimizer:

```python
# Compile model. Stochastic gradient descent with Nesterov accelerated gradient
# gives good results for this model
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
```

Fit the model: execute training and construct the classification model.

I'm running training for 200 epochs with a batch size of 5:

```python
# Fit the model
model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
```

The model is built.
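The article does not include an evaluation step, but as a quick sanity check you can score the fitted model on the training data itself (this only confirms the patterns were learned; it says nothing about generalization):

```python
# Optional sanity check (not part of the original article):
# evaluate the fitted model on the training data itself.
loss, acc = model.evaluate(np.array(train_x), np.array(train_y), verbose=0)
print("training loss: %.4f, training accuracy: %.4f" % (loss, acc))
```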

Now we can define two helper functions.

The bow function helps to translate a user sentence into a bag-of-words array of 0s and 1s:

```python
def clean_up_sentence(sentence):
    # tokenize the pattern - split words into array
    sentence_words = nltk.word_tokenize(sentence)
    # stem each word - create short form for word
    sentence_words = [stemmer.stem(word.lower()) for word in sentence_words]
    return sentence_words

# return bag of words array: 0 or 1 for each word in the bag that exists in the sentence
def bow(sentence, words, show_details=True):
    # tokenize the pattern
    sentence_words = clean_up_sentence(sentence)
    # bag of words - matrix of N words, vocabulary matrix
    bag = [0]*len(words)
    for s in sentence_words:
        for i, w in enumerate(words):
            if w == s:
                # assign 1 if current word is in the vocabulary position
                bag[i] = 1
                if show_details:
                    print("found in bag: %s" % w)
    return(np.array(bag))
```

Check this example, translating a sentence into a bag of words:

```python
p = bow("Load blood pessure for patient", words)
print(p)
print(classes)
```

When the function finds a word from the sentence in the chatbot vocabulary, it sets 1 in the corresponding position of the array.

This array is sent to the model for classification, to identify which intent it belongs to:

```
found in bag: load
found in bag: blood
found in bag: for
found in bag: paty
[0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
```

It is good practice to save the trained model into a pickle file so it can be reused and published through the Flask REST API:

```python
# Use pickle to load in the pre-trained model
import tensorflow as tf   # import added for completeness; needed for get_default_graph()

global graph
graph = tf.get_default_graph()
with open(f'katana-assistant-model.pkl', 'rb') as f:
    model = pickle.load(f)
```

Before publishing the model through the Flask REST API, it is always good to run an extra test.

Use the model.predict function to classify user input and, based on the calculated probabilities, return the intent (multiple intents can be returned):

```python
def classify_local(sentence):
    ERROR_THRESHOLD = 0.25

    # generate probabilities from the model
    input_data = pd.DataFrame([bow(sentence, words)], dtype=float, index=['input'])
    results = model.predict([input_data])[0]
    # filter out predictions below a threshold, and provide intent index
    results = [[i, r] for i, r in enumerate(results) if r > ERROR_THRESHOLD]
    # sort by strength of probability
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append((classes[r[0]], str(r[1])))
    # return tuple of intent and probability
    return return_list
```

Example of classifying a sentence:

```python
classify_local('Fetch blood result for patient')
```

The intent is calculated correctly:

```
found in bag: blood
found in bag: result
found in bag: for
found in bag: paty
[('blood_pressure_search', '1.0')]
```

To publish the same function through a REST endpoint, we can wrap it into a Flask API:

```python
# imports needed for the REST endpoint (added here for completeness)
from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route("/katana-ml/api/v1.0/assistant", methods=['POST'])
def classify():
    ERROR_THRESHOLD = 0.25

    sentence = request.json['sentence']
    # generate probabilities from the model
    input_data = pd.DataFrame([bow(sentence, words)], dtype=float, index=['input'])
    results = model.predict([input_data])[0]
    # filter out predictions below a threshold
    results = [[i, r] for i, r in enumerate(results) if r > ERROR_THRESHOLD]
    # sort by strength of probability
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append({"intent": classes[r[0]], "probability": str(r[1])})
    # return intent and probability as JSON
    response = jsonify(return_list)
    return response

# running REST interface, port=5000 for direct test, port=5001 for deployment from PM2
if __name__ == "__main__":
    app.run(debug=False, host='0.0.0.0', port=5001)
```

I have explained how to implement the classification part.
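To try the published endpoint, here is a minimal client sketch (not from the article; it assumes the Flask service above is running locally on port 5001 and uses the route defined in the code):

```python
# Minimal client sketch (assumes the Flask service above is running locally on port 5001).
import requests

resp = requests.post(
    "http://127.0.0.1:5001/katana-ml/api/v1.0/assistant",
    json={"sentence": "Fetch blood result for patient"},
)
print(resp.json())  # e.g. [{"intent": "blood_pressure_search", "probability": "1.0"}]
```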

In the GitHub repo referenced at the beginning of the post, you will find a complete example of how to maintain the context.

Context is maintained by logic written in JavaScript and running on the Node.js backend.

Context flow must be defined in the list of intents; as soon as an intent is classified and the backend logic detects the start of a context, we enter a loop and ask related questions.

How advanced the context handling is depends entirely on the backend implementation (this is beyond the Machine Learning scope at this stage).

The chatbot UI is implemented with Oracle JET.
