
Building a Cognitive Customer Service Bot

If there is something being talked about with more passion and frequency than “customer 360” or “internet of things,” it is certainly “bots.” Bots are taking on a significant role in our lives and have a vital part to play in customer service interactions. In the past, bots or virtual assistants were considered little more than chat-based IVR systems, but more and more these A.I. entities are becoming smarter and more helpful, empowering us to do greater things.

When it comes to customer service, the bot conversation is simple: one bot can do the work of 1,000 agents at a fraction of the price and in a fraction of the training time. As customer expectations for response times, 24/7 availability and simplicity increase, bots are fast becoming an integral part of support teams around the world.

This blog will illustrate some of the ways in which Microsoft Bots can be built to facilitate customer conversations like never before.

Sample code provided in this blog is written for the Microsoft Bot Framework V3. I code my bots using Node.js; please see my colleague Geoff Innis’ blog for related .NET content and samples.

Bots are like Onions

Just like an excellent Customer Service Representative, a Cognitive Service Bot is composed of layers of intelligence. The foundation rests on the Microsoft Bot Framework, an extensible platform that allows you to easily build bots and connect them to several (10+) channels so that your bot is as responsive to customer needs and touch points as a real agent.

The Bot Framework consists of a number of components including the Bot Builder SDK, Developer Portal and the Bot Directory. Features of the Bot Builder SDK include dialogs, prompts, built-in connection to Artificial Intelligence frameworks and the ability to run on nearly any bot platform. I choose to build my bots using the Bot Builder for Node.js.

There is an excellent beginner tutorial for getting started with a “Hello World” bot in the Bot Framework documentation.
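For reference, the skeleton of such a bot in the Bot Builder SDK for Node.js looks something like the sketch below: a minimal echo bot, with app credentials supplied through environment variables as in the official quickstart.

// Minimal "Hello World" bot using the Bot Builder SDK for Node.js (V3)
var restify = require('restify');
var builder = require('botbuilder');

// Set up a web server to receive messages from the Bot Connector
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function () {
    console.log('%s listening to %s', server.name, server.url);
});

// Credentials come from the Bot Framework Developer Portal
var connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});
server.post('/api/messages', connector.listen());

// Echo every message back with a greeting
var bot = new builder.UniversalBot(connector);
bot.dialog('/', function (session) {
    session.send('Hello World! You said: %s', session.message.text);
});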

While a basic bot can be a lot of fun, Customer Service has unique challenges and a bot tasked with having a conversation with real customers needs to be more versatile and intelligent than your average bot.

Language Understanding Intelligent Service (LUIS)

This is where LUIS comes to the rescue. Part of Microsoft Cognitive Services and natively built into the Microsoft Bot Framework, LUIS allows your bot to interpret natural, conversational language to understand intentions and entities that relate to your organization.

An entity is a specific piece of information within a customer’s message, such as the topic the customer wants to search for.

An intention is the action or goal the customer’s message expresses, such as searching for information or asking to speak with an agent.

Let’s walk through an example. Sometimes a customer may need help finding documentation or FAQs. When my bot understands that a customer’s intention is to get information, it runs a search against the Microsoft Dynamics Knowledgebase to retrieve relevant content, using the entity to ensure the search it performs is relevant.

First, I’ll need to create a LUIS model for my bot. I do so by signing in to Luis.ai and creating a new application for my bot. I will create a new Intent titled “Search” and a new Entity titled “SearchPhrase.” Now, I need to train LUIS by entering phrases that I think my customers might say and tagging them appropriately.

For each utterance, I tag it with the appropriate intent (Search) to train my LUIS model how I would like it to interpret the intention. I can then highlight and tag the entity within the utterance, which tells LUIS what portion of the message should be treated as the SearchPhrase. I will enter several different examples into the LUIS interface (this can also be done via bulk upload) in order to effectively train the model. The more content and training you give your model, the more effectively it will be able to interpret intentions and entities.
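To make the tagging step concrete, here is a small, hypothetical example of labeled utterances in the spirit of LUIS’s bulk upload format (the exact schema varies by LUIS version, so treat the field names as illustrative):

[
  {
    "text": "what is the claims process?",
    "intent": "Search",
    "entities": [ { "entity": "SearchPhrase", "startPos": 12, "endPos": 25 } ]
  },
  {
    "text": "where can I find the warranty documentation?",
    "intent": "Search",
    "entities": [ { "entity": "SearchPhrase", "startPos": 21, "endPos": 42 } ]
  }
]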

Once the LUIS model has been appropriately trained, it should start making accurate predictions about novel utterances.

You can also test your queries live against the LUIS API after publishing your application. This allows you to retrieve the JSON response so that you can code your bot appropriately.

For example, the query “what is the claims process?” yields the following JSON response:

{
  "query": "what is the claims process?",
  "intents": [
    {
      "intent": "Search",
      "score": 0.973099649
    }
  ],
  "entities": [
    {
      "entity": "claims process",
      "type": "SearchPhrase",
      "startIndex": 12,
      "endIndex": 25,
      "score": 0.938603938
    }
  ]
}

As you can see, this allows us to single out the search phrase so that we can work with the entity most effectively. I can then pass the entity “claims process” to the Dynamics Knowledgebase to return a result.

Search the Microsoft Dynamics Knowledgebase using LUIS

Now that we have our LUIS Model built out for our application, we need to make sure our bot is able to communicate with it.

//=========================================================
// Connect to LUIS Model for dialog
//=========================================================
// 'builder' is the botbuilder module; the model URL comes from the LUIS
// publish page (application id and subscription key redacted here)
var model = process.env.model || 'https://api.projectoxford.ai/luis/v1/application?id=XXX&subscription-key=XXX';
var recognizer = new builder.LuisRecognizer(model);
var intents = new builder.IntentDialog({ recognizers: [recognizer] });
bot.dialog('/', intents);
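Not every customer message will map cleanly to a trained intention. The IntentDialog’s onDefault handler gives the bot a graceful fallback for those cases; a minimal sketch:

// Fallback for utterances LUIS cannot confidently map to a known intent
intents.onDefault(function (session) {
    session.send("I'm sorry, I didn't quite catch that. Could you rephrase?");
});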

Now, for every intention we have registered in our LUIS Model, we need to tell the bot how we want it to behave. In this case, if our customer’s message is interpreted by LUIS as a “Search” intention we want to identify the entity and query the Dynamics Knowledgebase. We can find the entity using the EntityRecognizer Class.

intents.matches('Search', [
    function (session, args, next) {
        // Look for the SearchPhrase entity in the LUIS results
        var searchPhrase = builder.EntityRecognizer.findEntity(args.entities, 'SearchPhrase');
        if (!searchPhrase) {
            builder.Prompts.text(session, "What can I help you find?");
        } else {
            next({ response: searchPhrase.entity });
        }
    },
    function (session, results) {
        if (results.response) {
            // Do some data validation
            session.send('I think I may have a Knowledgebase article that might help you...');
            // Pass entity to callback function to fetch article information...
            getArticle(results.response, function (kbLink, kbTitle, kbDesc) {
                // Format and send article details - renders differently in each channel!
                var msg = new builder.Message(session)
                    .attachments([
                        new builder.HeroCard(session)
                            .title(kbTitle)
                            .subtitle(kbDesc)
                            .tap(builder.CardAction.openUrl(session, kbLink))
                    ]);
                session.send(msg);
                session.endDialog();
            });
        } else {
            session.send("Ok");
        }
    }
]);

In the code above, the SearchPhrase entity is stored in a variable, which is then passed into our getArticle() callback function to retrieve the matching Knowledgebase article. It is important to remember to use callback functions whenever asynchronous requests are needed!
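If you prefer promises to nested callbacks, a callback-style helper such as getArticle (shown next) can be wrapped without changing its internals. A minimal sketch; note that getArticle as written reports errors only to the console, so this wrapper never rejects:

// Promise wrapper around the callback-based getArticle helper
function getArticleAsync(phrase) {
    return new Promise(function (resolve) {
        getArticle(phrase, function (kbLink, kbTitle, kbDesc) {
            resolve({ link: kbLink, title: kbTitle, description: kbDesc });
        });
    });
}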

Here is an example of a simple function that queries the Microsoft Dynamics Knowledgebase via the WebAPI. In this example, I am returning one KB article (title, public article number and description) where the title includes the entity phrase.

//=========================================================
// Fetch Dynamics CRM Knowledge Article
//=========================================================
// Get KB Article by Title API syntax: knowledgearticles?$select=title
var CRMWebAPI = require('CRMWebAPI');

function getArticle(phrase, fn) {
    // myAccessToken and portalurl_kb are defined elsewhere (the OAuth access
    // token and the base URL of the knowledgebase portal, respectively)
    var apiconfig = { APIUrl: 'https://kamichel01.crm.dynamics.com/api/data/v8.0/', AccessToken: myAccessToken };
    var crmAPI = new CRMWebAPI(apiconfig);
    var queryOptions = {
        Top: 1,
        FormattedValues: true,
        Select: ['title', 'description', 'articlepublicnumber'],
        Filter: "contains(title,'" + phrase + "')",
        OrderBy: ['articlepublicnumber']
    };
    crmAPI.GetList("knowledgearticles", queryOptions).then(
        function (response) {
            var kbPortalLink = portalurl_kb + response["List"][0]['articlepublicnumber'] + "/en-us";
            var formattedTitle = "[" + response["List"][0]['title'] + "](" + kbPortalLink + ")";
            console.log(response["List"][0]['articlepublicnumber'] + formattedTitle);
            fn(kbPortalLink, response["List"][0]['title'], response["List"][0]['description']);
        },
        function (error) { console.log(error); });
}
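One caveat worth noting: because the entity text is inserted directly into the OData $filter, a phrase containing a single quote (e.g. “driver's license”) would break the query. A small defensive tweak of my own, not part of the original sample:

// Escape single quotes so user input cannot break the OData filter
function escapeODataString(value) {
    return value.replace(/'/g, "''");
}
// ...then build the filter with the escaped phrase:
// Filter: "contains(title,'" + escapeODataString(phrase) + "')"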

Recognizing Images using Computer Vision

So far we have a bot that can recognize intentions and entities from text, but not all customer correspondence takes the form of words. In many cases, it may be necessary for a customer service bot to interpret images, videos and sound and respond appropriately.

Consider an example where the bot asks for a photo of a car and the customer provides a picture of a dog instead. The bot is not only able to recognize that there is no car in the photo, it can also give a dynamic and vivid description of what it did see in the image. Your bot can accomplish this by leveraging Microsoft’s Computer Vision APIs.

//=========================================================
// Interpret Picture
//=========================================================
var unirest = require('unirest');

function getPicture(myPic, fn) {
    console.log(myPic);
    unirest.post('https://api.projectoxford.ai/vision/v1/analyses?visualFeatures=Description')
        .headers({ 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': 'XXXX' })
        .send({ "visualFeatures": "Description", "Url": myPic })
        .end(function (response) {
            var imageTags = JSON.stringify(response.body.description.tags);
            // Return whether there is a car, plus a caption describing the image
            fn(imageTags.includes("car"), response.body.description.captions[0]['text']);
        });
}
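To feed a customer’s uploaded photo into getPicture, the bot can inspect the incoming message for image attachments. A hedged sketch, assuming a hypothetical '/photo' dialog in the conversation flow:

// Pull the first image attachment off the incoming message (illustrative)
bot.dialog('/photo', function (session) {
    var attachments = session.message.attachments;
    if (attachments && attachments.length > 0 &&
        attachments[0].contentType.indexOf('image') === 0) {
        getPicture(attachments[0].contentUrl, function (hasCar, caption) {
            if (hasCar) {
                session.endDialog('Thanks! That looks like a car.');
            } else {
                session.endDialog('Hmm, that looks more like this to me: ' + caption +
                    '. Could you send a photo of the car?');
            }
        });
    } else {
        session.send('Please attach a photo of your car.');
    }
});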

Recognizing Customer Sentiment

A valid fear when implementing bots in the customer service space is that they will become just another IVR system, leading to the same types of frustrations customers report today. Imagine your customers typing “Representative!” furiously to your chat bot. Now, LUIS could certainly pick up on that intention and initiate a warm transfer to a live agent, but it should never come to that.

By reading customer sentiment throughout an interaction, your service bot becomes emotionally intelligent and can decide when a live agent may be required to help an unhappy customer. Additionally, if customers regularly register negative sentiment at certain points in your bot’s dialog, you can adjust the bot scripts and/or your company’s policy to improve CSAT and sentiment scores.

Practical applications of equipping your bot with sentiment awareness:

  • Log customers’ emotional reactions to your bot as easy-to-consume numerical data, a source of excellent candid feedback
  • Allow the bot to respond with different language or tone (i.e. more empathy) in response to customer sentiment
  • Escalate to a live agent when customer sentiment dips below a specified threshold (sketched after the code below)
  • Identify and address bottlenecks or issues with existing bot dialogs

//=========================================================
// Check Sentiment
//=========================================================
var unirest = require('unirest');

function getSentiment(myText, fn) {
    unirest.post('https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment')
        .headers({ 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': 'XXXX' })
        .send({ "documents": [{ "language": "en", "id": "bot", "text": myText }] })
        .end(function (response) {
            // Sentiment score ranges from 0 (negative) to 1 (positive)
            var myScore = response.body['documents'][0]['score'];
            fn(myScore);
        });
}
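Putting the escalation idea from the list above into practice might look something like the sketch below, where 0.3 is an arbitrary threshold and transferToAgent is a hypothetical warm-transfer routine:

// Escalate to a live agent when sentiment drops below a threshold (sketch)
var SENTIMENT_THRESHOLD = 0.3; // arbitrary; tune against your own data

function checkForEscalation(session, next) {
    getSentiment(session.message.text, function (score) {
        // Text Analytics scores range from 0 (negative) to 1 (positive)
        if (score < SENTIMENT_THRESHOLD) {
            session.send('I am sorry for the trouble. Let me connect you with a representative.');
            // transferToAgent(session); // hypothetical handoff routine
        } else {
            next(); // continue with the normal bot dialog
        }
    });
}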

Other Differentiating Capabilities

The Microsoft Bot Framework and supporting Cognitive Services provide the intelligence to empower a Customer Service Bot to hold meaningful conversations with your customers, whether as a Tier 0 agent or as an end-to-end service provider.

While there will be many more developments and blog posts here and elsewhere around bots, here are some additional capabilities for further learning that build on what was covered in this post:

  • Authentication
  • Video Analysis
  • Calling Bots (via Skype)
