The tremendous advances in AI and machine learning over the last few years have pushed the envelope in many fields, IT not least among them. Thanks to sophisticated natural language question-answering systems like IBM Watson, deploying self-service capabilities in the enterprise is not only technically feasible, it’s also now economically viable.
Combining IBM Watson’s AI with automation can profoundly transform a help desk, leading to a number of positive downstream results for your enterprise IT operations. Let’s explore that in more detail.
For starters, it’s worth reviewing a brief history of artificial intelligence and some of the major milestones that led to today’s AI capabilities.
There’s a general consensus that AI can trace its modern history back to one man – Alan Turing. For those of you who don’t know about Turing, he is widely considered the father of theoretical computer science. His life ended tragically, but what he is best known for today is something you may have heard of: the Turing Test. This test has become the standard for deeming a machine intelligent based on one criterion – can a human operator differentiate the machine’s responses from a human being’s?
Interestingly, it’s virtually certain that everyone reading this article has experienced a REVERSED form of the Turing test that’s in wide use on the internet – and that’s the CAPTCHA test used to determine whether the user is a human or a computer.
By the way, Turing’s contributions to computer science were so monumental that, to honor his achievements, the Bank of England is releasing a new £50 note in June 2021 bearing his image.
In the same year that Turing proposed his test, Isaac Asimov, arguably the greatest science fiction writer of all time, published his famous Three Laws of Robotics in a short-story collection many of you might be familiar with, “I, Robot” (the laws themselves first appeared in his 1942 story “Runaround”):
- Law #1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Law #2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Law #3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These Three Laws have had a profound impact on the way people think about the ethics of artificial intelligence.
Now, a few of you might get a chuckle out of this next AI milestone, which took the world by storm nearly a quarter century ago – Furby. The Furby, which looks like the product of some strange union between Yoda and a chinchilla, is actually considered the first successful attempt at producing and selling a domestically targeted, AI-enabled robot.
Finally, the last big AI milestone is well known to you Jeopardy fans. In 2011, IBM Watson appeared as a contestant on Jeopardy, competing against two of that game show’s greatest champions – Ken Jennings and Brad Rutter. Watson defeated both quite handily and took home the $1 million first-place prize. That victory persuaded IBM executives to finally begin commercializing Watson, making it available for things a bit more useful than identifying potent potables for the late Alex Trebek.
The specific IBM Watson service offering of most interest to enabling a self-service help desk is Natural Language Processing, or NLP. Here’s IBM’s definition of this service:
> “Natural language processing (NLP) is a subfield of artificial intelligence and computer science that focuses on the tokenization and parsing of human language into its elemental pieces. NLP combines computational linguistics with statistical, machine learning techniques, and deep learning models, enabling computers to process human language in the form of text or voice data and to understand its full meaning, complete with the speaker or writer’s intent and sentiment.”
That last part is particularly relevant for IT: for AI and automation to take on more of the robotic kinds of tasks that currently overburden service desks, the system has to be able to understand what the user is requesting.
If a user calls up a technician and says “I can’t log into the network”, the technician intuitively understands this means they should reset the user’s password, give them a temporary one, and ask them to try logging in again. The user didn’t explicitly have to tell the technician that.
With NLP, you can essentially achieve the same results. It can understand the user’s intent and/or sentiment, which in turn prompts it to reset the user’s password automatically, just like a technician would do.
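To make the password-reset scenario concrete, here’s a minimal sketch of keyword-based intent detection triggering an automated response. All names here (`detect_intent`, `handle`, the phrase list) are illustrative placeholders, not part of any real Watson or Ayehu API; a production system would use a trained NLP model rather than keyword matching.

```python
# Hypothetical sketch: map a raw utterance to an intent, then act on it.
RESET_PHRASES = ("can't log in", "cannot log in", "forgot my password", "locked out")

def detect_intent(utterance: str) -> str:
    """Return a coarse intent label for a user utterance."""
    text = utterance.lower()
    if any(phrase in text for phrase in RESET_PHRASES):
        return "reset_password"
    return "unknown"

def handle(utterance: str, user: str) -> str:
    """Fulfill the detected intent, or escalate to a human."""
    if detect_intent(utterance) == "reset_password":
        # In production, this step would call the automation platform.
        return f"Temporary password issued to {user}; please try logging in again."
    return "Routing your request to a technician."

print(handle("I can't log into the network", "jdoe"))
```

The point isn’t the string matching itself, but the pattern: the user never says “reset my password,” yet the system infers that intent and acts on it, just as a technician would.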
Let’s drill down a little further by reviewing the three concepts you need to understand when building a front-end interface to an AI tool like Watson.
First among these is the concept of “Intents”. Intents are the purposes or goals expressed by a customer’s input, such as answering a question or processing a bill payment. In an enterprise help desk example, an intent might be something like “I want to reset my password”.
“Entities” are a class of object or a data type that is relevant to a user’s purpose, and that helps IBM Watson choose the specific actions to take to fulfill an intent. In the example above of resetting a password, the password itself is the entity.
“Fulfillment” is the final concept, and just refers to executing a user’s request, based on an understanding of what their intent is towards a particular entity. As an example, if a chatbot’s understanding of a user’s request is that they want to reset their password, the end result of that would trigger the Ayehu automation platform to actually reset the password and send the user a new password.
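Here’s how the three concepts fit together in code form. This is a hypothetical sketch – the `Recognized` type, the `fulfill` function, and the action table are invented for illustration and don’t correspond to the actual Watson or Ayehu interfaces:

```python
from dataclasses import dataclass

@dataclass
class Recognized:
    """What the NLP layer extracts from a user's message."""
    intent: str   # the user's goal, e.g. "reset_password"
    entity: str   # the object the intent acts on, e.g. "password"

def fulfill(request: Recognized, user: str) -> str:
    """Dispatch a recognized intent/entity pair to an automation action."""
    actions = {
        ("reset_password", "password"): lambda u: f"New password sent to {u}.",
    }
    action = actions.get((request.intent, request.entity))
    if action is None:
        return "Escalating to a human technician."
    return action(user)

print(fulfill(Recognized("reset_password", "password"), "jdoe"))
```

The division of labor mirrors the description above: the AI layer populates the intent and entity, while fulfillment is the automation platform’s job – the chatbot understands, and the automation executes.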
There are a number of options for interfacing with AI.
Undoubtedly the most popular – and the one many of you are most familiar with – is chatbots. Chatbots have become ubiquitous. They’re on virtually every website you visit these days, and they’re all powered by some form of AI.
Another, less well-known way to interface with AI is good old-fashioned email. It’s just another form of conversation, only not as instantaneous as a chatbot.
And not surprisingly, a third way to interface with AI is via SMS. SMS behaves very much like a chatbot, except that it’s a purely text conversation. There’s currently no mechanism for incorporating buttons or other design elements you may be accustomed to when conversing with chatbots.
I mentioned at the outset that deploying self-service capabilities in the enterprise is not only technically feasible, it’s also now economically viable. Here’s an illustration of that, specifically as it pertains to Cost Per Ticket.
There’s a general industry figure out there, published by Jeff Rumburg of MetricNet, that the average cost of an L1 service desk ticket is $20.
However, if you turn any given service request into a self-help or self-service function using an automation platform like Ayehu, then you can drive down that cost by 80% to just $4 per L1 ticket.
That’s an extraordinary savings that not only contributes to the bottom line, but can also be accomplished without reducing service effectiveness!
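The arithmetic behind that claim is simple enough to verify. The per-ticket figures come from the paragraphs above; the monthly ticket volume is a made-up number purely for illustration:

```python
# Figures from the article: $20 average L1 ticket (MetricNet), 80% cost
# reduction when a request is handled via self-service automation.
COST_PER_TICKET = 20.00       # average L1 service desk ticket, USD
AUTOMATION_SAVINGS = 0.80     # fraction of cost eliminated by self-service

automated_cost = COST_PER_TICKET * (1 - AUTOMATION_SAVINGS)

monthly_tickets = 5_000       # hypothetical volume for illustration
monthly_savings = monthly_tickets * (COST_PER_TICKET - automated_cost)

print(f"Automated cost per ticket: ${automated_cost:.2f}")          # $4.00
print(f"Monthly savings at {monthly_tickets:,} tickets: ${monthly_savings:,.0f}")
```

At even a modest 5,000 L1 tickets per month, an 80% reduction compounds into six-figure annual savings.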
Using AI to reroute call volume away from the service desk – shifting that load to end users by empowering them with self-service – can have numerous benefits beyond reducing costs and lowering call volume.
When you deploy AI as digital labor, you can also:
- Slash MTTR by accelerating end users’ ability to resolve their own incidents and requests
- Liberate IT staff from doing tedious work and free them up for more important tasks
- Raise customer satisfaction ratings, an increasingly critical KPI for IT Operations
If you’re interested in test driving Ayehu NG and seeing how easy it is to combine it with IBM Watson’s AI to power your self-service help desk, please feel free to explore our website and download your very own free 30-day trial version today by clicking here.