Chatbots are just the first step in the journey to achieve true AI assistants and autonomous organizations.
TL;DR: Chatbots are the first step toward autonomous organizations: companies whose operations are largely run by many different AI assistants. Analogous to autonomous cars, there are five levels of sophistication for AI assistants. Currently, basic level two AI assistants are mainstream, and Google just showed the world what a level three assistant looks like. Achieving true level five capabilities for AI assistants will result in a significant shift for society, with many implications for businesses and their customers.
The recent backlash against chatbots is both entirely justified and completely beside the point. Yes, most chatbots we’ve seen since F8 2016 are bad. Most have failed to add value to the end user compared to existing websites or apps.
However, chatbots are not the end game. From our work with Fortune 500 companies, we know that state-of-the-art chatbots can work and do help companies generate additional revenue or save costs. Our mission is to work toward true AI assistants that let customers express what they want, in their own terms, without a human on the other end.
AI assistants can be applied both to direct customer service and within the operations of an organization. AI that understands customers and context, and that can act proactively, will lead to the automation of many repetitive tasks.
Five levels of AI assistants: From notification assistants to autonomous organizations
Figure 1. There are five levels of AI assistants—from dumb to super smart. Currently, level two is mainstream. Image by Alan Nichol.
Much has been written about the five levels of autonomous driving. Based on the chatbots and assistants that have been built in the last years, we see five levels of AI assistant intelligence emerging:
Level 1: Notification Assistants—This is what we know; simple notifications on your phone, but they show up in a messaging app like WhatsApp instead.
Level 2: FAQ Assistants—By far the most common type of assistant at the moment. These allow the user to ask a simple question and get a response, a slight improvement over familiar FAQ pages with a search bar; sometimes they are enhanced with one or two follow-up questions.
Level 3: Contextual Assistants—As most bot developers could tell you, giving end users a box to type in rarely ends up being just a simple question and answer. That’s why context matters: what the user has said before, when / where / how she said it, and so on. Considering context also means being capable of understanding and responding to different and unexpected inputs.
Level 4: Personalized Assistants—As you might expect from a human that gets to know you over time, AI assistants will start to do the same. For example, an AI assistant will learn when it’s a good time to get in touch and proactively reach out based on this context. It will remember your preferences and give you the ultimate, personalized interface.
Level 5: Autonomous Organization of Assistants—Eventually, a group of AI assistants will know every customer personally and run large parts of a company’s operations—from lead generation and marketing to sales, HR, and finance. This is a major leap that will take many years, but it is a vision we believe will become reality.
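To make the gap between these levels concrete, here is a minimal sketch of a level-two FAQ assistant: a stateless intent-to-answer lookup where every message is handled in isolation. The intents, answers, and function names are hypothetical illustrations, not the API of any real framework.

```python
# A level-two FAQ assistant: each message is classified and answered
# independently, with no memory of previous turns.

FAQ_ANSWERS = {
    "opening_hours": "We're open Monday to Friday, 9am to 5pm.",
    "return_policy": "You can return any item within 30 days.",
}

def classify_intent(message: str) -> str:
    """Toy keyword matcher standing in for a trained NLU model."""
    text = message.lower()
    if "open" in text or "hours" in text:
        return "opening_hours"
    if "return" in text or "refund" in text:
        return "return_policy"
    return "fallback"

def faq_reply(message: str) -> str:
    intent = classify_intent(message)
    return FAQ_ANSWERS.get(intent, "Sorry, I didn't understand that.")

print(faq_reply("When are you open?"))
print(faq_reply("What about refunds?"))
```

Because nothing carries over between turns, a follow-up like "and on weekends?" falls straight into the fallback, which is exactly the limitation the next levels address.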
FAQ assistants are mainstream and mostly feel dumb
We see most assistants at level two at the moment, with much of our open source community actively working to push their assistants toward level three and beyond. The key is context. Without context, many use cases simply don’t work.
For example, we recently learned that merely 2% of consumers who own one of the 50 million Alexa-enabled devices have used the voice assistant to make a purchase. Buying something via a conversation involves a lot of context and thus gets complex very quickly. Clearly, this is not yet satisfactory compared to the rich context of browsers and apps.
Level three is on the horizon. The closest we’ve seen so far is Google Duplex, which showed what a level three assistant could look like for a specific use case in a specific context, and the world was impressed. However, Google put a lot of brainpower into making it work for just one use case, and we know that adding additional ones takes a lot more work. That’s why these near-level three assistants are currently niche products: the current approach doesn’t scale.
Taking contextual assistants mainstream: AI that can generalize
Let’s look at how humans understand context and expand to new “use cases.” For example, if we train a salesperson to sell car insurance, we would consider that person to be “intelligent” if they can use those newly acquired skills to sell another type of insurance, such as home insurance. Humans are able to generalize and apply understanding across different references and contexts.
AI assistants should be able to achieve the same result even when not specifically trained for it; otherwise, they’re not scalable. So what does this imply for reaching level three? Generalization here means that an assistant can handle conversations about new (similar) use cases without every rule for every possible edge case being pre-programmed in advance.
Figure 2. Both examples show “uncooperative” behaviour from the user as they deviate from the happy path with the same question—”How is the weather there?”—but in a different context. Level three AI assistants should be able to reuse the knowledge they learned in Example 1 and apply it to Example 2. Image by Alan Nichol.
This analogy shows how smarter assistants are developed faster with fewer resources and training data. At Rasa, we’re working on solving this exact problem, aiming to make this technology available to all makers so that level three assistants go mainstream.
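What "context" means in practice can be sketched in a few lines: a level-three assistant keeps a conversation state (often called slots) across turns, so a follow-up like "How is the weather there?" can resolve "there" from something said earlier. This is a deliberately simplified illustration with made-up city data, not Rasa's actual API.

```python
# A toy contextual assistant: slots persist across turns, so
# references like "there" can be resolved from earlier messages.

WEATHER = {"Berlin": "sunny", "London": "rainy"}  # made-up data

class ContextualAssistant:
    def __init__(self):
        self.slots = {}  # conversation state kept between turns

    def handle(self, message: str) -> str:
        text = message.lower()
        # Extremely simplified "entity extraction": remember any known city.
        for city in WEATHER:
            if city.lower() in text:
                self.slots["location"] = city
        if "weather" in text:
            location = self.slots.get("location")
            if location is None:
                return "Which city do you mean?"
            return f"The weather in {location} is {WEATHER[location]}."
        return "Okay."

bot = ContextualAssistant()
bot.handle("I'm flying to Berlin tomorrow.")   # fills the location slot
print(bot.handle("How is the weather there?"))  # "there" resolved from context
```

The same slot-filling logic handles both examples in Figure 2 without a separate hand-written rule for each, which is the kind of reuse that makes contextual assistants cheaper to extend than per-use-case scripting.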
The path to autonomous organizations
A question we get a lot from executives is: “will there be one killer bot for everything?” For us, the answer is no because you also don’t have one employee who does everything in your company. There are teams, there are departments, there are subsidiaries, and I don’t think AI will allow us to completely get rid of that structure. Structure is actually a key to efficiency.
What we do believe is that teams, departments, and full organizations can be replaced by groups of AI assistants, leaving a handful of humans to do the interesting work. Shai Wininger, the founder and CEO of Lemonade, calls this the “autonomous organization.”
This will lead to true level five capabilities for AI assistants and to a significant shift for society, with many implications for businesses and their customers. Leadership who can see this future and understand its importance—and ensure that intelligence isn’t outsourced in return for short-term profit—will be crucial for success.