Why does Kore.ai Bots platform combine Fundamental Meaning (FM) and Machine Learning (ML) for NLP?
Most products rely solely on Machine Learning (ML) for natural language processing. The weakness of using only machine learning to train bots is that it requires a great deal of data.
With ML you must provide a collection of sentences that match a chatbot’s intended goal (and, ideally, a collection of sentences that do not). In this approach the bot does not inherently understand an input sentence. Instead, it measures how similar the input is to what it already knows.
Example: You set an intended goal for a task, let’s say “create a lead.” You then give the bot a training sentence of “create a lead.” It’s a one-to-one match of goal-to-input, which is a fantastic result in theory.
But if that’s all the training input you give the bot, that’s all it will know. Inputs like “make a lead” would fail, and the likelihood of a user’s input exactly matching the bot’s trained goal is low.
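The failure mode above can be illustrated with a toy similarity-based matcher. This is a simplified stand-in for any real ML model (bag-of-words cosine similarity, an invented intent name, and an arbitrary threshold), chosen only to show why an unseen synonym scores below the match cutoff:

```python
# Toy illustration of similarity-based intent matching with a single
# training sentence. Not any vendor's actual model -- just the failure
# mode: an unseen synonym ("make") lowers the similarity score.
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between bag-of-words vectors of two sentences."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# The only training data: one sentence mapped to a hypothetical intent name.
TRAINING = {"create a lead": "CreateLead"}

def match_intent(utterance, threshold=0.7):
    """Return the intent of the most similar training sentence, or None."""
    best = max(TRAINING, key=lambda s: cosine_similarity(utterance, s))
    if cosine_similarity(utterance, best) >= threshold:
        return TRAINING[best]
    return None

print(match_intent("create a lead"))  # exact match -> "CreateLead"
print(match_intent("make a lead"))    # synonym verb -> None (score ~0.67)
```

“make a lead” shares only “a” and “lead” with the training sentence, so its score falls under the threshold even though a human would consider the two requests identical.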
Only after you present the bot with a range of additional examples, including some marked as incorrect, will traditional ML adjust its detection to accommodate more requests. For every synonym of your task verb, you’d have to give it a sample sentence using that synonym. As you can imagine, the process is tedious, and the resulting user experience is likely to be confusing and cumbersome.
An ML-only approach can also be inaccurate, because it requires extensive training before a bot achieves high success rates. Our prescription combines fundamental meaning (FM) with ML to make it easier to build NL-capable chatbots out of the gate – whether or not rich training data is available. We also use ML to further train the chatbot over time.
Together, enterprise developers can solve for real-world dynamics and gain the inherent benefits of both approaches, while eliminating the shortcomings each has on its own.
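To make the two prongs concrete, here is a simplified sketch (not the Kore.ai engine; the synonym set, intent name, and classifier stub are all illustrative) of an FM-style synonym check running ahead of an ML fallback:

```python
# Simplified two-pronged matching sketch. The FM step recognizes any
# synonym of the task verb without per-synonym training sentences; a
# trained ML classifier (stubbed out here) would handle the remainder.
SYNONYMS = {"create": {"create", "make", "add", "generate", "open"}}
TASK_NOUNS = {"lead"}

def fm_match(utterance):
    """FM-style match: any synonym of the task verb plus the task noun."""
    words = set(utterance.lower().split())
    if words & SYNONYMS["create"] and words & TASK_NOUNS:
        return "CreateLead"
    return None

def ml_match(utterance):
    """Stand-in for a trained classifier; always abstains in this sketch."""
    return None

def hybrid_match(utterance):
    """Try fundamental meaning first, then fall back to ML."""
    return fm_match(utterance) or ml_match(utterance)

print(hybrid_match("make a lead"))  # FM catches the synonym -> "CreateLead"
```

Because the synonym dictionary covers the verb variants up front, the hybrid matcher handles “make a lead” with zero additional training sentences, which is the practical benefit the combined approach is claiming.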
Advantages of a Dual-Pronged Approach to NLP
For the enterprise:
- Faster bot training – The fundamental meaning approach means less data is required to optimize NL.
- Easy oversight of training data – Because ML uses massive amounts of data, you can’t easily scan a long list of training sentences and see what you’ve missed. By contrast, with FM it’s easy to look over a list of synonyms or a small set of idiomatic sentences and spot what has been overlooked.
- Automatic handling of conjugated words – Our NLP engine has a dictionary and understands the relationship of word conjugations so it doesn’t have to process all verb tenses as different words (unique data).
- Fewer false positives – Administrators have less need to trawl through success logs looking for rare wrong interpretations.
- Higher user satisfaction and adoption – A bot that understands more and processes intent and requests correctly is going to be more useful to employees and customers. It will also be more satisfying to use and reduce frustration from failed inputs.
- Ability to extend the Platform – The Platform provides logic exits where clients can extend the Platform to leverage their own ML models or custom task implementations wherever necessary for abstract tasks.
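The conjugated-word handling mentioned above can be sketched with a small lemma dictionary. The data here is illustrative only (a real platform dictionary would be far larger), but it shows how mapping every conjugation to one lemma keeps verb tenses from being treated as unique data:

```python
# Illustrative lemma dictionary: each conjugated form maps to its base
# verb, so "created", "creating", and "creates" all normalize to "create".
LEMMAS = {
    "creates": "create", "created": "create", "creating": "create",
    "makes": "make", "made": "make", "making": "make",
}

def lemmatize(word):
    """Return the base form of a word, or the word itself if unknown."""
    return LEMMAS.get(word.lower(), word.lower())

def normalize(utterance):
    """Lemmatize every word before intent matching."""
    return [lemmatize(w) for w in utterance.split()]

print(normalize("Created a lead"))  # ['create', 'a', 'lead']
```

After normalization, “Created a lead”, “creating a lead”, and “create a lead” all present the same tokens to the matcher, so no extra training sentences are needed per tense.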
For the user:
- Fewer false positives – Just as this is beneficial to the enterprise from a deployment perspective, it’s also beneficial to the user from a UX perspective. The user gets a more consistently successful experience because the bot doesn’t do things incorrectly; instead, it prompts the user for more information.
- Simpler UI – With our two-pronged approach, you don’t have to change your speech pattern to use a bot correctly. Rather than tailoring your input to the bot, the bot caters to your input, which not only makes it simpler to use but keeps it consistent across channels and devices. It’s a marked alternative to GUI systems.
- Context awareness – The bot understands, remembers, and leverages contextual information throughout the conversational dialog in order to provide a personalized bot experience.
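The context handling described above can be sketched as a small session store. The class and keys here are hypothetical, not a platform API; the point is simply that entities captured in one turn remain available to later turns:

```python
# Minimal sketch of per-conversation context: entities captured in one
# turn (e.g. an account name) are remembered and reusable in later turns.
class SessionContext:
    def __init__(self):
        self.entities = {}

    def remember(self, key, value):
        """Store an entity extracted from the current turn."""
        self.entities[key] = value

    def recall(self, key, default=None):
        """Retrieve an entity captured in an earlier turn."""
        return self.entities.get(key, default)

ctx = SessionContext()
ctx.remember("account", "Acme Corp")  # turn 1: "Create a lead for Acme Corp"
print(ctx.recall("account"))          # turn 2: "Schedule a call with them"
```

A follow-up like “schedule a call with them” can then resolve “them” against the remembered account instead of re-prompting the user, which is the personalization benefit the bullet describes.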