Bot should not match phrases that are the opposite of the trained phrases

Hi there,

We have a bot that answers FAQ questions using the knowledge graph. There is a case where the platform matches the wrong phrase when the user provides the opposite of a trained question.

For example, we have trained the question “I want to restrict the user access to the portal”. An answer is added to this question in the knowledge graph along with alternative questions. We have also added tags and path-level synonyms for better coverage, and so far the model has been doing well.

But if the user asks “I don’t want to restrict the user access to the portal”, the model still gives the response for the “how to restrict” question that we originally trained. When the user asks such opposite-meaning questions, we want the bot to give the fallback response instead.

Right now the model cannot differentiate between “I want to restrict” and “I don’t want to restrict”. Is there any way we can accomplish this level of NLU understanding?

Regards
Jyothish G

@jyothish.g,

As of now, the KG doesn’t have the capability to understand opposite questions, i.e., negation handling.

This will be taken up as a feature request.

We are checking with our NLP team whether there is any workaround available for now.

We will update you soon.

Regards,
Yoga Ramya

Thanks for the update. Please let me know of any updates in this regard.

Negation support has now been added to FAQs in the bots platform. If we train the question “I want to restrict the user access to the portal”, it works as follows:

For the user input “I want to restrict the user access to the portal”, the FAQ score will be 100%, whereas
for the user input “I don’t want to restrict the user access to the portal”, the FAQ score will be 78.5%.

In this case, since the utterance contains negation, the score is reduced substantially, and by adjusting the thresholds this problem can be resolved.
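To make the thresholding idea concrete, here is a minimal sketch in plain Python. It is not the platform's actual API; the function and constant names (route_faq, FAQ_ANSWER_THRESHOLD) are hypothetical and the threshold value is only an assumption chosen to sit between the two scores from the example above.

```python
# Illustrative sketch only: hypothetical names, not the bots platform API.
# Idea: a negated utterance gets a lower FAQ match score, so a threshold
# set between the two scores routes it to the fallback response instead
# of the trained answer.

FAQ_ANSWER_THRESHOLD = 80.0  # assumed value; tune between 78.5 and 100 for this example


def route_faq(match_score: float, matched_answer: str, fallback: str) -> str:
    """Return the matched FAQ answer only when the score clears the threshold."""
    if match_score >= FAQ_ANSWER_THRESHOLD:
        return matched_answer
    return fallback


# Using the scores from the example above:
answer = "Here is how to restrict user access to the portal..."
fallback = "Sorry, I couldn't find an answer for that."

print(route_faq(100.0, answer, fallback))  # trained question -> answer
print(route_faq(78.5, answer, fallback))   # negated question -> fallback
```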