Bot should not match opposite phrases to the trained phrases

Hi there,

We have a bot that answers FAQ questions using the Knowledge Graph. There is a case where the platform matches the wrong intent when the user provides the opposite of a trained question.

For example, we have trained the question “I want to restrict the user access to the portal”, with an answer added to it in the Knowledge Graph along with alternative questions. We have also added tags and path-level synonyms for better coverage, and so far the model has been doing well.

But if the user asks “I don’t want to restrict the user access to the portal”, the model still returns the response for the “how to restrict” question we originally trained. When the user asks such opposite-meaning questions, we want the bot to give the fallback response instead.

Right now the model cannot differentiate between “I want to restrict” and “I don’t want to restrict”. Is there any way we can achieve this level of NLU understanding?

Jyothish G


As of now, the Knowledge Graph does not have the capability to understand opposite questions, i.e., negation handling.

This will be taken up as a feature request.

We are checking with our NLP team whether there is any workaround available for now.
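In the meantime, one possible interim workaround (a sketch on our side, not a platform feature) is to run a simple negation pre-check on the user utterance before it reaches the Knowledge Graph, and route utterances containing negation cues to the fallback response. The function names and routing labels below are hypothetical:

```python
import re

# Hypothetical pre-filter: detect common English negation cues
# ("not", "never", "cannot", and n't contractions like "don't").
NEGATION_PATTERN = re.compile(r"\b(?:not|never|cannot)\b|n't\b", re.IGNORECASE)

def is_negated(utterance: str) -> bool:
    """Return True if the utterance contains a simple negation cue."""
    return bool(NEGATION_PATTERN.search(utterance))

def route(utterance: str) -> str:
    """Send negated utterances to fallback; otherwise query the KG."""
    return "fallback" if is_negated(utterance) else "knowledge_graph"

print(route("I want to restrict the user access to the portal"))        # knowledge_graph
print(route("I don't want to restrict the user access to the portal"))  # fallback
```

Note this is purely lexical: it would also divert utterances where the negation is incidental (e.g., “I don’t know how to restrict access”), so it is only a stopgap until proper negation handling is available.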

We will update you soon.

Yoga Ramya

Thanks for the update. Please let me know if there is any update in this regard.