We have a bot that answers FAQ questions using a knowledge graph. There is a case where the platform identifies the intent incorrectly when the user's phrasing is the opposite of a trained question.
For example, we have trained the question “I want to restrict the user access to the portal”. An answer is attached to this question in the knowledge graph, along with alternative questions. We have also added tags and path-level synonyms for better coverage, and so far the model has been performing well.
But if the user asks “I don’t want to restrict the user access to the portal”, the model still returns the answer for the original “how to restrict” question. When a user asks such an opposite-meaning question, we want the bot to give the fallback response instead.
Right now the model cannot differentiate between “I want to restrict” and “I don’t want to restrict”. Is there any way we can accomplish this level of NLU understanding?
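One workaround we are considering (a minimal sketch, not a platform feature — the function name and cue list are our own assumptions) is a pre-processing step that checks the user utterance for negation cues before it is sent to the knowledge graph, and routes negated utterances to the fallback response:

```python
def contains_negation(utterance: str) -> bool:
    """Return True if the utterance contains a common English negation cue.

    This is a simple keyword heuristic, not full NLU; a dependency parser
    (e.g. spaCy's `neg` dependency label) would be more robust.
    """
    negation_cues = {
        "not", "no", "never", "don't", "dont", "doesn't", "doesnt",
        "won't", "wont", "can't", "cant", "shouldn't", "shouldnt",
    }
    # Normalize curly apostrophes and punctuation before matching tokens.
    tokens = utterance.lower().replace("\u2019", "'").split()
    return any(tok.strip(".,!?") in negation_cues for tok in tokens)


trained = "I want to restrict the user access to the portal"
query = "I don't want to restrict the user access to the portal"

# Route to fallback when the query is negated but the trained question is not.
if contains_negation(query) and not contains_negation(trained):
    response = "fallback"
else:
    response = "knowledge-graph answer"
```

This would catch the “I don’t want to restrict” case, but it is only a stopgap; we would still prefer a platform-level way to handle negation.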