Since users can ask the chatbot anything, how do we define ML utterances so that only utterances resembling those in the training data set trigger the dialog task, and everything else does not? Currently, a single matching word causes the intent to be recognized incorrectly, and adding negative patterns does not seem like a workable solution, since there is no limit to how many we would need to add. For example:
ML training data set: "status of issue 7777888"
User utterance: "I am having issue connecting to system". This utterance triggers the dialog task trained on the data set above.
As you can see, the two sentences are quite different; the only word they share is "issue". How can this type of false positive be avoided? Can it be handled using the ML threshold parameters? Should stop words be included in ML utterances? Looking for guidance on how to define ML utterances so as to avoid these false positives.
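To make the question concrete, here is a minimal sketch of the kind of behavior I am after: score the user utterance against the training utterances, and only trigger the dialog task when the score clears a confidence threshold, otherwise fall back to "no intent". This is not the platform's actual engine; the intent name, example utterances, stop-word list, and threshold value below are all illustrative assumptions, using simple bag-of-words cosine similarity as a stand-in.

```python
import math
from collections import Counter

# Hypothetical intent with its training utterances (names are illustrative).
TRAINING = {
    "check_issue_status": [
        "status of issue 7777888",
        "what is the status of my issue",
    ],
}

# Illustrative stop-word list; whether to include these in training
# utterances is part of the question being asked.
STOP_WORDS = {"i", "am", "is", "of", "the", "to", "my", "a", "an", "what"}

def vectorize(text):
    """Bag-of-words vector with stop words removed."""
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    return Counter(tokens)

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def classify(utterance, threshold=0.6):
    """Return the best-matching intent, or None when confidence is too low."""
    vec = vectorize(utterance)
    best_intent, best_score = None, 0.0
    for intent, examples in TRAINING.items():
        for example in examples:
            score = cosine(vec, vectorize(example))
            if score > best_score:
                best_intent, best_score = intent, score
    # Below the threshold, treat the utterance as out of scope
    # instead of forcing the closest intent.
    return best_intent if best_score >= threshold else None
```

With this sketch, "status of issue 7777888" matches the intent, while "I am having issue connecting to system" scores low (it shares only the word "issue" with the training data) and returns None rather than triggering the dialog task. The open question is whether the platform's ML threshold parameters can be tuned to achieve this same cut-off.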