How does Kore.AI's architecture support GPT-4 and plug-ins?

Kore.AI has “universal bots” that allow you to attach linked bots, but I’m not clear on the limitations of this yet.

We want to be able to make 1 chatbot that does everything for every app at the company, and we want the front-facing part of it to be conversational like ChatGPT. Ideally, if we switched over to Kore.AI, we’d be able to have a conversational LLM at the front of the model, doing general conversations, and then if there was an utterance that the model could answer with a dialogflow from a linked bot, it would go down that path instead.

Does Kore.AI allow us to mimic the behavior of GPT-4 and plugins / Langchain? Or do all dialog flows have to be created and trained through the XO Platform?

Also, is there a practical limit on the number of linked bots and utterances we can do with Kore.AI?

If there are any docs that describe this, please share.

Hello Jacob,

For universal bots, you can attach any number of linked bots, but the bot scoping has to be defined properly. If we mark every bot as a fallback bot, the request will go to each bot for intent detection, which makes request processing very slow. Hence, the fewer bots that are scoped for a request, the faster the processing.
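To illustrate the scoping idea, here is a minimal Python sketch of how a universal bot might route an utterance: scoped linked bots are checked first, fallback bots are broadcast to only when no scoped bot matches, and anything unmatched could be handed to a conversational LLM. All names (`LinkedBot`, `route`, the example bots) are hypothetical for illustration and are not the Kore.AI API.

```python
from dataclasses import dataclass, field

@dataclass
class LinkedBot:
    name: str
    intents: set = field(default_factory=set)  # keywords this bot handles
    is_fallback: bool = False

    def detect_intent(self, utterance: str) -> bool:
        # Stand-in for real NLU intent detection.
        return any(word in utterance.lower() for word in self.intents)

def route(utterance: str, bots: list) -> str:
    # 1. Try bots explicitly scoped for this request (non-fallback) first.
    for bot in bots:
        if not bot.is_fallback and bot.detect_intent(utterance):
            return bot.name
    # 2. Only then broadcast to fallback bots -- the slow path if every
    #    bot were marked as fallback.
    for bot in bots:
        if bot.is_fallback and bot.detect_intent(utterance):
            return bot.name
    # 3. Nothing matched: hand off to a conversational LLM front end.
    return "llm_fallback"

bots = [
    LinkedBot("hr_bot", {"payroll", "leave"}),
    LinkedBot("it_bot", {"password", "vpn"}),
    LinkedBot("smalltalk_bot", {"hello", "thanks"}, is_fallback=True),
]

print(route("I forgot my VPN password", bots))  # it_bot
print(route("hello there", bots))               # smalltalk_bot
print(route("what's the weather like?", bots))  # llm_fallback
```

This is why narrow scoping pays off: step 1 short-circuits on the first scoped match, while step 2 must consult every fallback bot.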

Please try the Gen AI Dialog and Gen AI Prompt features.

Regards
Venkat