
Description
Bring Your Own LLM enables brands to integrate their own LLMs into Conversational Cloud for greater control, compliance, and cost efficiency. This is especially valuable for brands that fine-tune models on brand-specific data: those fine-tuned models can then power the Generative AI features in our platform. It supports a broad range of models, including those from OpenAI, Microsoft Azure, Google Vertex, Meta, Cohere, and Anthropic, ensuring extensive compatibility.
How does this capability fit into orchestration?
The future state of Bring Your Own LLM will be to strategically orchestrate the use of different LLMs according to task requirements. This means brands won’t need to deploy the most advanced, and often more expensive, models for every task. For example, while complex tasks may require the capabilities of GPT-4, others can be efficiently handled by GPT-3.5. This selective deployment ensures cost-effectiveness without compromising on performance.
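The selective-deployment idea above can be sketched as a simple routing table. This is an illustrative sketch only, not a platform API: the model names, task types, and the `route_task` helper are assumptions for the example.

```python
# Illustrative sketch of task-based LLM routing; model names, task types,
# and tiers are assumptions, not part of Conversational Cloud.

MODEL_TIERS = {
    "advanced": "gpt-4",          # higher capability, higher cost
    "standard": "gpt-3.5-turbo",  # cheaper, sufficient for routine tasks
}

# Hypothetical mapping from task type to the capability tier it needs.
TASK_REQUIREMENTS = {
    "multi_step_reasoning": "advanced",
    "summarization": "standard",
    "intent_classification": "standard",
}

def route_task(task_type: str) -> str:
    """Pick the cheapest model tier that satisfies the task's needs."""
    # Unknown tasks fall back to the most capable tier as the safe default.
    tier = TASK_REQUIREMENTS.get(task_type, "advanced")
    return MODEL_TIERS[tier]
```

In this sketch, routine tasks such as summarization resolve to the cheaper model, while complex or unrecognized tasks default to the most capable one, which is the cost-versus-performance trade-off the orchestration aims for.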
Benefits
- Enhances data security and compliance: Allows brands to maintain strict control over their data security and compliance, ensuring that all data handling meets their high standards and adheres to regulatory requirements.
- Avoids vendor lock-in: As many enterprises invest heavily in LLMs across their companies, our aim is to remain AI agnostic. We ensure flexibility by supporting whichever model our clients prefer to drive the Generative AI capabilities on our platform, preventing dependency on a single vendor.
- Allows for custom SLAs: Brands can establish their own SLAs for reliability and consistent performance, so their Generative AI-powered experiences remain predictable and match their operational requirements.
Related Links
Status
Planned