Company
Telus
Title
Enterprise-Scale LLM Platform with Multi-Model Support and Copilot Customization
Industry
Telecommunications
Year
2024
Summary (short)
Telus developed Fuel X, an enterprise-scale LLM platform that provides centralized management of multiple AI models and services. The platform enables creation of customized copilots for different use cases, with over 30,000 custom copilots built and 35,000 active users. Key features include flexible model switching, enterprise security, RAG capabilities, and integration with workplace tools like Slack and Google Chat. Results show significant impact, including 46% self-resolution rate for internal support queries and 21% reduction in agent interactions.
Telus has developed an impressive enterprise-scale LLM platform called Fuel X that showcases many important aspects of running LLMs in production. This case study provides valuable insight into how a large telecommunications company approached the challenges of deploying generative AI across its organization.

## Platform Overview and Architecture

Fuel X operates as a centralized management layer sitting above foundation models and AI services. The platform consists of two main components:

* Fuel X Core: centralized management, integrations, orchestration across models, moderation, and validation
* Fuel X Apps: user-facing applications, including a web interface and Slack and Google Chat integrations

The architecture emphasizes flexibility and security while maintaining control. The platform supports multiple cloud providers and model types, including OpenAI on Azure and Claude on AWS Bedrock, and a proxy layer provides load balancing and fallback mechanisms across models.

Key technical features include:

* Vector database (Turbopuffer) for RAG capabilities with Canadian data residency
* Function calling using a planner-executor architecture for tool selection
* Streaming responses for better user experience
* Asynchronous processing where possible to optimize performance
* SSO integration and enterprise security controls
* Configurable guardrails for different use cases

## Copilot Implementation

A major innovation is the copilot framework, which lets users create customized AI assistants. Each copilot can have:

* A custom system prompt
* Associated knowledge bases
* A specific model selection
* Configurable guardrails
* Access controls

The platform has enabled over 30,000 custom copilots serving 35,000+ active users, demonstrating significant adoption across use cases and user types, from developers to lawyers to network engineers.
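The proxy layer's load-balancing and fallback behavior can be sketched roughly as follows. This is a minimal illustration of the pattern, not Telus's actual implementation; all class and method names here (`ModelBackend`, `ModelProxy`, `complete`) are hypothetical:

```python
# Sketch of a proxy layer that load-balances across model backends
# and falls back to another backend when one fails.
import random


class ModelBackend:
    """A single model endpoint (e.g. OpenAI on Azure, Claude on Bedrock)."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def complete(self, prompt):
        if not self.healthy:
            raise RuntimeError(f"{self.name} unavailable")
        return f"[{self.name}] response to: {prompt}"


class ModelProxy:
    """Routes each request to a randomly ordered list of backends,
    trying the next one whenever a call fails."""

    def __init__(self, backends):
        self.backends = backends

    def complete(self, prompt):
        # Random ordering gives simple load balancing; iteration
        # order gives fallback on failure.
        candidates = random.sample(self.backends, k=len(self.backends))
        last_error = None
        for backend in candidates:
            try:
                return backend.complete(prompt)
            except RuntimeError as err:
                last_error = err  # try the next backend
        raise RuntimeError("all backends failed") from last_error


proxy = ModelProxy([
    ModelBackend("gpt-4-azure", healthy=False),  # simulate an outage
    ModelBackend("claude-bedrock"),
])
# The request succeeds via the healthy Bedrock backend.
print(proxy.complete("Summarize ticket #123"))
```

A production version would add retries, health checks, and per-backend rate limits, but the routing logic stays the same shape.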
## Production Use Cases and Results

Several production copilots showcase the platform's capabilities:

* Spock (internal support): handles technical difficulties and device support, achieving a 46% self-resolution rate and a 21% reduction in agent interactions
* One Source: customer service agent copilot for faster information retrieval
* Milo: store representative assistant for retail locations
* T US J: generic copilot with internet search and image generation capabilities

## Responsible AI and Security

Telus has placed significant emphasis on responsible AI implementation:

* A dedicated responsible AI team
* 500+ trained data stewards
* Thousands of employees trained in prompt engineering
* A responsible AI framework and data enablement processes
* A human-in-the-loop approach with purple-team testing
* ISO certification for privacy by design
* The Responsible AI Institute's Outstanding Organization 2023 award

## Technical Challenges and Solutions

The platform addresses several key challenges:

* Model selection: a flexible architecture allows switching between models based on use case requirements
* Performance optimization: asynchronous processing where possible, plus streaming responses
* Security: enterprise-grade controls with configurable guardrails
* Data residency: Canadian data hosting requirements met through strategic infrastructure choices
* Integration: meets users in their existing workflows (Slack, Google Chat)

## Monitoring and Evaluation

The platform includes comprehensive monitoring capabilities:

* Response time tracking
* Cost analysis
* Answer quality evaluation using LLM-based comparison against ground truth
* Usage analytics
* Custom monitoring solutions for different organizational needs

## Developer Experience

For developers, the platform provides:

* An experimentation environment (Fuel Lab)
* Model comparison capabilities
* API access for custom applications
* A function calling framework
* Document intelligence features, including OCR
* Image generation integration

This case study demonstrates a mature approach to enterprise LLM deployment, balancing flexibility, security, and usability while maintaining responsible AI practices. The platform's success is evidenced by its wide adoption and measurable impact on business operations.
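The LLM-based answer-quality evaluation mentioned under Monitoring and Evaluation — comparing a copilot's answer against a ground-truth reference with an LLM judge — can be sketched as below. The judge prompt, the `call_llm` stand-in, and all function names are illustrative assumptions, not details from Telus's platform:

```python
# Sketch of LLM-as-judge answer-quality evaluation: a judge model
# grades a candidate answer against a ground-truth reference.

JUDGE_PROMPT = """You are grading a support copilot's answer.
Reference answer: {reference}
Candidate answer: {candidate}
Reply with a single integer from 1 to 5 for factual agreement."""


def call_llm(prompt):
    # Placeholder for a real chat-completion call routed through the
    # platform's model proxy. Here it always returns a perfect score.
    return "5"


def score_answer(candidate, reference):
    """Ask the judge model to rate one candidate/reference pair."""
    raw = call_llm(JUDGE_PROMPT.format(reference=reference, candidate=candidate))
    score = int(raw.strip())
    if not 1 <= score <= 5:
        raise ValueError(f"judge returned out-of-range score: {score}")
    return score


def evaluate(pairs):
    """Average judge score over (candidate, reference) pairs."""
    scores = [score_answer(c, r) for c, r in pairs]
    return sum(scores) / len(scores)
```

In practice such an evaluator would be run over a curated set of ground-truth question/answer pairs, with the average score tracked alongside response time and cost metrics.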
