Company: Numbers Station
Title: Building Production-Ready SQL and Charting Agents with RAG Integration
Industry: Tech
Year:
Summary (short):
Numbers Station addresses the problem of enterprise data teams being overwhelmed by routine requests by developing an AI-powered self-service analytics platform. Their solution combines LLM agents with RAG and a comprehensive knowledge layer to enable accurate SQL query generation, chart creation, and multi-agent workflows. The platform demonstrated significant improvements over vanilla LLM approaches in real-world benchmarks, reducing setup time from weeks to hours while maintaining high accuracy through contextual knowledge integration.
Numbers Station has developed a sophisticated approach to bringing LLMs into production for enterprise data analytics, focusing specifically on the challenges of self-service analytics in organizations with modern data stacks. This case study provides valuable insights into the practical implementation of LLM-powered systems in production environments, with particular attention to the challenges and solutions around accuracy, scalability, and user experience.

The company emerged from academic research at Stanford on representation learning for structured data and knowledge graphs. This background significantly influenced their approach to enterprise analytics, particularly in how they handle and represent structured data for LLM consumption.

## Core Problem and Context

The presentation highlights a critical challenge in enterprise data analytics: despite significant investments in modern data infrastructure, data teams are overwhelmed with routine requests from users. These requests range from locating dashboards to help with SQL queries, creating a bottleneck in which data teams spend more time on support tasks than on strategic initiatives. The emergence of LLMs has paradoxically intensified this challenge by raising user expectations while many organizations struggle to implement these technologies effectively in production.

## Technical Architecture and Implementation

The platform's architecture consists of several key components:

* Connectors to modern data stack tools: integration with data warehouses, dashboarding tools, data catalogs, and even communication platforms like Slack
* Unified knowledge layer: a crucial component that aggregates and structures organizational knowledge, including table schemas, metric definitions, and business context
* RAG implementation: enables contextual retrieval of relevant information to improve LLM accuracy
* AI agent framework: a customizable framework supporting tool interaction and multi-agent coordination

The technical implementation demonstrates several important LLMOps best practices.

### Agent Development Evolution

The presentation shows a thoughtful progression in agent development:

1. Basic prompting with few-shot learning
2. Addition of execution tools
3. Implementation of non-deterministic control flow
4. Integration of RAG for improved accuracy
5. Multi-agent coordination

### RAG Implementation Details

Their RAG implementation is particularly noteworthy for production environments:

* Uses both text and embedding indices
* Implements similarity search for context retrieval
* Carefully manages prompt context windows
* Integrates business-specific knowledge and definitions
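To make the retrieval step concrete, the following is a minimal, self-contained sketch of how knowledge-layer entries might be embedded, retrieved by similarity, and folded into a SQL-generation prompt. The example knowledge entries, the toy `embed` function, and the prompt wording are illustrative assumptions, not Numbers Station's actual implementation.

```python
import math

# Hypothetical knowledge-layer entries: table schemas, metric definitions,
# and business context that a unified knowledge layer might expose.
KNOWLEDGE_LAYER = [
    "Table orders(order_id, customer_id, amount, created_at)",
    "Table customers(customer_id, segment, region, signup_date)",
    "Metric 'quarterly revenue': SUM(orders.amount) grouped by calendar quarter",
]

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy hashed bag-of-words embedding so the example runs offline.

    A production system would call a real embedding model here.
    """
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Similarity search over the embedded knowledge layer."""
    q_vec = embed(question)
    ranked = sorted(KNOWLEDGE_LAYER, key=lambda doc: cosine(q_vec, embed(doc)), reverse=True)
    return ranked[:k]

def build_sql_prompt(question: str) -> str:
    """Assemble a prompt for the SQL agent from retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "You are a SQL analyst. Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Return a single SQL query."
    )

if __name__ == "__main__":
    print(build_sql_prompt("What was quarterly revenue by region?"))
```

In practice the retrieved snippets would be deduplicated and truncated to fit the model's context window, and a keyword (text) index would typically be queried alongside the embedding index.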
### Multi-Agent Architecture

The system implements a hierarchical multi-agent architecture:

* A planner agent coordinates the overall workflow
* Specialized agents handle specific tasks (SQL, charting, etc.)
* Shared context is maintained across agents
* Routing and inter-agent communication stay flexible
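As a rough illustration of this coordination pattern, here is a minimal sketch of a planner routing a request to specialized SQL and charting agents over a shared context. The agent names, the keyword-based routing rule, and the shared-context structure are illustrative assumptions rather than Numbers Station's actual framework; in their system the planning step itself would be LLM-driven.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SharedContext:
    """Context carried across all agents in a single workflow run."""
    question: str
    artifacts: dict = field(default_factory=dict)

def sql_agent(ctx: SharedContext) -> None:
    # A real agent would call an LLM with retrieved schema context,
    # then execute the generated SQL against the warehouse.
    ctx.artifacts["sql"] = "SELECT region, SUM(amount) FROM orders GROUP BY region"
    ctx.artifacts["rows"] = [("EMEA", 120), ("AMER", 340)]

def chart_agent(ctx: SharedContext) -> None:
    # Builds a chart specification from rows produced by the SQL agent.
    ctx.artifacts["chart"] = {"type": "bar", "data": ctx.artifacts.get("rows", [])}

AGENTS: dict[str, Callable[[SharedContext], None]] = {
    "sql": sql_agent,
    "chart": chart_agent,
}

def planner(ctx: SharedContext) -> list[str]:
    """Decide which specialized agents to run, and in what order.

    A keyword heuristic stands in here for an LLM-based planning step.
    """
    steps = ["sql"]
    if any(word in ctx.question.lower() for word in ("chart", "plot", "graph")):
        steps.append("chart")
    return steps

def run(question: str) -> SharedContext:
    ctx = SharedContext(question=question)
    for step in planner(ctx):
        AGENTS[step](ctx)  # each agent reads and writes the shared context
    return ctx

if __name__ == "__main__":
    print(run("Chart revenue by region").artifacts["chart"])
```

Routing only to the agents a request actually needs also keeps unnecessary model calls down, a cost concern the production considerations below return to.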
## Production Considerations

The presentation addresses several critical aspects of running LLM systems in production.

### Accuracy and Reliability

* Implementation of retry mechanisms for error handling
* Integration of business context through the knowledge layer
* Confidence scoring and explicit uncertainty communication
* Validation against real-world customer benchmarks

### Scalability and Performance

* Optimization of agent routing to minimize unnecessary model calls
* Hierarchical organization of agents to manage complexity
* Careful consideration of cost implications in multi-agent systems

### Enterprise Integration

* Respect for existing permission systems rather than rebuilding them
* Integration with existing enterprise tools and workflows
* Support for various data sources and formats

### Deployment and Setup

* Evolution from weeks-long setup to hours
* Automated ingestion and processing of metadata
* Reduced need for manual intervention while maintaining quality

## Real-World Results and Lessons Learned

The case study reveals several important insights about running LLM systems in production:

* The importance of grounding LLM outputs in organizational knowledge
* The value of progressive system architecture evolution
* The critical role of metadata and context management
* The balance between automation and human oversight

### Key Success Factors

* Strong focus on accuracy and reliability
* Integration with existing enterprise systems
* Careful management of user expectations
* Flexible and extensible architecture

### Challenges and Solutions

* Managing user expectations vs. system capabilities
* Handling enterprise-specific terminology and context
* Balancing automation with accuracy
* Scaling agent interactions efficiently

The implementation demonstrates a mature approach to LLMOps, showing how careful architecture and implementation choices can create reliable, production-grade AI systems. The focus on practical concerns like accuracy, scalability, and enterprise integration provides valuable lessons for others implementing LLM systems in production environments.