Company
Swiggy
Title
Building a Comprehensive LLM Platform for Food Delivery Services
Industry
E-commerce
Year
2024
Summary (short)
Swiggy deployed a range of generative AI solutions across its food delivery platform, focusing on catalog enrichment, review summarization, and vendor support. It took a platformized approach, building a middle layer for GenAI capabilities, and addressed challenges such as hallucination and latency through careful model selection, fine-tuning, and RAG implementations. The initiative showed promising results in customer experience and operational efficiency across use cases including image generation, text descriptions, and restaurant partner support.
# Swiggy's Journey into Production LLM Systems

Swiggy, a major food delivery platform, implemented a comprehensive LLM strategy throughout 2023-2024, demonstrating a methodical approach to deploying AI systems in production. Their journey showcases important aspects of LLMOps, including model selection, fine-tuning, deployment strategies, and platform development.

# Initial Strategy and Risk Assessment

- Established a dedicated generative AI task force combining Data Science, Engineering, and Strategy teams
- Conducted extensive research, including discussions with 30+ startups, founders, VCs, and corporations
- Implemented a Demand-Risk framework for prioritizing AI initiatives
- Focused on two main categories: Discovery & Search and Automation

# Technical Architecture and Implementation

## Model Selection and Customization

- Used different models for different use cases, chosen per use case's requirements

## Platform Development

- Created a middle layer for generative AI integration
- Implemented shared platform features for GenAI use cases

# Key Use Cases and Implementation Details

## Catalog Enrichment

- Image generation pipeline
- Text description generation

## Review Summarization System

- Leveraged GPT-4 with custom prompts
- Developed internal evaluation metrics
- Ran A/B tests with 2,000+ restaurants
- Focused on trust-building and expectation management

## Restaurant Partner Support

- Developed a RAG-based system for FAQ handling
- Implemented multilingual support (Hindi and English)
- Integrated with WhatsApp for accessibility
- Built a scalable architecture for a growing vendor base

# Production Challenges and Solutions

## Performance Optimization

- Addressed strict latency requirements
- Implemented custom models for real-time use cases
- Used third-party APIs instead of direct OpenAI integration for better governance

## Quality Control

- Developed strategies for hallucination mitigation
- Implemented guardrails for user input
- Created evaluation frameworks for generated content

## Security and Privacy

- Established data usage agreements with service providers
- Implemented PII masking and protection
- Created security protocols for sensitive information

# Lessons Learned and Best Practices

## Development Timeline

- 3-4 months of iteration required for high-ROI use cases
- Importance of conserving bandwidth and maintaining focus
- Need for stakeholder expectation management

## Model Selection Strategy

- GPT preferred for non-real-time use cases
- Custom LLMs for real-time applications
- Third-party API integration for better governance

## Implementation Guidelines

- Extensive internal testing required
- Importance of guardrails for result sanity
- Need for multiple iterations in production deployment
- Focus on sustained ROI over quick wins

# Future Developments

## Planned Improvements

- Expanding successful catalog-related use cases
- Enhancing tooling for GenAI ops
- Refining neural search capabilities
- Scaling the content generation track
- Increasing adoption of the vendor support system

## Infrastructure Evolution

- Continuing development of lightweight tooling
- Expanding Data Science Platform endpoints
- Improving integration capabilities
- Enhancing monitoring and evaluation systems

The case study demonstrates a comprehensive approach to LLMOps, from initial planning and risk assessment through production deployment and continuous improvement. It highlights the importance of proper infrastructure, careful model selection, and robust evaluation frameworks in successful LLM deployments.
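The RAG-based restaurant partner support flow described above — retrieve relevant FAQ entries, then ground the model's answer in them — can be sketched as follows. This is a minimal illustration, not Swiggy's implementation: the FAQ entries, `retrieve`, and `build_prompt` names are all hypothetical, and a production system would use vector embeddings and an actual LLM call rather than word-overlap ranking.

```python
# Illustrative sketch of a retrieve-then-generate FAQ flow.
# All data and helper names here are hypothetical; a real system would
# replace word-overlap ranking with embedding-based vector search and
# send the assembled prompt to an LLM.

faq_entries = [
    {"q": "How do I update my menu?", "a": "Use the partner app's Menu tab."},
    {"q": "When are payouts processed?", "a": "Payouts are settled weekly."},
    {"q": "How do I contest a refund?", "a": "Raise a ticket from the Orders screen."},
]

def retrieve(query: str, k: int = 1) -> list:
    """Rank FAQ entries by word overlap with the query (stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        faq_entries,
        key=lambda e: len(q_words & set(e["q"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved context plus the vendor's question."""
    context = "\n".join(e["a"] for e in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Grounding the prompt in retrieved FAQ text is what limits hallucination here: the model is instructed to answer only from vetted support content rather than from its parametric memory.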
