DoorDash implemented a generative AI-powered self-service contact center solution using Amazon Bedrock, Amazon Connect, and Anthropic's Claude to handle hundreds of thousands of daily support calls. The solution leverages RAG with Knowledge Bases for Amazon Bedrock to provide accurate responses to Dasher inquiries, achieving response latency of 2.5 seconds or less. The implementation reduced development time by 50% and increased testing capacity 50x through automated evaluation frameworks.
# DoorDash Contact Center LLMOps Case Study
## Overview and Context
DoorDash, a major e-commerce delivery platform, faced the challenge of handling hundreds of thousands of daily support requests from consumers, merchants, and delivery partners (Dashers). Their existing interactive voice response (IVR) system, while effective, still directed most calls to live agents, presenting an opportunity for improvement through generative AI.
## Technical Implementation
### Foundation Model Selection and Integration
- Chose Anthropic's Claude models through Amazon Bedrock for their balance of response accuracy and low latency, both critical for a voice channel
- Amazon Bedrock provided fully managed access to the models with no model-serving infrastructure to build or maintain, contributing to the 50% reduction in development time
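As a minimal sketch of what a Claude call through Amazon Bedrock can look like, the snippet below builds a request for the Bedrock Runtime Converse API. The model ID, token limit, and temperature are illustrative assumptions, not DoorDash's actual configuration:

```python
# Hypothetical model ID; the case study does not say which Claude variant was used.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"


def build_converse_request(question: str, max_tokens: int = 512) -> dict:
    """Build the payload for the Bedrock Runtime Converse API.

    A modest max_tokens keeps answers short, which helps meet tight
    voice-channel latency budgets (2.5 s or less in this case study).
    """
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": question}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }


def ask_claude(client, question: str) -> str:
    """Send a question to Claude via Bedrock and return the text reply.

    `client` is a boto3 client created with boto3.client("bedrock-runtime")
    and valid AWS credentials.
    """
    response = client.converse(**build_converse_request(question))
    return response["output"]["message"]["content"][0]["text"]
```

Keeping the payload builder separate from the network call makes the request shape easy to unit test without AWS access.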
### RAG Implementation
- Utilized Knowledge Bases for Amazon Bedrock to implement the RAG workflow
- Incorporated data from public help center documentation
- Knowledge Bases handled document ingestion, chunking, embedding, and retrieval, so no custom vector pipeline had to be built
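A RAG query against Knowledge Bases for Amazon Bedrock can be issued through the Bedrock Agent Runtime `retrieve_and_generate` API, which retrieves relevant help-center passages and grounds the model's answer in them. A sketch, with a placeholder knowledge base ID and model ARN:

```python
def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Payload for the Bedrock Agent Runtime retrieve_and_generate API."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,   # e.g. the help-center knowledge base
                "modelArn": model_arn,      # ARN of the Claude model to generate with
            },
        },
    }


def answer_with_rag(client, question: str, kb_id: str, model_arn: str) -> str:
    """`client` is a boto3 client created with boto3.client("bedrock-agent-runtime")."""
    resp = client.retrieve_and_generate(**build_rag_request(question, kb_id, model_arn))
    return resp["output"]["text"]
```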
### Testing and Evaluation Framework
- Built a comprehensive automated testing system using Amazon SageMaker
- Automated evaluation replaced slow manual review, increasing testing capacity 50x
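The case study does not detail the evaluation logic, but an automated framework of this kind can be sketched as a suite of test cases checked against model answers; the required/forbidden-phrase scheme below is a simplified, hypothetical stand-in for whatever criteria the team actually used:

```python
from dataclasses import dataclass


@dataclass
class EvalCase:
    question: str
    required_phrases: list[str]   # facts a correct answer must contain
    forbidden_phrases: list[str]  # red flags suggesting a hallucination


def evaluate(answer: str, case: EvalCase) -> bool:
    """Pass if all required phrases appear and no forbidden phrase does."""
    text = answer.lower()
    return (all(p.lower() in text for p in case.required_phrases)
            and not any(p.lower() in text for p in case.forbidden_phrases))


def run_suite(answer_fn, cases: list[EvalCase]) -> float:
    """Run every case through the answering function; return the pass rate."""
    results = [evaluate(answer_fn(c.question), c) for c in cases]
    return sum(results) / len(results)
```

Because the suite only needs a callable, the same harness can score a live Bedrock endpoint or a cached batch of responses, which is what makes large-scale automated testing practical.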
### Architecture and Integration
- Integrated with existing Amazon Connect contact center infrastructure
- Leveraged Amazon Lex for initial voice processing
- Implemented secure data handling with no personally identifiable information access
- Built modular architecture allowing for future expansion
## Deployment and Operations
### Development Process
- Collaborated with the AWS Generative AI Innovation Center (GenAIIC)
- 8-week implementation timeline
- 50% reduction in development time using Amazon Bedrock
- Iterative design and implementation approach
- Production A/B testing before full rollout
### Performance Metrics
- Response latency: 2.5 seconds or less
- Volume: Handling hundreds of thousands of calls daily
- Material reduction in call volumes for Dasher support
- Thousands fewer daily escalations to live agents
### Production Monitoring
- Continuous evaluation of response quality
- Tracking of escalation rates
- Monitoring of system latency
- Analysis of user satisfaction metrics
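Two of the monitored quantities above, latency and escalation rate, reduce to simple aggregations over call records. A sketch of how they might be computed (the record shape is an assumption for illustration):

```python
def p95_latency(latencies_s: list[float]) -> float:
    """95th-percentile response latency; the team targeted 2.5 s or less."""
    ordered = sorted(latencies_s)
    # Integer ceiling of 0.95 * n avoids floating-point edge cases.
    idx = max(0, (len(ordered) * 95 + 99) // 100 - 1)
    return ordered[idx]


def escalation_rate(calls: list[dict]) -> float:
    """Fraction of calls escalated to a live agent.

    Each call is assumed to be a dict with an 'escalated' boolean flag.
    """
    if not calls:
        return 0.0
    return sum(1 for c in calls if c["escalated"]) / len(calls)
```

Tracking p95 rather than the mean surfaces the slow tail that callers actually experience on a voice channel.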
## Challenges and Solutions
### Technical Challenges
- Voice application response time requirements
- Need for accurate, non-hallucinated responses
- Integration with existing systems
- Scale requirements for high call volumes
### Implementation Solutions
- Careful model selection focusing on latency and accuracy
- Robust testing framework development
- Gradual rollout with A/B testing
- Integration with existing knowledge bases
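The case study does not describe how callers were split for the production A/B test; one common approach, shown here purely as an illustration, is deterministic hash-based bucketing so that a repeat caller always lands in the same arm:

```python
import hashlib


def ab_bucket(caller_id: str, treatment_pct: int = 10) -> str:
    """Assign a caller to 'treatment' (the generative AI flow) or
    'control' (the existing IVR flow).

    Hashing the caller ID makes the assignment deterministic, so repeat
    callers get a consistent experience while traffic splits roughly
    treatment_pct / (100 - treatment_pct) overall.
    """
    h = int(hashlib.sha256(caller_id.encode()).hexdigest(), 16) % 100
    return "treatment" if h < treatment_pct else "control"
```

Ramping the rollout is then just a matter of raising `treatment_pct` as the metrics above hold up.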
## Future Developments
### Planned Enhancements
- Expanding knowledge base coverage
- Adding new functionality beyond Q&A
- Integrating with event-driven logistics workflow
- Enabling action execution on behalf of users
## Key Learnings and Best Practices
### Technical Insights
- Importance of comprehensive testing frameworks
- Value of RAG for accuracy improvement
- Critical nature of response latency in voice applications
- Benefits of modular architecture for scaling
### Implementation Strategy
- Start with focused use case (Dasher support)
- Iterate based on testing results
- Maintain strong security controls
- Plan for scalability from the start
### Success Factors
- Strong foundation model selection
- Robust testing framework
- Integration with existing systems
- Focus on user experience metrics
- Clear success criteria
## Impact and Results
### Quantitative Outcomes
- 50x increase in testing capacity
- 50% reduction in development time
- 2.5 second or less response latency
- Thousands fewer agent escalations daily
### Qualitative Benefits
- Improved Dasher satisfaction
- Better resource allocation for complex issues
- Enhanced self-service capabilities
- Scalable solution for future growth