DeliveryHero's Woowa Brothers division developed an AI API Gateway to manage multiple GenAI providers and streamline development. The gateway acts as a central piece of infrastructure handling credential management, prompt management, and system stability while supporting GenAI services such as AWS Bedrock, Azure OpenAI, and GCP Imagen. The initiative was driven by extensive user interviews and aims to democratize AI usage across the organization while maintaining security and efficiency.
Woowa Brothers offers a useful case study in implementing LLMOps infrastructure to support the growing adoption of generative AI across an organization. It shows how a major e-commerce platform approaches the challenge of managing multiple LLM services and providers in a production environment, with a particular focus on building standardized infrastructure that supports widespread adoption.
# Background and Context
The AI Platform Team at Woowa Brothers identified a critical need to create an efficient environment for GenAI technology usage across their organization. Their approach is particularly noteworthy because it started with extensive user research and interviews to identify the most pressing needs, rather than jumping directly into technical solutions. This human-centered approach to LLMOps infrastructure development helps ensure that the resulting system addresses real user needs rather than theoretical problems.
# Technical Infrastructure Development
The team identified several key components needed for effective GenAI deployment (a minimal interface sketch follows the list):
* API Gateway: Serves as the cornerstone of their GenAI platform, focusing on reducing redundant development work while maintaining system stability and security
* Provider Integration: Support for multiple GenAI providers including AWS Bedrock, Azure OpenAI, and GCP Imagen
* Credential Management: Centralized management of API keys and authentication
* Prompt Management: Unified system for storing and managing prompts across different services
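The case study doesn't include source code, so the following is only a sketch of how such a gateway might expose one interface across providers while keeping credentials and prompts centralized. All class and method names here are assumptions, not details from Woowa Brothers' implementation:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class CompletionRequest:
    prompt_id: str    # key into the central prompt store
    variables: dict   # values substituted into the prompt template
    provider: str     # e.g. "bedrock", "azure-openai", "gcp-imagen"


class ProviderBackend(Protocol):
    """Common contract that each provider adapter implements."""
    def invoke(self, prompt: str, **params) -> str: ...


class AIGateway:
    """Routes requests to provider adapters; callers never hold raw credentials."""

    def __init__(self, backends: dict[str, ProviderBackend], prompt_store: dict[str, str]):
        self._backends = backends      # provider integration behind one interface
        self._prompts = prompt_store   # centralized prompt management

    def complete(self, req: CompletionRequest, **params) -> str:
        template = self._prompts[req.prompt_id]
        prompt = template.format(**req.variables)
        backend = self._backends[req.provider]
        return backend.invoke(prompt, **params)  # credentials live inside the adapter
```

The design point this illustrates is that each team codes against one stable interface, while provider-specific SDK calls and API keys stay inside the adapters.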
The decision to prioritize the API Gateway development was driven by several practical considerations:
* The existing fragmentation of prompts and invocation logic across different services
* The need to standardize access to multiple GenAI providers
* Security requirements for credential management
* The desire to reduce duplicate development effort across teams
# Use Case Implementation
One concrete example of their GenAI work is image processing for the e-commerce platform, specifically improving menu images through out-painting. This shows the team applying GenAI to practical, business-focused problems rather than purely experimental ones.
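The write-up doesn't show how the out-painting call is made. As a reference point only: AWS Bedrock, one of the gateway's supported providers, exposes out-painting through the Amazon Titan Image Generator model. The sketch below follows that public API shape; the model choice, file names, and prompt text are assumptions, not details from the case study.

```python
import base64
import json

import boto3

# Assumption: the gateway wraps a call along these lines. Parameter names
# follow the public Titan Image Generator API, not Woowa Brothers' code.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("menu_photo.png", "rb") as f:
    source_image = base64.b64encode(f.read()).decode("utf-8")

body = json.dumps({
    "taskType": "OUTPAINTING",
    "outPaintingParams": {
        "image": source_image,
        "maskPrompt": "the plated dish",  # keep the dish, regenerate the surroundings
        "text": "clean restaurant table background",
    },
    "imageGenerationConfig": {"numberOfImages": 1, "cfgScale": 8.0},
})

response = bedrock.invoke_model(modelId="amazon.titan-image-generator-v1", body=body)
result = json.loads(response["body"].read())
image_bytes = base64.b64decode(result["images"][0])  # the extended menu image
```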
# Key Infrastructure Components
The team identified several critical features that would be needed for a comprehensive LLMOps platform:
* Experimental Feedback Systems: For collecting and incorporating user feedback to improve GenAI output quality
* Hybrid Search Capabilities: Combining traditional lexical search with modern semantic search approaches (see the rank-fusion sketch after this list)
* LLM Serving Infrastructure: For efficient operation and management of both in-house and open-source LLMs
* Prompt Experimentation Environment: Supporting systematic prompt development and optimization
* RAG Pipeline: For improving response quality through external knowledge integration
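The case study names hybrid search as a desired capability without describing an implementation. One common, generic way to merge lexical and semantic rankings is reciprocal rank fusion (RRF); the snippet below is a sketch of that technique, not Woowa Brothers' code:

```python
def reciprocal_rank_fusion(lexical: list[str], semantic: list[str], k: int = 60) -> list[str]:
    """Merge two ranked lists of document IDs with reciprocal rank fusion.

    RRF combines lexical (e.g. BM25) and semantic (embedding) rankings
    without having to calibrate their raw scores against each other.
    """
    scores: dict[str, float] = {}
    for ranking in (lexical, semantic):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)


# Example: a document ranked well by both retrievers rises to the top.
lexical = ["doc3", "doc1", "doc7"]    # e.g. from a BM25 / keyword index
semantic = ["doc1", "doc9", "doc3"]   # e.g. from a vector index
print(reciprocal_rank_fusion(lexical, semantic))  # ['doc1', 'doc3', ...]
```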
# Architectural Considerations
The API Gateway was designed with several key considerations in mind:
* Centralization: Acting as a hub for managing credentials and prompts (a versioned prompt-store sketch follows this list)
* Scalability: Supporting multiple GenAI providers and services
* Security: Ensuring proper credential management and access control
* Standardization: Creating consistent interfaces for different GenAI services
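The source confirms that prompts are managed centrally but says nothing about the interface. A minimal sketch of what a versioned prompt registry might look like, with hypothetical names throughout:

```python
from dataclasses import dataclass, field


@dataclass
class PromptStore:
    """Hypothetical centralized, versioned prompt registry.

    Versions are append-only, so one service can pin a known-good prompt
    while other teams experiment with newer revisions.
    """
    _versions: dict[str, list[str]] = field(default_factory=dict)

    def publish(self, name: str, template: str) -> int:
        self._versions.setdefault(name, []).append(template)
        return len(self._versions[name])  # 1-based version number

    def get(self, name: str, version: int | None = None) -> str:
        history = self._versions[name]
        return history[-1] if version is None else history[version - 1]


store = PromptStore()
store.publish("menu-description", "Describe {dish} for a delivery menu.")
store.publish("menu-description", "Write an appetizing description of {dish}.")
print(store.get("menu-description", version=1))  # older version stays pinned
```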
# Challenges and Solutions
The team faced several challenges in implementing their LLMOps infrastructure:
* Managing Multiple Providers: The need to support various GenAI services while maintaining consistency
* Security Concerns: Handling sensitive API credentials and access controls
* Standardization: Creating unified interfaces for different types of GenAI services
* User Adoption: Making the system accessible to users without deep AI/ML expertise
# Future Roadmap
The team's approach shows a clear progression in their LLMOps implementation:
* Initial Focus: API Gateway and basic infrastructure
* Planned Extensions: Development of RAG Pipeline, Prompt Experimentation, and Feedback systems (a minimal RAG flow is sketched after this list)
* Long-term Vision: Creating a comprehensive platform for GenAI service development
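The RAG Pipeline is only named as a planned extension, so the following is a generic retrieve-then-generate sketch with a stubbed LLM call and a deliberately naive retriever; in practice, retrieval would use the hybrid search described earlier, and the LLM call would go through the gateway.

```python
def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(docs[d].lower().split())), reverse=True)
    return [docs[d] for d in ranked[:top_k]]


def answer(query: str, docs: dict[str, str], llm) -> str:
    """Ground the model's answer in retrieved context before generating."""
    context = "\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)  # llm would be a completion call through the gateway


docs = {
    "refunds": "Refunds are processed within 3 business days.",
    "delivery": "Standard delivery takes 30 to 45 minutes.",
}
print(answer("How long does delivery take?", docs, llm=lambda p: p[:80]))  # stub LLM
```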
# Impact and Results
While specific metrics aren't provided in the source text, the implementation appears to support several key business objectives:
* Democratization of AI usage across the organization
* Improved efficiency in service development
* Enhanced security and stability in GenAI service deployment
* Standardized approach to managing multiple GenAI providers
# Critical Analysis
The approach taken by Woowa Brothers demonstrates several best practices in LLMOps implementation:
* Starting with user research to identify actual needs
* Prioritizing infrastructure components based on immediate business impact
* Planning for scalability and future expansion
* Focusing on practical business applications rather than just technical capabilities
However, there are some potential limitations to consider:
* The complexity of managing multiple GenAI providers could create additional overhead
* The centralized approach might create bottlenecks if not properly scaled
* The system's success will depend heavily on user adoption and training
This case study provides valuable insights into how large organizations can approach LLMOps implementation in a systematic and user-focused way, while maintaining flexibility for future growth and adaptation.