Company: Mastercard
Title: Responsible LLM Adoption for Fraud Detection with RAG Architecture
Industry: Finance
Year: 2024

# Summary
Mastercard implemented LLMs in their fraud detection systems, achieving up to a 300% improvement in detection rates in certain cases. Their approach centered on responsible AI adoption: a RAG (Retrieval Augmented Generation) architecture to handle large volumes of unstructured data, combined with careful attention to access controls and security measures. The case study demonstrates how enterprise-scale LLM deployment requires careful consideration of technical debt, infrastructure scaling, and responsible AI principles.
This case study examines Mastercard's journey in implementing LLMs for fraud detection, highlighting their approach to responsible AI adoption and the technical challenges of deploying LLMs at enterprise scale. The presentation provides valuable insight into the practical considerations and architectural decisions made by a major financial institution in adopting LLM technology.

# Company Background and Problem Space

Mastercard, like many enterprises, faced the challenge of utilizing vast amounts of unstructured data, which makes up more than 80% of their organizational data. Traditional AI approaches, which excel at handling structured data through supervised learning and deep learning, were not sufficient for handling this unstructured information effectively. The company needed a solution that could leverage this data while maintaining their strict security and compliance requirements.

# Technical Implementation

## RAG Architecture Implementation

The company adopted a RAG (Retrieval Augmented Generation) architecture as their primary approach to LLM deployment. This choice was driven by several key advantages:

* Improved factual recall and reduced hallucination
* Ability to keep information up to date through swappable vector indices
* Better attribution and traceability of model outputs
* Enhanced ability to customize responses based on domain-specific data

Their implementation went beyond the basic RAG setup that many organizations use. Instead of treating the retriever and generator as separate components, they implemented a more sophisticated approach based on the original RAG paper from Meta AI (formerly Facebook AI Research). This involved training both components in parallel, using open-source model parameters to fine-tune the generator to produce factual output grounded in the retrieved content.
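To make the retrieve-then-generate flow concrete, the sketch below shows a minimal RAG loop in plain Python. It is illustrative only, not Mastercard's system: the bag-of-words "embedding" stands in for a learned encoder, the `VectorIndex` class stands in for a production vector store (the kind that can be rebuilt and swapped to refresh the knowledge base), and the generator is mocked by a string template rather than an LLM.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system uses a learned encoder.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorIndex:
    """Swappable index: rebuild and replace it to keep knowledge current."""
    def __init__(self, documents):
        self.docs = [(doc, embed(doc)) for doc in documents]

    def retrieve(self, query: str, k: int = 2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

def answer(query: str, index: VectorIndex) -> str:
    # In a real RAG system an LLM is conditioned on the retrieved context;
    # keeping the context alongside the answer enables attribution.
    context = index.retrieve(query)
    return f"Answer grounded in: {context}"

index = VectorIndex([
    "Card-present fraud often involves cloned magnetic stripes.",
    "Chargeback disputes must be filed within a fixed window.",
    "Unusual cross-border spending can signal account takeover.",
])
print(answer("what signals account takeover fraud", index))
```

Because the generator only sees retrieved documents, updating the index (rather than retraining the model) is enough to change what the system knows, which is the "swappable vector indices" property the case study highlights.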
## Infrastructure and Scaling Considerations

The presentation highlighted several critical infrastructure requirements for successful LLM deployment:

* Access to various foundation models
* A specialized environment for customizing contextual LLMs
* User-friendly tools for building and deploying applications
* Scalable ML infrastructure capable of handling dynamic load

A particular challenge noted was that the GPU compute and RAM requirements for inference were actually greater than those needed for training the models, necessitating careful infrastructure planning.

# Responsible AI Implementation

Mastercard's implementation was guided by their seven core principles of responsible AI, with particular emphasis on:

* Privacy protection
* Security measures
* System reliability
* Access control preservation
* Clear governance structure

They developed specialized models for specific tasks rather than implementing a global LLM system, ensuring that access controls and data privacy were maintained at a granular level. This approach allowed them to preserve existing enterprise security policies while leveraging LLM capabilities.

# Technical Challenges and Solutions

## Data Management

The company faced several challenges in managing their domain-specific data:

* Preserving existing access controls when integrating with LLM systems
* Maintaining data freshness and accuracy
* Ensuring proper attribution and traceability of information

## Production Implementation Challenges

The implementation revealed that ML code represents only about 5% of the total effort required to build end-to-end LLM applications.
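One common way to preserve existing access controls in a RAG system is to enforce the caller's entitlements at retrieval time, before any document reaches the model. The sketch below is a hypothetical illustration of that pattern (the `Document`, `SecureRetriever` names and the role scheme are assumptions, not Mastercard's implementation): documents carry the ACLs inherited from their source systems, and retrieval filters on them first.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: frozenset  # ACL inherited from the source system

@dataclass
class SecureRetriever:
    """Enforces the caller's existing entitlements at retrieval time,
    so the LLM never sees content the user could not access directly."""
    corpus: list

    def retrieve(self, query: str, user_roles: set, k: int = 3):
        # Permission filter runs first; similarity ranking (omitted here)
        # would then operate only over the documents the caller may see.
        visible = [d for d in self.corpus if d.allowed_roles & user_roles]
        return visible[:k]

corpus = [
    Document("Merchant risk scores, Q3.", frozenset({"fraud-analyst"})),
    Document("Public dispute-resolution guidelines.",
             frozenset({"fraud-analyst", "support"})),
]
retriever = SecureRetriever(corpus)
print([d.text for d in retriever.retrieve("dispute process", {"support"})])
```

Filtering before generation, rather than asking the model to withhold restricted content, keeps the security boundary in conventional, auditable code, which is consistent with the case study's emphasis on preserving enterprise security policies at a granular level.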
The remaining 95% involved:

* Building robust data pipelines
* Implementing security measures
* Developing monitoring systems
* Creating deployment infrastructure
* Establishing governance frameworks

# Results and Impact

The most significant measurable outcome was the 300% improvement in fraud detection capabilities in certain cases, as announced by Mastercard's president. This achievement demonstrated that, despite the technical debt and challenges involved, LLMs can deliver substantial business value when properly executed.

# Lessons Learned and Best Practices

Several key insights emerged from this implementation:

* Treat LLM implementation as a comprehensive engineering challenge rather than just a model deployment exercise
* Build robust infrastructure that can scale with demand
* Maintain security and access controls when implementing LLMs in an enterprise setting
* Use RAG architecture to improve model reliability and reduce hallucination

The case study also emphasized that while implementing LLMs presents significant technical challenges and introduces technical debt, the transformative potential of the technology can justify these complications when implemented responsibly and with proper consideration for enterprise requirements.

# Current State and Future Considerations

Mastercard continues to develop their LLM capabilities while maintaining their focus on responsible AI principles. They acknowledge that the field is evolving rapidly and that their current implementation will need to adapt to new developments in LLM technology and changing regulatory requirements.

The case study serves as an example of how large enterprises can successfully implement LLM technology while maintaining high standards for security, privacy, and responsible AI practices.
It demonstrates that while the technical challenges are significant, they can be overcome with proper planning, infrastructure, and governance frameworks.
