Company
Tinder
Title
Scaling Trust and Safety Using LLMs at Tinder
Industry
Tech
Year
Summary (short)
Tinder implemented a comprehensive LLM-based trust and safety system to combat various forms of harmful content at scale. The solution involves fine-tuning open-source LLMs using LoRA (Low-Rank Adaptation) for different types of violation detection, from spam to hate speech. Using the Lorax framework, they can efficiently serve multiple fine-tuned models on a single GPU, achieving real-time inference with high precision and recall while maintaining cost-effectiveness. The system demonstrates superior generalization capabilities against adversarial behavior compared to traditional ML approaches.
This case study explores how Tinder, the world's largest dating app, leverages LLMs to enhance its trust and safety operations at scale. The presentation is delivered by Vibor (VB), a senior AI engineer at Tinder who has worked on trust and safety for five years.

## Context and Challenge

Trust and Safety (TNS) at Tinder involves preventing risk, reducing risk, detecting harm, and mitigating harm to protect both users and the platform. As a dating app operating at massive scale, Tinder faces numerous types of violative behavior, ranging from relatively minor issues like sharing social media handles (against platform policy) to severe problems like hate speech, harassment, and sophisticated scams such as "pig butchering" schemes.

The emergence of generative AI has created new challenges in the trust and safety space:

* Content pollution through rapid generation of spam and misinformation
* Potential copyright issues inherited from consumer GenAI tools
* Increased accessibility of deepfake technology, enabling impersonation and catfishing
* Scaled-up spam and scam operations through automated profile and message generation

## Technical Solution Architecture

### Data Generation and Training Set Creation

One of the most challenging aspects of the solution is creating high-quality training datasets. The team developed a hybrid approach:

* Leveraging GPT-4 for initial data generation and annotation
* Using clever prompting to make predictions on internal analytics data
* Applying heuristics to restrict LLM calls to likely candidates
* Manual verification and judgment for ambiguous cases
* Relatively modest data requirements: hundreds to thousands of examples rather than millions

### Model Development and Fine-tuning

Instead of directly using API-based LLMs like GPT-4 in production (which would be cost-prohibitive and suffer from latency and throughput issues), Tinder opts for fine-tuning open-source LLMs.
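The hybrid labeling flow described above — heuristics gate which items reach the expensive LLM annotator, and ambiguous cases go to human review — can be sketched as follows. This is a minimal illustration, not Tinder's actual pipeline: the heuristic pattern, labels, and the stub standing in for a GPT-4 call are all hypothetical.

```python
import re

# Hypothetical heuristic gate: only messages that *might* contain a social
# handle are sent to the (expensive) LLM annotator.
HANDLE_HINT = re.compile(r"(@|insta|snap|telegram|\bIG\b)", re.IGNORECASE)

def llm_label(message: str) -> str:
    """Stub for a GPT-4-style annotation call. A real system would send a
    prompt and parse the model's structured response."""
    return "violation" if "@" in message else "clean"

def build_training_examples(messages):
    examples = []
    for msg in messages:
        if not HANDLE_HINT.search(msg):
            continue  # heuristic gate: skip unlikely candidates entirely
        label = llm_label(msg)
        # Ambiguous or low-confidence cases would be queued for manual
        # review here before entering the training set.
        examples.append({"text": msg, "label": label})
    return examples

candidates = [
    "hey, how was your weekend?",
    "add me on insta @sunny_daze",
    "my snap is brighteyes99",
]
dataset = build_training_examples(candidates)
```

Because the gate filters most traffic before any model call, only a small fraction of candidates incur annotation cost — which is how the team keeps dataset creation to hundreds or thousands of examples.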
They utilize Parameter-Efficient Fine-Tuning (PEFT) techniques, specifically LoRA (Low-Rank Adaptation), which offers several advantages:

* Minimal additional weight parameters (megabytes versus a full model copy)
* Quick fine-tuning on limited GPU resources
* Ability to use larger base models effectively
* Compatibility with inference optimizations

The team leverages the mature open-source ecosystem, particularly Hugging Face libraries, which simplifies the fine-tuning process to a few hundred lines of code. They've had success with:

* Notebook-based training pipelines
* Config-file-based libraries like Axolotl and LLaMA Factory
* Managed solutions like H2O LLM Studio and Predibase for rapid experimentation

### Production Deployment

The production system is built around LoRAX, an open-source framework that enables efficient serving of thousands of fine-tuned models on a single GPU. Key aspects of the production implementation include:

* Efficient adapter management:
  * Multiple LoRA adapters can be served jointly through smart shuffling and batching
  * Negligible marginal cost for serving additional adapters on the same base model
  * Simple deployment process: store the adapter weights and modify the LoRAX client request
* Performance characteristics:
  * Supports 7-billion-parameter models
  * Handles tens of queries per second
  * ~100 ms latency on A10 GPUs
  * Optimized for classification tasks requiring minimal token generation
* Optimization strategies:
  * Request gating using heuristics for high-frequency domains
  * Cascade classification through model distillation
  * Smaller base models optimized for recall as initial filters

## Results and Benefits

The implementation has shown significant improvements over traditional approaches:

* Near-100% recall in simpler tasks like social handle detection
* Substantial precision and recall improvements in complex semantic tasks
* Superior generalization against adversarial behavior
* Better resilience against evasion tactics (typos, number substitution, innuendos)
* Models that maintain effectiveness longer than traditional ML approaches

## Future Directions

Tinder is actively exploring several areas for enhancement:

* Integration with non-textual modalities (e.g., using LLaVA for explicit image detection)
* Expanding coverage for long-tail TNS violations
* Automating training and retraining pipelines
* Building more sophisticated defensive mechanisms against emerging threats

## Technical Insights

The case study offers several valuable insights for LLMOps practitioners:

* The power of combining traditional TNS operations with AI-in-the-loop processes
* The benefits of parameter-efficient fine-tuning for production deployment
* The importance of efficient serving strategies for cost-effective scaling
* The value of leveraging open-source models and tools while maintaining control over critical components

The solution demonstrates how modern LLM technologies can be practically applied to real-world trust and safety challenges while maintaining performance, cost-effectiveness, and scalability. It's particularly noteworthy how Tinder has balanced cutting-edge AI technologies with practical operational constraints and the need for human oversight in sensitive trust and safety operations.
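The request-gating and cascade-classification optimizations from the production section can be sketched as follows. Both model calls are stubs — the keyword lists, threshold, and labels are illustrative, not Tinder's actual logic — but the control flow shows the idea: a cheap, recall-oriented filter handles the bulk of traffic, and only flagged items reach the larger fine-tuned LLM.

```python
def small_model_score(message: str) -> float:
    """Stub for a small, distilled model tuned for high recall.
    A real system would run an inexpensive classifier here."""
    suspicious_tokens = ("crypto", "wire", "@")
    return 0.9 if any(t in message.lower() for t in suspicious_tokens) else 0.1

def llm_classify(message: str) -> str:
    """Stub for the fine-tuned LLM served behind LoRAX."""
    return "scam" if "crypto" in message.lower() else "clean"

def cascade(message: str, recall_threshold: float = 0.5) -> str:
    # Cheap early exit for the vast majority of traffic; the threshold is
    # set low enough that the first stage rarely misses true violations.
    if small_model_score(message) < recall_threshold:
        return "clean"
    # Only candidates flagged by the recall-oriented filter pay the cost
    # of full LLM inference.
    return llm_classify(message)

results = [cascade(m) for m in [
    "want to grab coffee?",
    "I can teach you crypto investing",
]]
```

The design choice mirrors the trade-off described above: the first stage is optimized for recall (catching everything plausible), while the second stage restores precision, keeping GPU cost per message low at Tinder's traffic volumes.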
