Company
Thumbtack
Title
Building and Implementing a Company-Wide GenAI Strategy
Industry
Tech
Year
2024
Summary (short)
Thumbtack developed and implemented a comprehensive generative AI strategy focusing on three key areas: enhancing their consumer product with LLMs for improved search and data analysis, transforming internal operations through AI-powered business processes, and boosting employee productivity. They established new infrastructure and policies for secure LLM deployment, demonstrated value through early wins in policy violation detection, and successfully drove company-wide adoption through executive sponsorship and careful expectation management.
Thumbtack's journey into implementing LLMs in production offers a comprehensive case study of how a mid-sized technology company approached adopting generative AI across its organization. It is particularly interesting because it showcases both the technical and organizational challenges of implementing LLMs at scale.

The company's approach began in summer 2023, when it recognized the transformative potential of new LLM developments such as GPT-4 and Llama 2. Rather than pursuing ad-hoc implementations, the team developed a structured strategy addressing three main areas: consumer product enhancement, operational transformation, and employee productivity improvement. From a technical implementation perspective, several aspects of their LLMOps journey stand out.

Infrastructure Development:
* The company had already established a dedicated ML infrastructure team in 2022, which proved crucial for its LLM deployment strategy
* They developed a new layer around their existing inference framework specifically designed to handle LLM-related concerns
* The infrastructure supported both internally hosted open-source models (like Llama 2) and external API services (like OpenAI)
* Special attention was paid to PII protection, with systems built to scrub personally identifiable information before any interaction with LLMs
* They created shared clusters for open-source LLMs to enable rapid experimentation by data scientists

Production Implementation Strategy:
* They took a pragmatic approach to deployment, starting with less risky use cases where hallucinations would have minimal impact
* Initial deployments focused on augmenting existing systems rather than replacing them outright
* They implemented LLMs in their search system to better understand customer queries and improve professional matching
* The team developed systems for policy violation detection in which LLMs could augment existing rule-based systems
* They created frameworks for using LLMs to analyze unstructured data, particularly useful for mining product reviews and customer-pro interactions

Risk Management and Security:
* New policies and processes were developed specifically for LLM usage governance
* They implemented strict privacy controls and PII protection measures
* The infrastructure was designed to enable secure access to both internal and external LLM services
* Quality assurance systems were put in place, particularly important for customer-facing applications

Organizational Implementation:
* They established pilot programs, such as GitHub Copilot adoption, to demonstrate value and gather metrics
* The team implemented AI agent assist systems for customer support
* They developed automated quality assurance systems
* Internal productivity tools were enhanced with AI assistants
* Training systems were developed to help conversational AI understand company products and policies

What makes this case study particularly valuable is its focus on the practical aspects of LLM deployment. The team was careful to balance excitement about the technology with practical concerns about hallucinations, security, and privacy. They recognized that different use cases require different approaches: some can tolerate hallucinations (such as exploratory data analysis) while others require strict controls (such as customer-facing applications).
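As a concrete illustration of the PII-scrubbing step described above, a minimal sketch in Python might look like the following. The patterns, placeholder tokens, and function names (`scrub_pii`, `build_prompt`) are illustrative assumptions, not Thumbtack's actual implementation:

```python
import re

# Hypothetical PII-scrubbing step, run before any prompt leaves the
# company's infrastructure. Real systems would cover many more PII types
# (names, addresses, account IDs) and likely use an NER model, not regex.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(
        r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"
    ),
}


def scrub_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before LLM calls."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


def build_prompt(customer_message: str) -> str:
    # Scrub first, then embed the sanitized text in the task prompt
    # that is actually sent to the (internal or external) LLM.
    safe = scrub_pii(customer_message)
    return f"Classify the following customer message:\n{safe}"
```

The design choice worth noting is that scrubbing happens at the prompt-construction boundary, so every use case routed through this layer gets the same protection regardless of which LLM backend ultimately serves the request.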
The implementation strategy showed a sophisticated understanding of LLMOps best practices:
* They maintained optionality between different LLM providers and deployment approaches
* They built reusable infrastructure components rather than one-off solutions
* They integrated LLMs into existing ML infrastructure rather than creating parallel systems
* They implemented proper governance and security controls from the start
* They focused on measurable business outcomes rather than just technical capabilities

The results of the implementation have been positive, with significant efficiency gains in policy violation detection and improved capabilities in data analysis. However, the team maintains a pragmatic view of the technology, actively working to ground expectations and focus on practical applications rather than hype.

Key challenges they faced and addressed included:
* Balancing rapid exploration with the existing product strategy
* Preventing duplication of effort across teams
* Making build-vs-buy decisions for AI infrastructure
* Managing hallucination risks
* Creating appropriate governance frameworks
* Ensuring privacy and security compliance
* Building team capability and confidence

Looking forward, Thumbtack's experience suggests that successful LLMOps requires a combination of technical infrastructure, careful governance, and organizational change management. Their approach of starting with less risky applications and gradually expanding scope appears to be a successful strategy for implementing LLMs in production. This case study provides valuable insight for other organizations looking to implement LLMs, particularly in showing how to balance innovation with practical concerns and how to build the technical and organizational foundations needed for successful LLM deployment at scale.
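The "optionality between different LLM providers" point can be sketched as a thin routing layer that hides whether a request goes to an internally hosted open-source model or an external API. All names here (`LLMRouter`, the backend stubs) are hypothetical illustrations; a real version would wrap actual clients and add auth, retries, and the PII controls discussed earlier:

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class LLMRequest:
    prompt: str
    max_tokens: int = 256


# Each backend is just a callable; production versions would wrap HTTP
# clients for an internal Llama 2 cluster or an external API provider.
def internal_llama_backend(req: LLMRequest) -> str:
    return f"[internal-llama] {req.prompt[:20]}"


def external_api_backend(req: LLMRequest) -> str:
    return f"[external-api] {req.prompt[:20]}"


class LLMRouter:
    """Routes completion requests to a named backend."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[LLMRequest], str]] = {}

    def register(self, name: str, backend: Callable[[LLMRequest], str]) -> None:
        self._backends[name] = backend

    def complete(self, backend_name: str, req: LLMRequest) -> str:
        # Routing by name keeps individual use cases decoupled from any
        # one provider, so backends can be swapped without code changes.
        return self._backends[backend_name](req)


router = LLMRouter()
router.register("llama2-internal", internal_llama_backend)
router.register("openai-external", external_api_backend)
```

A use case would then call `router.complete("llama2-internal", LLMRequest(prompt=...))`, and migrating it to another provider becomes a one-line registry change rather than a rewrite.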