This case study explores how Vericant, a company specializing in third-party video interviews for educational institutions, successfully implemented an AI-powered solution to enhance their existing video interview platform. The company, which was later acquired by ETS (Educational Testing Service), demonstrates how even organizations with limited AI expertise can effectively deploy LLM-based solutions in production with minimal resources and technical overhead.
The Context and Business Challenge:
Vericant's core business involves facilitating video interviews for high schools and universities, particularly focusing on international student admissions. Their primary value proposition is making interviews scalable for educational institutions. However, they identified a key pain point: while they could efficiently conduct many interviews, admissions teams still needed to watch entire 15-minute videos to evaluate candidates, creating a bottleneck in the process.
The AI Solution Development Process:
The CEO, Guy Savon, took a remarkably pragmatic approach to building the LLM solution, one that offers valuable insights into practical LLMOps:
Initial Approach:
* The project began with a focus on basic but high-value features: generating short summaries, extracting key points, and identifying main topics from interview transcripts
* They deliberately branded the feature as "AI Insights" and labeled it as beta to set appropriate expectations
* The implementation primarily used OpenAI's GPT models, accessed through the Playground interface rather than a custom integration
Prompt Engineering and Evaluation Framework:
The team developed a systematic approach to prompt engineering and evaluation:
* Created an initial system prompt that positioned the AI as an assistant to admissions officers
* Included specific instructions about tone, language use, and how to refer to students
* Developed the prompt iteratively based on testing results
* For each transcript, they generated three different outputs using the same prompt to assess consistency (a minimal sketch of this step follows this list)
* Created a structured evaluation framework using Google Sheets
* Implemented a simple rating scale (ranging from "amazing" to "unsatisfactory")
* Had team members watch original videos and compare them with AI outputs
* Collected specific feedback about issues and areas for improvement
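Although the team worked in the OpenAI Playground rather than writing code, the same workflow can be reproduced programmatically. Below is a minimal sketch assuming the OpenAI Python SDK; the system prompt wording, model name, and function names are illustrative assumptions, not Vericant's actual prompt or configuration.

```python
# Minimal sketch of the prompt + consistency check described above, using the
# OpenAI Python SDK. The model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt in the spirit described above: position the model
# as an assistant to admissions officers and constrain tone and wording.
SYSTEM_PROMPT = (
    "You are an assistant to a university admissions officer. "
    "Given an interview transcript, produce: (1) a short summary, "
    "(2) key points, and (3) the main topics discussed. "
    "Use a neutral, professional tone and refer to the interviewee as 'the student'."
)

def generate_insights(transcript: str, samples: int = 3) -> list[str]:
    """Generate several outputs from the same prompt to assess consistency."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model; any chat-completions model works
        n=samples,             # three completions per transcript, as in the evaluation
        temperature=0.7,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return [choice.message.content for choice in response.choices]
```

Sampling the same prompt several times at a non-zero temperature surfaces the output variance that a single sample would hide, which is exactly what the three-outputs-per-transcript check was designed to catch.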
Quality Assurance Process:
The evaluation process was particularly noteworthy for its thoroughness:
* Team members reviewed each AI output against the original video
* Used a color-coding system (dark green, green, yellow, red) to visualize quality levels (a small tallying sketch follows this list)
* Gathered detailed comments about specific issues
* Iterated on the prompt based on evaluation results until achieving consistent high-quality outputs
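As a rough illustration of how such a spreadsheet-style review grid can be tallied, here is a short Python sketch. The rating extremes ("amazing", "unsatisfactory") and the four colors come from the description above; the intermediate labels, data structures, and function names are assumptions for illustration only.

```python
# Minimal sketch of tallying the spreadsheet-style reviews described above.
# The rating extremes and colors come from the text; intermediate labels,
# data structures, and function names are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

RATING_COLORS = {
    "amazing": "dark green",
    "good": "green",          # assumed intermediate label
    "acceptable": "yellow",   # assumed intermediate label
    "unsatisfactory": "red",
}

@dataclass
class Review:
    transcript_id: str
    prompt_version: str
    rating: str        # one of the RATING_COLORS keys
    comment: str = ""  # specific issues noted by the reviewer

def tally_by_prompt(reviews: list[Review]) -> dict[str, Counter]:
    """Count color bands per prompt version to see whether a new prompt helped."""
    tally: dict[str, Counter] = {}
    for review in reviews:
        color = RATING_COLORS[review.rating]
        tally.setdefault(review.prompt_version, Counter())[color] += 1
    return tally
```

Keeping one row per transcript and one column per prompt version makes it easy to see at a glance whether an iteration on the prompt moved outputs toward the green end of the scale.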
Implementation Strategy:
Several key aspects of their implementation strategy stand out:
* Developed the solution with minimal engineering resources
* Used existing tools (OpenAI Playground, Google Sheets) rather than building custom infrastructure
* Implemented the solution part-time over a few weeks
* Focused on quick deployment rather than perfection
* Integrated the AI insights directly into their existing admission officer portal
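One way to read the last point is that insights are generated once per interview and stored with the record, so admissions officers see pre-computed text rather than waiting on a model call. The sketch below illustrates that pattern; the field names, beta labeling, and file-based store are assumptions, not Vericant's actual schema.

```python
# Minimal sketch of storing pre-computed "AI Insights" next to an interview
# record so the portal renders stored text instead of calling the model at
# view time. Field names and the file-based store are illustrative assumptions.
import json
from pathlib import Path

def store_insights(interview_id: str, insights: str, out_dir: str = "insights") -> Path:
    """Persist generated insights for the admissions portal to display."""
    record = {
        "interview_id": interview_id,
        "feature": "AI Insights (beta)",  # labeled as beta to set expectations
        "insights": insights,
    }
    path = Path(out_dir) / f"{interview_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return path
```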
Results and Impact:
The implementation has been successful in several ways:
* Reduced interview review time from 15 minutes to 20-30 seconds
* Maintained high quality through careful prompt engineering and evaluation
* Created a new business opportunity by making interview review scalable, not just interview delivery
* Positioned Vericant as an early adopter of AI in their industry
Key LLMOps Lessons:
The case study offers several valuable lessons for organizations implementing LLMs in production:
* Start small but think strategically about scaling
* Use existing tools and platforms rather than building from scratch
* Implement systematic evaluation frameworks
* Focus on iterative improvement
* Be transparent about AI usage and set appropriate expectations
* Consider user experience and integration with existing workflows
The project also demonstrates important principles about LLM deployment:
* The importance of proper prompt engineering
* The value of systematic evaluation frameworks
* The benefit of rapid iteration and testing
* The role of human validation in ensuring quality
* The importance of setting appropriate user expectations through branding and messaging
Future Directions:
The company has identified several opportunities for expansion:
* Development of additional AI-powered features
* Further refinement of the evaluation framework
* Expansion of the AI insights capabilities
* Potential for more automated processing of interviews
This case study is particularly valuable because it demonstrates how organizations can successfully implement LLM solutions without extensive AI expertise or resources. The focus on systematic evaluation, iterative improvement, and practical implementation provides a useful template for other organizations looking to deploy LLMs in production environments.