Last updated: May 1, 2023
In this blog post, we'll explore how ZenML can be used in conjunction with OpenAI's GPT-4 to analyze and version data from a Supabase database. We'll use the you-tldr.com website as an example, showcasing how the site populates an analytics table in Supabase and how ZenML asynchronously picks up the latest video data for analysis.
Large language models (LLMs) like GPT-4 have revolutionized natural language processing, offering unparalleled capabilities for knowledge generation and reasoning. However, incorporating custom, private data into these models remains a challenge. ZenML, an extensible, open-source MLOps framework, can help overcome this limitation by versioning data and allowing for comparisons between summaries rather than raw data.
The key advantage of this project is its ability to analyze enterprise datasets and generate summaries over time. By integrating ZenML, GPT-4, and Supabase, we can create a versatile system applicable to various use cases. For example, one compelling application is in customer support. Imagine using this pipeline to analyze and summarize customer feedback, support tickets, or product reviews. The summaries could help identify common pain points, trends, or areas for improvement, providing valuable insights for product development teams to prioritize features and enhancements based on real customer feedback.
The you-tldr.com Case Study
you-tldr.com is a website that provides concise summaries of YouTube videos. The site populates an analytics table in Supabase with information about the kinds of videos that users are choosing to summarize. We'll demonstrate how ZenML can be used to analyze this data using GPT-4 and version the summaries for comparison.
Creating the ZenML pipeline
Populating a Supabase Analytics Table
The you-tldr.com website updates the Supabase analytics table with the latest video titles. This table serves as the data source for our ZenML pipeline, which will use GPT-4 to generate summaries of visitor activity over the last 24 hours. In its simplest form, each row just holds the title of a summarized video together with a timestamp of when the summary was requested.
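For illustration, the website's backend could record a row with the Python Supabase client along these lines. The table and column names (<code>analytics</code>, <code>title</code>, <code>created_at</code>) are assumptions, and the site itself may use a different client entirely:

```python
from datetime import datetime, timezone

from supabase import create_client

# Connect with the project URL and key (both values are placeholders).
supabase = create_client("https://<project>.supabase.co", "<service-role-key>")

# Record one summarized video. "analytics", "title" and "created_at" are
# illustrative names for the table and its columns.
supabase.table("analytics").insert(
    {
        "title": "<video title chosen by the user>",
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
).execute()
```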
Asynchronously Reading Data in a ZenML pipeline
Once the analytics table is populated, ZenML asynchronously picks up the latest video data and processes it using GPT-4.
The first step is to read the most recent rows from Supabase.
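A condensed sketch of that step is shown below. It assumes the function-based ZenML step API that was current at the time, and it reads the Supabase credentials from environment variables rather than a secret store to keep the example self-contained; parameter defaults and column names are illustrative:

```python
import os
from datetime import datetime, timedelta
from typing import List

from supabase import create_client
from zenml.steps import BaseParameters, step


class SupabaseReaderParams(BaseParameters):
    """Configuration for the reader step (defaults are illustrative)."""

    table_name: str = "analytics"
    summary_column: str = "title"
    filter_date_column: str = "created_at"
    filter_interval_hours: int = 24
    limit: int = 500


@step
def supabase_reader(params: SupabaseReaderParams) -> List[str]:
    """Read the most recent rows from Supabase and return the summary column."""
    # In the real project the credentials would come from a secret store;
    # plain environment variables keep this sketch self-contained.
    supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

    cutoff = datetime.utcnow() - timedelta(hours=params.filter_interval_hours)

    # Keep only rows from the configured time window, newest first,
    # capped at `limit` rows.
    response = (
        supabase.table(params.table_name)
        .select(params.summary_column)
        .gte(params.filter_date_column, cutoff.isoformat())
        .order(params.filter_date_column, desc=True)
        .limit(params.limit)
        .execute()
    )

    return [row[params.summary_column] for row in response.data]
```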
The <code>supabase_reader</code> step is a custom ZenML step that reads data from a Supabase database and returns a list of strings. This step is part of the ZenML pipeline that processes the latest video data from the you-tldr.com website.
The <code>supabase_reader</code> step takes <code>SupabaseReaderParams</code> as input, which includes parameters such as the table name, summary column, filter date column, filter interval hours, and limit. It then connects to the Supabase database using the provided credentials and constructs a query to filter the data based on the specified parameters. In this case, it filters the data for the last 24 hours and orders it in descending order. Finally, it returns a list of strings containing the summary column data.
This step can be easily adapted for other use cases by modifying the input parameters and the query construction. For example, you could use this step to read data from a different table, filter based on different criteria, or return different columns. By customizing this step, you can leverage the power of ZenML and Supabase to process and analyze data from various sources.
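For instance, re-pointing the same step at a hypothetical support-tickets table (to pick up the customer-support scenario from the introduction) is just a matter of different parameters. The table and column names below are made up:

```python
# Builds on the supabase_reader sketch above. All names here are hypothetical:
# read the last week of support ticket subjects instead of video titles.
support_reader = supabase_reader(
    params=SupabaseReaderParams(
        table_name="support_tickets",
        summary_column="subject",
        filter_date_column="opened_at",
        filter_interval_hours=24 * 7,
        limit=1000,
    )
)
```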
Using the GPT-4 API to Summarize Data
The second step in the pipeline passes the latest data to OpenAI's GPT-4 to summarize it. It also compares the result to the last summary that was created, so we retain a point of historical comparison.
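A compressed sketch of this step is below. The repository version uses ZenML's <code>StepContext</code> and secret management to fetch the previous summary and the OpenAI key; to stay self-contained, this sketch falls back on ZenML's post-execution API (whose exact calls and run ordering vary between versions) and an environment variable, and the prompt wording is purely illustrative:

```python
import os
from typing import List

import openai
from zenml.post_execution import get_pipeline
from zenml.steps import BaseParameters, step


class SummarizerParams(BaseParameters):
    """Configuration for the summarizer step (values are illustrative)."""

    model: str = "gpt-4"
    temperature: float = 0.2
    pipeline_name: str = "supabase_summary_pipeline"


@step
def gpt_4_summarizer(params: SummarizerParams, documents: List[str]) -> str:
    """Summarize the latest documents, taking the previous summary into account."""
    # In the repository the key is read from a ZenML secret; an environment
    # variable keeps this sketch self-contained.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Best-effort lookup of the summary produced by the previous run. The
    # post-execution API and the ordering of `runs` differ between ZenML
    # versions, so treat this part as illustrative.
    previous_summary = ""
    try:
        runs = get_pipeline(params.pipeline_name).runs
        if len(runs) > 1:
            previous_summary = runs[1].get_step("gpt_4_summarizer").output.read()
    except Exception:
        pass

    prompt = (
        "Summarize the key insights from the following video titles.\n\n"
        + "\n".join(documents)
    )
    if previous_summary:
        prompt += f"\n\nFor context, the previous summary was:\n{previous_summary}"

    # Chat Completions call using the pre-1.0 openai client that was
    # current when this post was written.
    response = openai.ChatCompletion.create(
        model=params.model,
        temperature=params.temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```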
The <code>gpt_4_summarizer</code> step is a custom ZenML step that leverages GPT-4 to generate summaries of the input data. It takes <code>SummarizerParams</code> and a list of documents as input, along with the <code>StepContext</code> to access previous pipeline runs.
The step first retrieves the secret API key for OpenAI and checks for any previous analysis. If a previous analysis is found, it is included in the input to GPT-4. This allows GPT-4 to consider the previous analysis while generating the new summary.
The step then calls the GPT-4 API with the specified parameters and input data, including the documents and the previous analysis. The API returns a summary, which is then returned as the output of the step.
This custom step can be adapted for other use cases by modifying the input parameters, the GPT-4 model, or the API call. For example, you could use this step to generate summaries for different types of data, use a different language model, or customize the API call to suit your specific needs.
Daily reports on Slack
The last step in the pipeline posts the latest summary to a shared Slack channel every day; a sketch of this step follows the example report below. This lets us keep an overview of how the data changes over time. For example, yesterday the report said:
The latest data from the database indicates the following key insights:
- There is a diverse range of video topics, including technology, economics, philosophy, aviation, health, politics, and entertainment.
- Educational and informative content seems to be popular among users, with videos on subjects like economics, engineering, and artificial intelligence.
- There is a noticeable interest in videos featuring influential figures, such as Steve Jobs and Noam Chomsky.
- Users are also watching content related to personal development and self-improvement, such as videos on Landmark Forum and focusing the unconscious mind.
- There is a significant presence of videos discussing current events and global issues, such as the Russia-Ukraine conflict and climate change.
- Entertainment content includes gaming videos, comedy sketches, and movie reviews.
- Some users are watching videos in languages other than English, indicating a diverse user base.
So now we know much more about how the website is used! Perhaps we could cater more to the needs of students watching educational content, or run affiliate marketing campaigns with self-improvement content creators.
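As promised above, here is a sketch of that reporting step. The repository may well use ZenML's Slack alerter integration instead; this version calls <code>slack_sdk</code> directly, with the token and channel name as placeholders:

```python
import os

from slack_sdk import WebClient
from zenml.steps import step


@step
def slack_poster(summary: str) -> None:
    """Post the daily summary to a shared Slack channel."""
    # Token and channel are placeholders; in practice they would come from
    # a ZenML secret or the Slack alerter stack component.
    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    client.chat_postMessage(channel="#daily-summaries", text=summary)
```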
Overcoming GPT-4 Limitations with ZenML
One of the key insights of this case study is that by versioning the summaries across multiple pipeline runs, we can work around GPT-4's context-window limitations. Instead of feeding the model ever-growing amounts of raw data, we compare compact summaries generated on previous runs, which keeps each prompt small while still allowing meaningful insights and longitudinal analysis.
Seamlessly Transition to Production with ZenML
This project showcases the potential of combining ZenML, GPT-4, and Supabase to analyze enterprise datasets and generate summaries over time, applicable to various use cases. For instance, consider integrating a user table with the summaries generated by this pipeline. The final step could automatically create personalized marketing emails based on the summarized data, enabling targeted and effective customer communication.
ZenML simplifies scaling this pipeline by allowing seamless deployment on production-ready orchestrators like Airflow or Kubeflow. With native versioning on cloud storage and experiment tracking through ZenML's integration with MLflow, you can start locally and effortlessly transition to robust and efficient MLOps pipelines in production, unlocking valuable insights from your enterprise data.
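To make the wiring concrete, here is a rough sketch of how the three steps might be assembled and scheduled under the function-based pipeline API that was current at the time; the pipeline name, module layout, and cron expression are all assumptions:

```python
# The step sketches from earlier in this post, assumed to live in a local module.
from steps import (
    SummarizerParams,
    SupabaseReaderParams,
    gpt_4_summarizer,
    slack_poster,
    supabase_reader,
)
from zenml.config.schedule import Schedule
from zenml.pipelines import pipeline


@pipeline
def supabase_summary_pipeline(reader, summarizer, reporter):
    documents = reader()
    summary = summarizer(documents=documents)
    reporter(summary=summary)


if __name__ == "__main__":
    supabase_summary_pipeline(
        reader=supabase_reader(params=SupabaseReaderParams()),
        summarizer=gpt_4_summarizer(params=SummarizerParams()),
        reporter=slack_poster(),
    ).run(
        # Run every morning at 09:00. On a production orchestrator such as
        # Airflow or Kubeflow, this schedule becomes a native recurring job.
        schedule=Schedule(cron_expression="0 9 * * *"),
    )
```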
Conclusion
This case study demonstrates the power of combining ZenML with GPT-4 to analyze and version data from a Supabase database. By using ZenML to manage data versioning and GPT-4 for analysis, we can overcome the limitations of LLMs and gain valuable insights from our data. If you're interested in leveraging the latest technology for your own projects, consider using ZenML in conjunction with Supabase and OpenAI to unlock the full potential of your data.
The full codebase for the project can be found on GitHub. If you use it, leave a star; it supports our open-source work!