Hello there! Hamza here again, back with the second edition of the ZenML newsletter. Last time was mostly housekeeping, but this time I'll cut right to the chase. Lots of exciting updates from ZenML, including a computer vision webinar, a brand-new LLMOps guide, and more!
2024: The most exciting year for MLOps yet
Like last time, I thought I'd start with some reflection. It's been such a crazy 20 months with the GenAI market push that MLOps has been seemingly forgotten, the lost child of a bygone era. We used to be the cool kid on the block. Now look at us.
All of this is just the hype cycle playing out, of course. I feel like the whole GenAI wave actually accelerated MLOps out of the trough of disillusionment faster than expected. It's more obvious than ever that companies will be driven by AI in the future, and it follows that adopting an underlying MLOps standard is a no-brainer for any company that wants to sharpen its AI competitive edge.
Let's take the adoption of LLMs as an example. As I wrote about before, it's easy to prototype an LLM use case, but it's another thing to adopt it in production. The story often goes as follows:
- Try a quick PoC with RAG.
- Realize it's good for 80% of use cases, but not quite there yet.
- Think about improvements - start from reranking and go all the way to automated evaluation and finetuning.
- Find yourself in chaos because you just ran through a whirlwind of manual processes. Go back. Automate and repeat.
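The first step above, a quick RAG PoC, can be sketched in a few lines of plain Python. This is an illustrative toy, not ZenML's API or the guide's code: retrieval here is simple word overlap rather than embeddings, and the actual LLM call is left out.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most word tokens with the question."""
    q_words = tokenize(question)
    return max(documents, key=lambda d: len(q_words & tokenize(d)))

def build_prompt(question: str, context: str) -> str:
    """Assemble the grounded prompt that would be sent to an LLM."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "ZenML pipelines run on stacks composed of cloud infrastructure.",
    "Reranking reorders retrieved documents before generation.",
]
question = "What is reranking?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

Each later step in the list, reranking, automated evaluation, finetuning, replaces or wraps one of these pieces, which is exactly where a pipeline framework starts to pay off.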
Sound familiar? Yes, that's no different from anything else we've seen whenever somebody has tried to put ML into production. And yes, that's where you need MLOps and ML engineering expertise. It's just that we've been through almost two years of PoCs, and we're only now starting to see all this play out. As more and more ML practitioners start to optimize and specialize their models for their particular use cases, we'll see the plateau of productivity for MLOps. And I can't wait!
Help us to improve ZenML
Your opinion is crucial to us! Join a user testing session and help us improve ZenML. Together, we can build something incredible.
ZenML Product Corner
LLMOps Guide
Want to see how RAG works in ~50 lines of code, with no Langchain or other wrappers? Or maybe how you can evaluate RAG applications in ~65 lines of code? Check out our new LLMOps guide to learn more about how to use LLMs with ZenML.
Easy cloud stacks
A stack is the configuration of tools and infrastructure that your pipelines can run on. Wouldn't it be nice to easily follow a tutorial on how to set a stack up for your cloud environment? Well, we released the cloud guide just for this purpose! Check it out in the docs.
Lambda Labs support in ZenML
We released our integration with Lambda Labs - good timing with their recent $500m raise! 😁 Automate your MLOps pipelines using Lambda's awesome GPU resources as an orchestration service.
Fresh from the community
Introducing ZenML Studio
The ZenML VSCode extension seamlessly integrates with ZenML to enhance your MLOps workflow within VSCode. It is designed to accurately mirror the current state of your ZenML environment within your IDE, ensuring a smooth and integrated experience. Huge shoutout to community member Marwan Zaarab who pushed this through <3!
Install ZenML Studio for VS Code now
You build it, you run it
Do you believe in "you build it, you run it" for MLOps? The ADEO team is pushing the boundaries of reliability with their approach to machine learning development, recently detailed in a technical deep-dive blog post by Mathieu Deleu. They're handling upwards of twenty million visits a week on their websites, and we're super proud of how ZenML plays a key role in their MLOps stack. 💼📈
If you have any questions or need assistance, feel free to join our Slack community.
Happy Developing!
Hamza Tahir