In modern machine learning workflows, understanding the nuances of your pipeline performance is essential for making informed decisions. Today, we're introducing the Experiment Comparison Tool, an addition to ZenML's Pro tier that brings in-dashboard experiment tracking capabilities to your MLOps workflow.
The complexity of machine learning pipelines generates rich metadata at every step—from model performance metrics to system telemetry. The Experiment Comparison Tool transforms this wealth of information into actionable insights, enabling teams to make data-driven decisions with confidence.
Core Capabilities
Our approach to metadata and metrics analysis combines power with simplicity, offering two complementary views that cater to different analytical needs.
Comprehensive Cross-Pipeline Run Analysis
The tool supports comparison of up to 20 pipeline runs simultaneously, accommodating any numerical (`float` or `int`) metadata your pipelines generate. This flexibility means you can track everything from model accuracy to custom performance indicators, all within a unified interface.
Structured Tabular Analysis
The tabular view provides a methodical approach to run comparison:
- Clear tabular presentation of sequential pipeline runs
- Automatic comparative analysis between runs
- Key run metadata (model, stack, start time, run executor) presented alongside your custom metadata
Advanced Parallel Coordinates View
The parallel coordinates visualization enables multi-dimensional analysis, with interactive parameter exploration, filtering, and grouping. As long as the values are numeric, we'll plot them for you here.
Share Visualizations
The Experiment Comparison Tool is designed with team collaboration in mind. Each visualization configuration is preserved in the URL, allowing you to share specific analysis views with team members. This feature ensures consistent analysis across your organization and helps you have meaningful discussions about pipeline performance, all grounded in your run metadata.
When to Use ZenML's Experiment Comparison Tool
The MLOps ecosystem offers several mature experiment tracking solutions like MLflow and Weights & Biases, each serving distinct needs in the machine learning workflow. While these tools excel at model-centric workflows—tracking training metrics, comparing architectures, and conducting hyperparameter optimization—ZenML's Experiment Comparison Tool takes a different approach by focusing on pipeline-level insights and operational metrics.
The tool shines when you need to understand your ML pipelines holistically, tracking and comparing operational metrics, system telemetry, and pipeline-specific metadata alongside your model metrics. Since it's integrated directly into your pipeline orchestration workflow, you can analyze everything from processing times and resource utilization to data preprocessing statistics without adding new tools to your stack.
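For instance, a preprocessing step might record its own wall-clock time and row counts as run metadata so they appear alongside your model metrics in the comparison views. Here's a minimal sketch, assuming a recent ZenML release where `log_metadata` can be called from inside a step; the step and key names are illustrative, so check the metadata-logging docs for the exact API in your version:

```python
import time

from zenml import log_metadata, step


@step
def preprocess_data(raw_rows: list) -> list:
    """Toy preprocessing step that logs operational metrics as run metadata."""
    start = time.time()

    cleaned = [row for row in raw_rows if row]  # stand-in for real preprocessing

    # Hypothetical keys; any numerical (float or int) values logged here
    # become comparable across runs in the dashboard.
    log_metadata(
        metadata={
            "preprocessing_seconds": time.time() - start,
            "rows_in": len(raw_rows),
            "rows_out": len(cleaned),
        }
    )
    return cleaned
```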
The Experiment Comparison Tool complements rather than replaces traditional experiment trackers. Many teams use both: ZenML for pipeline-level insights and operational metrics, alongside specialized tools for in-depth model experimentation and development.
Getting Started
The Experiment Comparison Tool is available now for Pro-tier users. Getting set up is straightforward:
- Configure your pipelines to log numerical metadata
- Access the dashboard through your Pro account
- Select your pipeline runs
- Explore both visualization modes to uncover insights
Full documentation on logging metadata is available in our docs. Your code to log metadata or metrics might look something like this minimal sketch (exact helper names may vary between ZenML versions):
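```python
from zenml import log_metadata, step


@step
def evaluate_model(predictions: list, labels: list) -> float:
    """Toy evaluation step that logs metrics the comparison tool can pick up."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)

    # The keys are illustrative; any numerical values logged as metadata
    # become available for cross-run comparison in the dashboard.
    log_metadata(
        metadata={
            "accuracy": accuracy,
            "num_test_samples": len(labels),
        }
    )
    return accuracy
```

Once a few runs have logged values under the same keys, select those runs in the dashboard and compare them in either the tabular or the parallel coordinates view.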
Looking Forward
The Experiment Comparison Tool represents our initial step into experiment tracking functionality. While the current implementation provides a solid foundation for pipeline analysis, we're keen to understand how teams use it in practice. Your feedback will be instrumental in shaping the tool's evolution, ensuring it continues to meet the real-world needs of MLOps teams.
Ready to enhance your pipeline analysis? Explore the Experiment Comparison Tool in your Pro-tier dashboard today.
If you have any feedback about the comparison tool, please let us know over in the Slack community!