MLflow
Seamlessly track and visualize ZenML pipeline experiments with MLflow

Integrate the power of MLflow's experiment tracking capabilities directly into your ZenML pipelines. Effortlessly log and visualize models, parameters, metrics, and artifacts produced by your pipeline steps, enhancing reproducibility and collaboration across your ML workflows.

Features with ZenML

  • Seamless integration of MLflow tracking within ZenML steps
  • Automatically link ZenML runs to MLflow experiments for easy navigation
  • Leverage MLflow's intuitive UI to visualize and compare pipeline results
  • Supports various MLflow deployment scenarios for flexibility
  • Secure configuration options using ZenML Secrets (see the setup sketch after this list)

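As a concrete sketch of that last point: a remote MLflow tracking server can be registered as a ZenML experiment tracker with its credentials pulled from a ZenML secret rather than hard-coded. The commands below are an illustration assuming a recent ZenML CLI; the server URL and credential values are placeholders, and {{mlflow_secret.username}} is ZenML's secret-reference syntax, resolved at runtime.

zenml integration install mlflow -y

# Store the tracking server credentials as a ZenML secret.
zenml secret create mlflow_secret \
    --username=admin \
    --password=changeme

# Register the tracker, referencing the secret instead of plain values.
zenml experiment-tracker register mlflow_tracker \
    --flavor=mlflow \
    --tracking_uri=https://mlflow.example.com \
    --tracking_username={{mlflow_secret.username}} \
    --tracking_password={{mlflow_secret.password}}

# Put the tracker on the active stack so steps can reference it by name.
zenml stack update -e mlflow_tracker
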
Main Features

  • Comprehensive experiment tracking and logging
  • Intuitive UI for visualizing and comparing runs
  • Support for a wide range of ML frameworks and languages
  • Flexible deployment options (local, remote server, Databricks)
  • Model registry for streamlined model versioning and deployment (see the sketch after this list)

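The model registry is plain MLflow rather than anything ZenML-specific, so a model trained inside a tracked step can be versioned with a single call. A minimal sketch; the registered model name iris_rf is an illustrative choice, and registration requires a tracking server with a database-backed store:

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier().fit(iris.data, iris.target)

# Log the fitted model and create a new version in the model registry.
with mlflow.start_run():
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        registered_model_name="iris_rf",
    )
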
How to use ZenML with MLflow
import mlflow
from sklearn.base import BaseEstimator
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from zenml import pipeline, step


@step(experiment_tracker="mlflow_tracker")
def train_model() -> BaseEstimator:
    # Autologging captures parameters, metrics, and the model artifact
    # for the fit/predict calls below.
    mlflow.autolog()
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.2, random_state=42
    )
    model = RandomForestClassifier()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # Anything autologging misses can still be logged explicitly.
    mlflow.log_param("n_estimators", model.n_estimators)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, y_pred))
    return model


@pipeline(enable_cache=False)
def training_pipeline():
    train_model()


training_pipeline()

This snippet runs the MLflow experiment tracker inside a ZenML pipeline step. The experiment_tracker argument of the @step decorator names an MLflow experiment tracker registered in the active ZenML stack, and mlflow.autolog() captures parameters, metrics, and the trained model automatically; anything autologging misses can still be logged explicitly with mlflow.log_param and mlflow.log_metric. Running the pipeline records everything against an MLflow experiment, so the results can be compared in the MLflow UI.
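
The logged values can also be queried programmatically once the run has finished. A short sketch using mlflow.search_runs; the experiment name below is an assumption, since ZenML derives it from the tracker settings (often the pipeline name), so check the MLflow UI for the actual name:

import mlflow

# Assumed experiment name; adjust to what the tracker actually created.
runs = mlflow.search_runs(experiment_names=["training_pipeline"])
print(runs[["run_id", "metrics.test_accuracy", "params.n_estimators"]])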

Additional Resources
GitHub: ZenML MLflow Integration Example
ZenML MLflow experiment tracker docs
Official MLflow Documentation
