Seldon

Category: Deployer

Deploy production-grade ML models on Kubernetes with Seldon Core and ZenML

Integrate Seldon Core's powerful model serving capabilities into your ZenML pipelines for seamless deployment of ML models to Kubernetes. This integration enables advanced deployment strategies, model explainability, outlier detection, and efficient management of complex ML workflows in production environments.

Features with ZenML

  • Seamless Model Deployment to Kubernetes
    Effortlessly deploy your ZenML pipeline models to Seldon Core on Kubernetes for production-grade serving.
  • Advanced Deployment Strategies
    Leverage Seldon Core's advanced deployment features like A/B testing, canary releases, and multi-armed bandits within ZenML pipelines.
  • Streamlined Model Monitoring
    Monitor your deployed models' performance, detect outliers, and explain predictions, all integrated with ZenML's tracking capabilities (see the sketch after this list).
  • Customizable Inference Servers
    Deploy custom model serving logic using pre-built inference servers for popular ML frameworks or bring your own custom code.
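Once a model is deployed, you can inspect it through ZenML's Seldon model deployer. The following is a minimal sketch, assuming a Seldon model deployer is registered in your active stack; exact method names and import paths may differ slightly across ZenML versions.

from zenml.integrations.seldon.model_deployers import SeldonModelDeployer

# Fetch the Seldon model deployer registered in the active ZenML stack.
model_deployer = SeldonModelDeployer.get_active_model_deployer()

# Look up services deployed for a given pipeline and model.
services = model_deployer.find_model_server(
    pipeline_name="seldon_deployment_pipeline",
    model_name="my-model",
)

if services:
    service = services[0]
    # The service object exposes runtime state and the inference endpoint.
    print(f"Running: {service.is_running}")
    print(f"Prediction endpoint: {service.prediction_url}")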

Main Features

  • Microservice-based architecture for model serving
  • Built-in model explainability and outlier detection
  • Advanced deployment strategies (A/B testing, canary releases, etc.)
  • REST and gRPC inference endpoints (see the request example after this list)
  • Integration with Kubernetes native tools like Istio and Prometheus
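Deployed models expose Seldon Core's standard prediction API over REST. The sketch below sends a request using Seldon's v1 prediction protocol; the URL is a hypothetical placeholder following Seldon's standard routing pattern, and in practice you would take it from the deployed service (for example, the prediction_url shown in the sketch above).

import requests

# Placeholder URL following Seldon Core's standard ingress routing:
# http://<ingress-host>/seldon/<namespace>/<deployment-name>/api/v1.0/predictions
url = "http://<ingress-host>/seldon/<namespace>/my-model/api/v1.0/predictions"

# Seldon's v1 prediction protocol accepts an ndarray payload.
payload = {"data": {"ndarray": [[1.0, 2.0, 3.0, 4.0]]}}

response = requests.post(url, json=payload)
print(response.json())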

How to use ZenML with Seldon

from zenml import pipeline
from zenml.integrations.seldon.services import SeldonDeploymentConfig
from zenml.integrations.seldon.steps import seldon_model_deployer_step

# NOTE: import path assumed; SeldonResourceRequirements wraps Kubernetes
# resource requests/limits and its module may vary across ZenML versions.
from zenml.integrations.seldon.seldon_client import SeldonResourceRequirements

@pipeline
def seldon_deployment_pipeline():
    model = ...  # output of an upstream training step
    seldon_model_deployer_step(
        model=model,
        service_config=SeldonDeploymentConfig(
            model_name="my-model",
            replicas=1,
            implementation="SKLEARN_SERVER",
            resources=SeldonResourceRequirements(
                requests={"cpu": "100m", "memory": "100Mi"},
                limits={"cpu": "1", "memory": "1Gi"},
            ),
        ),
    )

This code example demonstrates how to deploy a model to Seldon Core using the seldon_model_deployer_step within a ZenML pipeline. The model is configured with a deployment name, number of replicas, server implementation type, and resource requirements. The step seamlessly integrates the model deployment process into the ZenML pipeline flow.
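Running the pipeline triggers the deployment. A minimal invocation, assuming your active ZenML stack includes a Seldon model deployer and a Kubernetes cluster with Seldon Core installed:

if __name__ == "__main__":
    # Executing the pipeline registers the model and deploys it to Seldon Core.
    seldon_deployment_pipeline()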

Additional Resources

  • Seldon Core GitHub repository
  • ZenML Seldon deployment guide
  • Seldon Core documentation
