Effortless Orchestration of ZenML Pipelines on Lightning AI's Scalable Infrastructure
Lightning AI Studio is a platform that simplifies the development and deployment of AI applications. The Lightning AI orchestrator is an integration provided by ZenML that allows you to run your pipelines on Lightning AI's infrastructure, leveraging its scalable compute resources and managed environment.
Features with ZenML
- Seamless execution of ZenML pipelines on Lightning AI's scalable, managed infrastructure
- Effortless provisioning and scaling of compute resources, with fine-grained resource allocation for each pipeline step
- Automatic optimization of Lightning AI clusters and dynamic scaling to manage workloads and control costs
- Performance optimizations tailored for ML tasks, ensuring efficient pipeline execution
- Simplified deployment and management of ML workflows without infrastructure hassles
- Integrated monitoring and management through the Lightning AI UI, plus pipeline and artifact monitoring in ZenML's Dashboard
Main Features
- Managed infrastructure for machine learning
- Scalable compute resources, including GPU support
- Optimized environment for ML workloads
- Integrated development and deployment capabilities
- Collaborative workspace for teams
How to use ZenML with Lightning AI
```python
from zenml import pipeline
from zenml.integrations.lightning.flavors.lightning_orchestrator_flavor import (
    LightningOrchestratorSettings,
)

# Configure how the pipeline runs on Lightning AI.
lightning_settings = LightningOrchestratorSettings(
    main_studio_name="my_studio",
    machine_type="gpu",
    async_mode=True,
    custom_commands=["pip install -r requirements.txt"],
)

@pipeline(
    settings={
        "orchestrator.lightning": lightning_settings,
    }
)
def my_pipeline():
    data = load_data()
    model = train_model(data)
    evaluate_model(model, data)
```
This code snippet demonstrates how to configure the Lightning AI orchestrator within a ZenML pipeline. By specifying LightningOrchestratorSettings, you can customize the execution environment, including the studio name, machine type, async mode, and custom setup commands. The pipeline is then decorated with these settings, ensuring that it runs on Lightning AI's infrastructure when executed.
Additional Resources
Read the documentation
Lightning AI Orchestrator Documentation