FAQ

Find answers to the most frequently asked questions about ZenML.

This page addresses common questions about ZenML, including general information about the project and how to accomplish specific tasks.

About ZenML

Why did you build ZenML?

We built ZenML because we scratched our own itch while deploying multiple machine-learning models in production over the past three years. Our team struggled to find a simple yet production-ready solution while developing large-scale ML pipelines, so we built one that we are now proud to share with all of you! Read more about this backstory on our blog.

Is ZenML just another orchestrator like Airflow, Kubeflow, Flyte, etc?

Not really! An orchestrator in MLOps is the system component responsible for executing and managing the execution of an ML pipeline. ZenML is a framework that allows you to run your pipelines on whatever orchestrator you like, and it coordinates with all the other parts of an ML system in production. There are standard orchestrators that ZenML supports out of the box, but you are encouraged to write your own orchestrator in order to gain more control over exactly how your pipelines are executed!
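
To make the distinction concrete, here is a minimal, illustrative sketch of a ZenML pipeline. The step and pipeline names are invented for this example, and the imports assume a reasonably recent ZenML release; the same code runs on the default local orchestrator or on whichever orchestrator your active stack provides.

```python
from zenml import pipeline, step


@step
def load_data() -> dict:
    """Load a toy dataset."""
    return {"features": [[1.0], [2.0], [3.0]], "labels": [0, 1, 1]}


@step
def train_model(data: dict) -> float:
    """Pretend to train a model and return a score."""
    return 0.9


@pipeline
def training_pipeline():
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    # Executed by whichever orchestrator the active ZenML stack defines
    # (the local orchestrator by default).
    training_pipeline()
```

Switching stacks changes where and how this pipeline runs without touching the pipeline code itself.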

Can I use the tool X? How does the tool Y integrate with ZenML?

Take a look at our documentation (in particular the component guide), which contains instructions and sample code for each integration that ZenML supports out of the box. You can also check out our integration test code to see active examples of many of our integrations in action.

The ZenML team and community are constantly working to include more tools and integrations in the list above (check out the roadmap for more details). You can upvote the features you'd like and add your ideas to the roadmap.

Most importantly, ZenML is extensible, and we encourage you to use it with whatever other tools you require as part of your ML process and system(s). Check out our documentation on how to get started with extending ZenML to learn more!

Which license does ZenML use?

ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE.md file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.

Platform Support

Do you support Windows?

ZenML officially supports Windows if you're using WSL. Much of ZenML will also work on Windows outside a WSL environment, but we don't officially support it, and some features don't work (notably anything that requires spinning up a server process).

Do you support Macs running on Apple Silicon?

Yes, ZenML does support Macs running on Apple Silicon. You just need to make sure that you set the following environment variable:

export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES

This is a known issue with how forking works on Macs running on Apple Silicon, and setting this variable enables ZenML and the local server to run correctly. You only need it if you are running a local server on your Mac; if you're just using ZenML as a client / CLI and connecting to a deployed server, you don't need to set it.
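
If you want your own entrypoint scripts to warn about the missing variable before a local server is started, a small illustrative check (plain Python, not a ZenML API) might look like this:

```python
import os
import platform

# Purely illustrative: warn if the fork-safety workaround has not been
# exported on an Apple Silicon Mac before starting a local ZenML server.
if platform.system() == "Darwin" and platform.machine() == "arm64":
    if os.environ.get("OBJC_DISABLE_INITIALIZE_FORK_SAFETY") != "YES":
        print(
            "Hint: run `export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES` "
            "before starting a local ZenML server on Apple Silicon."
        )
```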

Common Use Cases and How-To's

How do I contribute to ZenML's open-source codebase?

We develop ZenML together with our community! To get involved, the best way to get started is to select any issue from the good-first-issue label.

Please read our Contribution Guide for more information. For small features and bug fixes, please open a pull request as described in the guide. For anything bigger, it is worth posting a message in Slack or creating an issue so we can best discuss and support your plans.

How do I add custom components to ZenML?

Please start by reading the general documentation page on implementing a custom stack component, which offers some general advice on what you'll need to do.

From there, each of the custom stack component types has a dedicated section about adding your own custom components. For example, to add a custom orchestrator, you would visit the dedicated custom orchestrator page.

How do I mitigate dependency clashes with ZenML?

Check out our dedicated documentation page on some ways you can try to solve these dependency and versioning issues.

How do I deploy cloud infrastructure and/or MLOps stacks?

ZenML is designed to be stack-agnostic, so you can use it with any cloud infrastructure or MLOps stack. Each of the documentation pages for stack components explains how to deploy these components on the most popular cloud providers.

How do I deploy ZenML on my internal company cluster?

Read the documentation on self-hosted ZenML deployments, in which several options are presented.

How do I implement hyperparameter tuning?

Our dedicated documentation guide on implementing hyperparameter tuning is the place to learn more; a rough sketch of the basic pattern follows below.
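
As a rough illustration of that fan-out pattern (the names and values below are invented for this example), you can call a training step once per hyperparameter value inside the pipeline body:

```python
from zenml import pipeline, step


@step
def train_model(learning_rate: float) -> float:
    """Pretend to train a model and return an evaluation score."""
    # Replace with real training code; the returned score is a stand-in.
    return 1.0 - learning_rate


@pipeline
def hyperparameter_search_pipeline():
    # One invocation per value; ZenML gives each invocation its own ID
    # and tracks its outputs as separate artifacts.
    for learning_rate in (0.001, 0.01, 0.1):
        train_model(learning_rate=learning_rate)


if __name__ == "__main__":
    hyperparameter_search_pipeline()
```

In a real search you would typically add a downstream step that compares the recorded scores and selects the best configuration.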

How do I reset things when something goes wrong?

To reset your ZenML client, you can run zenml clean, which will wipe your local metadata database and reset your client. Note that this is a destructive action, so feel free to reach out to us on Slack before doing this if you are unsure.

How do I create dynamic pipelines and steps?

Please read our general information on how to compose steps + pipelines together to start with. You might also find the code examples in our guide to implementing hyperparameter tuning helpful, as it is closely related to this topic. A rough sketch of the idea follows below.
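
As a sketch of what that can look like in practice (all names below are invented for this example), the pipeline body is plain Python, so loops and conditionals can change which steps end up in a run:

```python
from zenml import pipeline, step


@step
def process_shard(shard_id: int) -> int:
    """Pretend to process one shard of data."""
    return shard_id


@step
def summarize() -> str:
    """Pretend to summarize the run."""
    return "done"


@pipeline
def dynamic_pipeline(num_shards: int = 3, with_summary: bool = True):
    # The pipeline body runs when the pipeline is composed, so ordinary
    # Python control flow decides how many step invocations are created.
    for shard_id in range(num_shards):
        process_shard(shard_id=shard_id)
    if with_summary:
        summarize()


if __name__ == "__main__":
    dynamic_pipeline(num_shards=5, with_summary=False)
```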

How do I use templates and starter code with ZenML?

Project templates allow you to get going quickly with ZenML. We recommend the Starter template (starter) for most use cases; it gives you a basic scaffold and structure around which you can write your own code. You can also build templates for others inside a Git repository and use them with ZenML's templates functionality.

How do I upgrade my ZenML client and/or server?

Upgrading your ZenML client package is as simple as running pip install --upgrade zenml in your terminal. For upgrading your ZenML server, please refer to the dedicated documentation section, which covers most of the ways you might do this as well as common troubleshooting steps.

How do I use a specific stack component?

For information on how to use a specific stack component, please refer to the component guide, which contains all our tips and advice on how to use each integration and component with ZenML.

Community and Support

How can I speak with the community?

The first point of contact should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond.
