Deploy flows with Python
Learn how to use the Python SDK to deploy flows to run in work pools.
Prefect offers a flexible way to deploy flows to dynamic infrastructure using the Python SDK. This approach allows you to target specific work pools and utilize dynamically provisioned infrastructure.
Deploying flows to dynamic infrastructure offers a flexible and programmatic approach to managing your Prefect workflows. While `flow.serve` remains a valid and straightforward method for deploying flows to persistent infrastructure, there are scenarios where you might need to leverage dynamic, scalable resources. In these cases, `flow.deploy` provides a powerful alternative that allows you to target specific work pools and utilize dynamically provisioned infrastructure.
When to consider flow.deploy over flow.serve
The `flow.serve` method is an excellent choice for many use cases, especially when you have readily available, persistent infrastructure. However, you might want to consider using `flow.deploy` in the following situations:
- Cost optimization: Dynamic infrastructure can help reduce costs by scaling resources up or down based on demand.
- Resource scarcity: If you have limited persistent infrastructure, dynamic provisioning can help manage resource allocation more efficiently.
- Varying workloads: For workflows with inconsistent resource needs, dynamic infrastructure can adapt to changing requirements.
- Cloud-native deployments: When working with cloud providers that offer serverless or auto-scaling options.
By using `flow.deploy`, you can take advantage of Prefect’s work pool system, which provides a layer of abstraction between your flows and the underlying infrastructure. This approach offers greater flexibility and efficiency in running your workflows when dynamic infrastructure is needed.
Let’s explore how to create a deployment using the Python SDK and leverage dynamic infrastructure through work pools.
Prerequisites
Before deploying your flow using `flow.deploy`, ensure you have the following:
- A running Prefect server or Prefect Cloud workspace: You can either run a Prefect server locally or use a Prefect Cloud workspace. To start a local server, run `prefect server start`. To use Prefect Cloud, sign up for an account at app.prefect.cloud and follow the Connect to Prefect Cloud guide.
- A Prefect flow: You should have a flow defined in your Python script. If you haven’t created a flow yet, refer to the Write Flows guide.
- A work pool: You need a work pool to manage the infrastructure for running your flow. If you haven’t created a work pool, you can do so through the Prefect UI or using the Prefect CLI. For more information, see the Work Pools guide. The examples in this guide use a Docker work pool, which you can create with the command shown after this list.
- Docker: Docker is used to build and store the image containing your flow code. You can download and install Docker from the official Docker website. If you don’t want to use Docker, you can see other options in the Use remote code storage section.
- (Optional) A Docker registry account: While not strictly necessary for local development, having an account with a Docker registry (such as Docker Hub) is recommended for storing and sharing your Docker images.
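For reference, here is one way to create such a Docker work pool with the Prefect CLI. The pool name `my-docker-pool` is an assumption used throughout the examples below:

```bash
prefect work-pool create --type docker my-docker-pool
```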
With these prerequisites in place, you’re ready to deploy your flow using `flow.deploy`.
Deploy a flow with flow.deploy
To deploy a flow using `flow.deploy` and Docker, follow these steps:
Write a flow
Ensure your flow is defined in a Python file. Let’s use a simple example:
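For instance, here is a minimal sketch of a flow saved in a file named `example_flow.py` (the file name and flow logic are illustrative assumptions):

```python
# example_flow.py
from prefect import flow


@flow(log_prints=True)
def my_flow(name: str = "world"):
    # A trivial flow body used only for demonstration
    print(f"Hello, {name}!")
```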
Add deployment configuration
Add a call to `flow.deploy` to tell Prefect how to deploy your flow.
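A minimal sketch of what this might look like, assuming the flow above, the `my-docker-pool` work pool, and an illustrative image name:

```python
# example_flow.py
from prefect import flow


@flow(log_prints=True)
def my_flow(name: str = "world"):
    print(f"Hello, {name}!")


if __name__ == "__main__":
    my_flow.deploy(
        name="my-docker-deployment",      # name of the deployment
        work_pool_name="my-docker-pool",  # work pool created in the prerequisites
        image="my-registry/my-flow:dev",  # tag for the Docker image Prefect builds
        push=False,                       # skip pushing the image to a registry
    )
```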
Deploy!
Run your script to deploy your flow.
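Assuming the script above is saved as `example_flow.py`:

```bash
python example_flow.py
```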
Running this script will:
- Build a Docker image containing your flow code and dependencies.
- Create a deployment associated with the specified work pool and image.
Building a Docker image for our flow allows us to have a consistent environment for our flow to run in. Workers for our work pool will use the image to run our flow.
In this example, we set `push=False` to skip pushing the image to a registry. This is useful for local development; in a production environment, you’ll likely want to push your image to a registry such as Docker Hub.
Where’s the Dockerfile?
In the above example, we didn’t specify a Dockerfile. By default, Prefect will generate a Dockerfile for us that copies the flow code into an image and installs any additional dependencies.
If you want to write and use your own Dockerfile, you can do so by passing a `dockerfile` parameter to `flow.deploy`.
Trigger a run
Now that we have our flow deployed, we can trigger a run via either the Prefect CLI or UI.
First, we need to start a worker to run our flow:
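Assuming the `my-docker-pool` work pool from the examples above:

```bash
prefect worker start --pool my-docker-pool
```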
Then, we can trigger a run of our flow using the Prefect CLI:
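Assuming the flow and deployment names used in the example above:

```bash
prefect deployment run 'my-flow/my-docker-deployment'
```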
After a few seconds, you should see logs from your worker showing that the flow run has started and see the state update in the UI.
Deploy with a schedule
To deploy a flow with a schedule, you can use one of the following options:
- `interval`: Defines the interval at which the flow should run. Accepts an integer or float value representing the number of seconds between runs, or a `datetime.timedelta` object.
- `cron`: Defines when a flow should run using a cron string.
- `rrule`: Defines a complex schedule using an `rrule` string.
- `schedules`: Defines multiple schedules for a deployment. This option provides flexibility for:
  - Setting up various recurring schedules
  - Implementing complex scheduling logic
  - Applying timezone offsets to schedules
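For example, here is a minimal sketch of deploying the earlier flow on an hourly interval (the schedule value and names are assumptions):

```python
from prefect import flow


@flow(log_prints=True)
def my_flow(name: str = "world"):
    print(f"Hello, {name}!")


if __name__ == "__main__":
    my_flow.deploy(
        name="my-scheduled-deployment",
        work_pool_name="my-docker-pool",
        image="my-registry/my-flow:dev",
        push=False,
        interval=3600,  # run every 3600 seconds (one hour)
    )
```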
Learn more about schedules here.
Use remote code storage
In addition to storing your code in a Docker image, Prefect also supports deploying your code to remote storage. This approach allows you to store your flow code in a remote location, such as a Git repository or cloud storage service.
Using remote storage for your code has several advantages:
- Faster iterations: You can update your flow code without rebuilding Docker images.
- Reduced storage requirements: You don’t need to store large Docker images for each code version.
- Flexibility: You can use different storage backends based on your needs and infrastructure.
Using an existing remote Git repository like GitHub, GitLab, or Bitbucket works really well as remote code storage because:
- Your code is already there.
- You can deploy to multiple environments via branches and tags.
- You can roll back to previous versions of your flows.
To deploy using remote storage, you’ll need to specify where your code is stored by using `flow.from_source` to first load your flow code from a remote location.
Here’s an example of loading a flow from a Git repository and deploying it:
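The following is a minimal sketch; the repository URL, entrypoint path, and names are illustrative assumptions:

```python
from prefect import flow

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/my-org/my-repo.git",  # hypothetical Git repository
        entrypoint="flows/example_flow.py:my_flow",      # path to the file and the flow function name
    ).deploy(
        name="my-remote-deployment",
        work_pool_name="my-docker-pool",
    )
```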
The `source` parameter can accept a variety of remote storage options including:
- Git repositories
- S3 buckets (using the `s3://` scheme)
- Google Cloud Storage buckets (using the `gs://` scheme)
- Azure Blob Storage (using the `az://` scheme)
The `entrypoint` parameter is the path to the file containing your flow within the repository, combined with the name of the flow function, separated by a colon.
Learn more about remote code storage here.
Set default parameters
You can set default parameters for a deployment using the `parameters` keyword argument in `flow.deploy`.
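A minimal sketch, assuming a flow that accepts a `name` parameter:

```python
from prefect import flow


@flow(log_prints=True)
def my_flow(name: str = "world"):
    print(f"Hello, {name}!")


if __name__ == "__main__":
    my_flow.deploy(
        name="my-docker-deployment",
        work_pool_name="my-docker-pool",
        image="my-registry/my-flow:dev",
        push=False,
        parameters={"name": "Marvin"},  # default parameter values for runs of this deployment
    )
```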
Note that these parameters can still be overridden on a per-flow-run basis.
Set job variables
You can set default job variables for a deployment using the `job_variables` keyword argument in `flow.deploy`.
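A minimal sketch, assuming you want runs of this deployment to see an extra environment variable (the variable name and value are assumptions):

```python
import os

from prefect import flow


@flow(log_prints=True)
def my_flow():
    # Read the environment variable injected via job variables
    print(f"Running in: {os.environ.get('EXECUTION_ENVIRONMENT', 'local')}")


if __name__ == "__main__":
    my_flow.deploy(
        name="my-docker-deployment",
        work_pool_name="my-docker-pool",
        image="my-registry/my-flow:dev",
        push=False,
        job_variables={"env": {"EXECUTION_ENVIRONMENT": "staging"}},  # overrides the work pool's env settings
    )
```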
The job variables provided will override the values set on the work pool.
Job variables can be used to customize environment variables, resource limits, and other infrastructure options, allowing fine-grained control over your infrastructure on a per-deployment or per-flow-run basis. Any variable defined in the base job template of the associated work pool can be overridden by a job variable.
You can learn more about job variables here.
Deploy multiple flows
To deploy multiple flows at once, use the `deploy` function.
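A minimal sketch using two illustrative flows:

```python
from prefect import deploy, flow


@flow(log_prints=True)
def first_flow():
    print("Hello from the first flow!")


@flow(log_prints=True)
def second_flow():
    print("Hello from the second flow!")


if __name__ == "__main__":
    deploy(
        first_flow.to_deployment(name="first-deployment"),
        second_flow.to_deployment(name="second-deployment"),
        work_pool_name="my-docker-pool",
        image="my-registry/my-flows:dev",
        push=False,
    )
```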
When we run the above script, it will build a single Docker image for both deployments. This approach offers the following benefits:
- Saves time and resources by avoiding redundant image builds.
- Simplifies management by maintaining a single image for multiple flows.
Additional resources