Creating Stateful Pipelines

JFrog Pipelines Documentation


Each step in a pipeline may execute on a different node. For this reason, you are not guaranteed that changes made to the environment within one step will persist to subsequent steps.

Stateful pipelines remember information generated by steps and make it available to dependent steps or to successive runs of the same pipeline. This is crucial for achieving end-to-end Continuous Delivery.

Some example use-cases are the following:

  • A step creates information about the commitSha and image/file version that was built, which is then consumed by another step to deploy the correct version into the test environment.

  • A step creates a VPC for the Staging environment and stores information such as the VPC ID, subnets, and security groups, which is required by another step that deploys to the Staging environment.

  • You have a provisioning step that uses Terraform to create the Test environment. At a later stage in your pipeline, you have a deprovisioning step that destroys the test environment. Both these steps need to read and update the Terraform state file so they are aware of the current state of the Test environment.

Types of State

JFrog Pipelines supports three types of state:

  • Run State

  • Pipeline State

  • Resource-based State

Each type of state is characterized by the scope of its information persistence.

Run State

A pipeline's run state persists only between steps within the same run. Information stored in one step is available to subsequent steps in that run. After the run is complete, state may be downloaded when viewing the steps of that run, but it is not available in later runs or to other pipelines.

Note

To preserve state across steps, use the utility functions for run state management.

Pipelines supports two types of run state information that can be preserved between steps.

Key-Value Pairs

Using the add_run_variables utility function, you can store a key-value pair in the run state. That key-value pair is automatically available as an environment variable to all subsequent steps in the run. However, it is only available to steps that depend on the step that added the variable, either directly or through intermediate steps or resources.
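As an illustrative sketch (the pipeline, step, and variable names here are hypothetical), a step can record the version it built and a dependent step can consume it:

```yaml
pipelines:
  - name: demo_run_variables        # hypothetical pipeline name
    steps:
      - name: build
        type: Bash
        execution:
          onExecute:
            # Store a key-value pair in the run state; it becomes an
            # environment variable in dependent steps of this run
            - add_run_variables appVersion=1.0.$run_number
      - name: deploy
        type: Bash
        configuration:
          inputSteps:
            - name: build           # must depend on the step that added the variable
        execution:
          onExecute:
            - echo "Deploying version $appVersion"
```

Note that if the deploy step did not list build (directly or transitively) as an input, $appVersion would not be set.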

Files

Using the add_run_files utility function, a step can store a file in the run state. Any subsequent step can then use the restore_run_files function to retrieve the file from the run state. Unlike run variables, files are available to steps in the same run whether or not the step that added them is an input to the later step. Run state may be downloaded for an individual step, consisting of the files that step uploaded or downloaded.
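A sketch of how this might look (step names and file paths are hypothetical); one step saves a file under a state name, and a later step restores it by that name:

```yaml
pipelines:
  - name: demo_run_files            # hypothetical names throughout
    steps:
      - name: provision
        type: Bash
        execution:
          onExecute:
            - echo '{"subnet":"subnet-abc123"}' > /tmp/vpc_info.json
            # Save the file to the run state under the name "vpc_info"
            - add_run_files /tmp/vpc_info.json vpc_info
      - name: deploy
        type: Bash
        execution:
          onExecute:
            # Restore by state name; no inputSteps dependency is required for files
            - restore_run_files vpc_info /tmp/vpc_info.json
            - cat /tmp/vpc_info.json
```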

Pipeline State

A pipeline's state persists across all runs of the same pipeline. Information stored by a step during one run is available to steps in subsequent runs of that pipeline.

Note

To preserve state across runs of a pipeline, use the utility functions for pipeline state management.

Pipelines supports two types of pipeline state information that can be preserved between runs.

Key-Value Pairs

Using the add_pipeline_variables utility function, you can store a key-value pair in the pipeline state. That key-value pair is automatically available as an environment variable to steps in all subsequent runs.
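A minimal sketch (the pipeline, step, and variable names are hypothetical): a step reads the value recorded by the previous run, then records a new one for the next run:

```yaml
pipelines:
  - name: demo_pipeline_variables   # hypothetical pipeline name
    steps:
      - name: record_release
        type: Bash
        execution:
          onExecute:
            # Set by add_pipeline_variables in an earlier run; empty on the first run
            - echo "Last released version was $lastReleasedVersion"
            # Persist a new value for all subsequent runs of this pipeline
            - add_pipeline_variables lastReleasedVersion=1.0.$run_number
```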

Files

Using the add_pipeline_files utility function, a step can store a file in the pipeline state. Any step, in the same run or a later run, can then use the restore_pipeline_files function to retrieve the file from the pipeline state.
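This is one way the Terraform use case described earlier might be sketched (names are hypothetical, and the example assumes Terraform is installed on the build node): each run restores the state file saved by the previous run, applies changes, and persists the updated state.

```yaml
pipelines:
  - name: demo_pipeline_files       # hypothetical pipeline name
    steps:
      - name: provision
        type: Bash
        execution:
          onExecute:
            # Restore the Terraform state saved by a previous run
            # (this is a no-op on the very first run, when no state exists yet)
            - restore_pipeline_files terraform_state ./terraform.tfstate
            - terraform apply -auto-approve
            # Persist the updated state file for future runs
            - add_pipeline_files ./terraform.tfstate terraform_state
```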

Resource-based State

Using the write_output utility function, key-values can be stored as a property in any output resource. Every step that has the resource as an input can access the key-value information in its scripts as an environment variable.


The environment variable for the value is of the form res_<resource name>_<key name>.

Resource-based state information is persistent across pipelines, so it can be used as a mechanism for passing information from one pipeline to the next.
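As a sketch of cross-pipeline hand-off (the resource, pipeline, and key names are hypothetical, and a PropertyBag resource is assumed), a producer pipeline writes to a shared resource and a consumer pipeline reads it:

```yaml
resources:
  - name: deployInfo                # hypothetical PropertyBag resource
    type: PropertyBag
    configuration:
      commitSha: ""

pipelines:
  - name: producer_pipeline
    steps:
      - name: build
        type: Bash
        configuration:
          outputResources:
            - name: deployInfo
        execution:
          onExecute:
            # Store a key-value pair as a property on the output resource
            - write_output deployInfo commitSha=$GIT_COMMIT   # hypothetical variable

  - name: consumer_pipeline
    steps:
      - name: deploy
        type: Bash
        configuration:
          inputResources:
            - name: deployInfo
        execution:
          onExecute:
            # Read via the res_<resource name>_<key name> environment variable
            - echo "Deploying commit $res_deployInfo_commitSha"
```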