Creating and Running a Pipeline

JFrog Pipelines Documentation


Note

Before you start, an administrator user must perform the required procedures that connect Pipelines to the machines and services that enable a pipeline to run. See Administrator Users.

The main steps for creating and running a pipeline are as follows:

Step 1: Create a Node Pool

Who can perform this step?

Administrators only.

Other users can go to Application | Pipelines | Node Pools to view the node pools that an administrator has assigned to them.

Description

To run a pipeline, you must provide Pipelines with machines for steps to execute on. In Pipelines, these machines are called nodes, and they are organized into node pools.

Pipelines must be configured with at least one node pool that contains at least one node. One node pool is set as the default node pool and available to all users.

You have a variety of choices in how node pools can be configured. Your nodes can be static (a VM with a fixed IP address) or dynamic (on-demand in a cloud service).

Step(s)

To add a node pool and nodes, from the Administration tab, go to Pipelines | Node Pools, and click Add Node Pool.

Dive Deeper

For information about adding a static or dynamic node pool, see Managing Pipelines Node Pools.

Step 2: Add Integrations

Who can perform this step?

Administrators only.

Other users can go to Application | Pipelines | Integrations to view the integrations that an administrator has assigned to them.

Description

For Pipelines to connect to other services, such as GitHub, Artifactory, or Kubernetes, integrations must be added for those services. The integration must be provided with the URL endpoint for those services and credentials for a user account on that service, along with any other relevant parameters.

Step(s)

To add the integration, from the Administration tab, go to Pipelines | Integrations, and click Add an Integration.


Here, we add a GitHub integration, but you can add an integration for the VCS system you prefer to use, whether that's GitHub Enterprise, GitLab, Bitbucket, or Bitbucket Server. For the full list of all the integrations you can add, see Pipelines Integrations.


After your integration is successfully added, it is listed among the available integrations.


Dive Deeper

For more information, see Managing Pipelines Integrations.

Step 3: Create the Pipeline DSL

Who can perform this step?

Administrators and Developers.

Description

Pipelines are defined using Pipelines DSL, stored in one or more YAML files of key-value pairs, known as a pipeline config.

A Pipelines DSL looks like this:

resources:
  - name: myFirstRepo
    type: GitRepo
    configuration:
      # SCM integration where the repository is located
      gitProvider: {{ .Values.myRepo.gitProvider }} # this will be replaced from values.yml
      # Repository path, including org name/repo name
      path: {{ .Values.myRepo.path }} # this will be replaced from values.yml
      branches:
        # Specifies which branches will trigger dependent steps
        include: master
 
  - name: myPropertyBag
    type: PropertyBag
    configuration:
      commitSha: 1
      runID: 1
 
pipelines:
  - name: my_first_pipeline
    steps:
      - name: p1_s1
        type: Bash
        configuration:
          inputResources:
            # Sets up step to be triggered when there are commit events to myFirstRepo
            - name: myFirstRepo
        execution:
          onExecute:
            # Data from input resources is available as env variables in the step
            - echo $res_myFirstRepo_commitSha
            # The next two commands add variables to run state, which is available to all downstream steps in this run
            - add_run_variables current_runid=$run_id
            - add_run_variables commitSha=$res_myFirstRepo_commitSha
            # This variable is written to pipeline state in p1_s3.
            # So this will be empty during first run and will be set to prior run number in subsequent runs
            - echo "Previous run ID is $prev_runid"
 
      - name: p1_s2
        type: Bash
        configuration:
          inputSteps:
            - name: p1_s1
        execution:
          onExecute:
            # Demonstrates the availability of an env variable written to run state during p1_s1
            - echo $current_runid
 
      - name: p1_s3
        type: Bash
        configuration:
          inputSteps:
            - name: p1_s2
          outputResources:
            - name: myPropertyBag
        execution:
          onExecute:
            - echo $current_runid
            # Writes current run number to pipeline state
            - add_pipeline_variables prev_runid=$run_id
            # Uses a utility function to update the output resource with the commitSha that triggered this run
            # Dependent pipelines can be configured to trigger when this resource is updated
            - write_output myPropertyBag commitSha=$commitSha runID=$current_runid
 
  - name: my_second_pipeline
    steps:
      - name: p2_s1
        type: Bash
        configuration:
          inputResources:
            # Sets up step to be triggered when myPropertyBag is updated
            - name: myPropertyBag
        execution:
          onExecute:
            # Retrieves the commitSha from input resource
            - echo "CommitSha is $res_myPropertyBag_commitSha"
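The gitProvider and path values in the GitRepo resource above are templated placeholders that Pipelines fills in from a values.yml file committed alongside the pipeline config. A minimal sketch of such a file, assuming a GitHub integration named myGithub and an illustrative org/repo path (both names are hypothetical, substitute your own):

```yaml
# values.yml -- supplies the template values referenced as {{ .Values.myRepo.* }}
myRepo:
  gitProvider: myGithub   # name of the integration added in Step 2 (hypothetical name)
  path: myorg/myrepo      # org name/repo name of the source repository (illustrative)
```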

Step(s)

Create a directory named .jfrog-pipelines in your source VCS repository (such as a Git repository), and then commit the DSL .yml file to this directory.
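As a concrete sketch, the commands below create the .jfrog-pipelines directory and commit a DSL file into it. The repository name demo-repo and the file name pipelines.yml are illustrative; in practice you work in your existing clone and push to the remote that Pipelines will watch.

```shell
# Create a throwaway local repo for illustration (use your real clone instead).
git init -q demo-repo
mkdir -p demo-repo/.jfrog-pipelines

# Commit the Pipelines DSL file into the .jfrog-pipelines directory.
cat > demo-repo/.jfrog-pipelines/pipelines.yml <<'EOF'
pipelines:
  - name: my_first_pipeline
EOF
git -C demo-repo add .jfrog-pipelines/pipelines.yml
git -C demo-repo -c user.email=you@example.com -c user.name=you commit -qm "Add Pipelines DSL"

# Then push so the pipeline source (Step 4) can sync it, for example:
# git -C demo-repo push origin master
```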

Dive Deeper

For more information, see Defining a Pipeline.

Step 4: Add a Pipeline Source

Who can perform this step?

Administrators only.

Other users can go to Application | Pipelines | Pipelines Sources to view the pipeline sources that an administrator has assigned to them.

Description

For Pipelines to read and sync the Pipelines DSL from the source VCS repository, you must tell it where to find it by adding a pipeline source. This is best performed only after the Pipelines DSL file is checked into the source repository, so that Pipelines can sync the file immediately.

Step(s)

To add a pipeline source, from the Administration tab, go to Pipelines | Pipelines Sources, and click Add Pipeline Source.


Once the pipeline source is successfully added, Pipelines syncs the DSL file and creates the declared resources and pipelines.


Dive Deeper

For more information, see Managing Pipeline Sources.

Step 5: Run the Pipeline

Who can perform this step?

Administrators and Developers.

Description

Trigger either a manual or an automatic run of the pipeline.

Step(s)

Note

The following are the steps for running a pipeline using the new UI. If you want to perform the same steps using the old UI, see the expandable section below.

  1. To browse pipelines loaded from configured pipelines sources, in the Application tab, go to Pipelines | My Pipelines.

    After your Git repo has been added as a pipeline source, you can see your pipeline in My Pipelines.

  2. Click the name of the pipeline to see a real-time, interactive diagram of the pipeline and the results of its most recent run.


    A pipeline can be defined to trigger execution when a new commit is made to the Git repo. You can also execute the pipeline manually by clicking the Run button or by triggering the first step.