Pipelines HA Installation

JFrog Installation & Setup Documentation


The following describes how to set up a Pipelines HA cluster with two or more nodes. For more information, see the System Architecture.

Pipelines makes use of the Artifactory filestore for performing storage functions such as step caching.

Operating Systems and Platform Support

The following table lists the supported operating systems and the versions.

| Product   | Debian | RHEL | Ubuntu       | Windows Server   | Amazon Linux      |
|-----------|--------|------|--------------|------------------|-------------------|
| Pipelines |        | 8.x  | 20.04, 22.04 | Build nodes only | Amazon Linux 2023 |

Operating Systems - End of Support

As part of JFrog's commitment to maintaining the security and reliability of the JFrog Platform, Artifactory officially runs with Node.js 20.x on all installation types from Artifactory 7.77.3.

Node.js 20.x, as provided with Linux Archive/Debian/RPM installations (non-containerized distributions), is not supported on the following operating systems.

Hence, these operating systems are no longer supported from Artifactory version 7.77.3.

Supported Platforms

The following table lists the supported platforms.

| Product   | x86-64 | ARM64    | Kubernetes | OpenShift |
|-----------|--------|----------|------------|-----------|
| Pipelines | ✔      | ✔ (Helm) | 1.19+      |           |

Installation on Kubernetes environments is through Helm Charts. Supported Helm version is Helm 3+.

ARM64 Support

From version 7.41.4, Artifactory supports installation on ARM64 architecture through Helm and Docker installations. You must set up an external database as the Artifactory database since Artifactory does not support the bundled database with the ARM64 installation. Artifactory installation pulls the ARM64 image automatically when you run the Helm or Docker installation on the ARM64 platform.

ARM64 support is also available for Xray, Pipelines (in Helm installation), and Insight. ARM64 support is not available for Distribution.

Database and Third-Party Applications in Pipelines

For a single node installation of Pipelines, the PostgreSQL database is by default installed on the same node as Pipelines. It may be optionally configured as an external database.

Pipelines HA requires an external database for common use by all HA nodes.

Pipelines supports the following versions of PostgreSQL.

  • 10.x

  • 11.x

  • 12.x

  • 13.x

Pipelines uses Redis 6.2.6 and bundles a Redis package along with the installer.

RabbitMQ 3.8 is installed as part of the Pipelines installation on every node. In an HA architecture, it uses queue mirroring between the different RabbitMQ nodes. You should not use any other external RabbitMQ instance.

Pipelines Network Ports

Pipelines uses ports 8082 and 30001 (30200 from Pipelines 1.0 to 1.10.0) by default for external communication.

Pipelines uses the following internal ports by default for communication with JFrog Platform microservices.

| Microservice                 | Port                                   |
|------------------------------|----------------------------------------|
| Pipelines API                | 8082                                   |
| Pipelines WWW (UI)           | 30001                                  |
| SSH access to the instance   | 22                                     |
| Database (PostgreSQL) access | 5432                                   |
| RabbitMQ                     | 30200                                  |
| RabbitMQ Admin               | 30201                                  |
| Vault                        | 30100                                  |
| Redis Cluster                | 6379, 16379, 6380, 16380, 6381, 16381  |
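A quick way to confirm these defaults are available on a node before installing is to probe each port locally. The following bash sketch is illustrative only and is not part of the JFrog installer:

```shell
#!/usr/bin/env bash
# Pre-flight sketch: check whether anything is already listening on the
# default Pipelines ports listed in the table above.
PORTS=(22 5432 6379 8082 30001 30100 30200 30201)

port_free() {
  # Succeeds (exit 0) when nothing accepts a TCP connection on 127.0.0.1:$1.
  # Uses bash's built-in /dev/tcp pseudo-device, so this requires bash.
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in "${PORTS[@]}"; do
  if port_free "$p"; then
    echo "port $p: free"
  else
    echo "port $p: already in use"
  fi
done
```

A port reported as "already in use" usually means another service (or a previous Pipelines install) must be stopped or reconfigured first.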

Prerequisites

All nodes within the same Pipelines HA installation must run the same Pipelines version.

Licensing

Pipelines HA is supported with an Enterprise Plus License. Each node in the cluster must be activated with a different license.

Database

Pipelines HA requires an external PostgreSQL database. Make sure you have completed setting up your external database before proceeding to install the first node. The database connection details are used for each node installation.

There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.
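As a concrete illustration of how the external database feeds into the node installation, the sketch below builds the connection string expected by the installer's --db-connection-string flag. The host, names, and password are placeholders of this example, not values mandated by JFrog:

```shell
#!/usr/bin/env bash
# Illustrative placeholders only -- substitute your own values.
DB_USER=pipelinesuser
DB_PASS=changeme
DB_HOST=10.0.0.5
DB_PORT=5432
DB_NAME=pipelinesdb

# On the PostgreSQL server, you would first create the role and database,
# e.g. with psql as a superuser:
#   CREATE USER pipelinesuser WITH PASSWORD 'changeme';
#   CREATE DATABASE pipelinesdb OWNER pipelinesuser;

# The value to pass as --db-connection-string during each node installation:
CONN="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$CONN"   # -> postgres://pipelinesuser:changeme@10.0.0.5:5432/pipelinesdb
```

Every HA node must be installed with the same connection string, since all nodes share this one external database.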

RabbitMQ

RabbitMQ is installed as part of the Pipelines installation for every node. In HA architecture, it uses queue mirroring between the different RabbitMQ nodes.

Network
  • All the Pipelines HA components (cluster nodes, database server and RabbitMQ) must be within the same fast LAN.

  • All the HA nodes must communicate with each other through dedicated TCP ports.

  • Network communications between the cluster nodes must be enabled for each of the cluster nodes.

Install HA Using the Pipelines Command Line Installer
Install the First Node
  1. Extract the installer from the downloaded .rpm, .deb, or .tar.gz file, as shown for the single node installation.

  2. Perform the install procedure in the first node using the Pipelines command line installer.

    $ sudo pipelines install \
        --base-url <jfrog-url> \
        --artifactory-joinkey <join-key> \
        --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db> \
        --installer-ip <new-instance-ip> \
        --api-url http://<new-instance-ip>:8082/pipelines/api

    Note: You will need to fetch your jfrogURL (custom base URL) and join key to link your Pipelines installation to the Platform.

    Warning

    A JFrog Platform Custom Base URL must be set for Pipelines to work. If a custom URL is not set for the JFrog Platform, you can provide one during the Pipelines installation with the --base-url-ui <JFrog Platform Custom URL> option. You can also set the custom URL through the JFrog Platform UI. For more information, see General System Settings. If a custom URL is already set and you also provide a URL with the Pipelines installation, the URL you provide with the Pipelines installation overrides the existing JFrog Platform Custom Base URL.

  3. You may perform a health check on the node to confirm it is operating properly.
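One way to script such a check, assuming the node's router listens on the default port 8082 and exposes the standard JFrog router health endpoint (the IP below is a placeholder):

```shell
#!/usr/bin/env bash
# Hedged sketch of a per-node health probe, not an official JFrog tool.
check_node() {
  # curl -f turns HTTP error responses into a non-zero exit status.
  if curl -sf --max-time 5 "http://$1:8082/router/api/v1/system/health" >/dev/null; then
    echo "node $1: healthy"
  else
    echo "node $1: NOT healthy"
  fi
}

NODE_IP=10.0.0.11   # placeholder; use the node's real IP
check_node "$NODE_IP"
```

The same probe can be re-run against each additional node as it is installed.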

Install Additional Nodes

Repeat the following procedure for each additional node.

  1. In the new node instance, extract the installer from the downloaded .rpm, .deb, or .tar.gz file, as performed for the first node.

  2. Copy the file $JFROG_HOME/pipelines/var/etc/system.yaml from the first node instance to the same location on the new instance.

  3. Perform the install procedure in the new node using the Pipelines command line installer.

  4. You may perform a health check on the node to confirm it is operating properly.

Configure the Load Balancer

Once all additional nodes have been installed with an identical version of Pipelines, the load balancer must be configured to distribute requests made through a common base URI.

For example, if you want Pipelines to be accessible as mypipelines.jfrog.io over HTTPS, then the port mapping should be configured as follows:

| URI                              | LB (nginx/ELB)                       | Backend Instance(s) |
|----------------------------------|--------------------------------------|---------------------|
| https://mypipelines.jfrog.io     | [Port: 30001][TCP][SSL termination]  | [Port: 30001]       |
| https://mypipelines-api.jfrog.io | [Port: 8082][HTTP][SSL termination]  | [Port: 8082]        |
| https://mypipelines-msg.jfrog.io | [Port: 30200][TCP][SSL termination]  | [Port: 30200]       |
| https://mypipelines-msg.jfrog.io | [Port: 30201][TCP][SSL termination]  | [Port: 30201]       |
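As one possible sketch of the nginx side of this mapping (certificate paths, upstream IPs, and the use of nginx with the stream module are all illustrative assumptions, not prescribed by JFrog):

```nginx
# Minimal illustrative nginx.conf fragment for the table above.
events {}

http {
    # HTTP row: API traffic on 8082, SSL terminated at the LB.
    upstream pipelines_api {
        server 10.0.0.11:8082;
        server 10.0.0.12:8082;
    }
    server {
        listen 443 ssl;
        server_name mypipelines-api.jfrog.io;
        ssl_certificate     /etc/nginx/certs/pipelines.crt;
        ssl_certificate_key /etc/nginx/certs/pipelines.key;
        location / { proxy_pass http://pipelines_api; }
    }
}

stream {
    # TCP row: UI traffic on 30001, SSL terminated at the LB.
    upstream pipelines_www {
        server 10.0.0.11:30001;
        server 10.0.0.12:30001;
    }
    server {
        listen 30001 ssl;
        ssl_certificate     /etc/nginx/certs/pipelines.crt;
        ssl_certificate_key /etc/nginx/certs/pipelines.key;
        proxy_pass pipelines_www;
    }
    # Similar stream blocks would cover ports 30200 and 30201 (RabbitMQ).
}
```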

Update Nodes

On each node (including the first), run the Pipelines command line installer again to update your installation for the load balanced URI:

$ sudo pipelines install \
    --api-url https://mypipelines-api.jfrog.io/pipelines/api

Pipelines should now be available in your JFrog Platform at https://myartifactory.jfrog.io.

Helm HA Installation
Prerequisites

Before deploying Pipelines using Helm Chart, you will need to have the following in place:

  • An installed Artifactory

  • A pre-created repository jfrogpipelines in Artifactory, of type Generic with a maven-2-default layout

  • A deployed NGINX ingress controller

For more information, see Helm Charts for Advanced Users.

Important

Currently, it is not possible to connect a JFrog product (e.g., Pipelines) that is within a Kubernetes cluster with another JFrog product (e.g., Artifactory) that is outside of the cluster, as this is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster.

High Availability

For an HA Pipelines installation, set the replicaCount in the values.yaml file to a value greater than 1 (the recommended value is 3). It is highly recommended to also configure the RabbitMQ and Redis subcharts to run in high availability mode. Start Pipelines with 3 replicas per service and 3 replicas for RabbitMQ.

  1. Add the JFrog Helm repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io
  2. Update the repository.

    helm repo update
  3. Next, create a unique master key. Pipelines requires a unique master key to be used by all microservices in the same cluster. By default, the chart has one set: the pipelines.masterKey in the values.yaml file (unlike other installations, Helm Chart configurations are made to the values.yaml file and are then applied to the system.yaml).

    Note

    For production-grade installations, it is strongly recommended to use a custom master key. If you initially use the default master key, it will be very hard to change the master key at a later stage. The default key is for demo purposes and should not be used in a production environment.

  4. Generate a unique key and pass it to the template during installation/upgrade.

    # Create a key
    export MASTER_KEY=$(openssl rand -hex 32)
    echo ${MASTER_KEY}
     
    # Pass the created master key to Helm
    helm upgrade --install pipelines --set pipelines.masterKey=${MASTER_KEY} --namespace pipelines jfrog/pipelines

    Alternatively, you can create a secret containing the master key manually and pass it to the template during installation/upgrade.

    # Create a key
    export MASTER_KEY=$(openssl rand -hex 32)
    echo ${MASTER_KEY}
     
    # Create a secret containing the key. The key in the secret must be named master-key
    kubectl create secret generic my-secret --from-literal=master-key=${MASTER_KEY}
     
    # Pass the created secret to Helm
    helm upgrade --install pipelines --set pipelines.masterKeySecretName=my-secret --namespace pipelines jfrog/pipelines

    Note

    In either case, make sure to pass the same master key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set pipelines.masterKey=${MASTER_KEY}. In the second, this means always passing --set pipelines.masterKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.
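Since the same key must accompany every helm call, one convenient pattern (a suggestion of this sketch, not a JFrog requirement) is to generate the key once, persist it with restrictive permissions, and read it back for each upgrade:

```shell
#!/usr/bin/env bash
# Generate the master key once, store it locally, and reuse it on every
# helm install/upgrade. The file location is an arbitrary choice.
KEY_FILE="$HOME/.pipelines-master-key"

if [ ! -f "$KEY_FILE" ]; then
  # umask 077 ensures the key file is created readable only by the owner.
  ( umask 077 && openssl rand -hex 32 > "$KEY_FILE" )
fi

MASTER_KEY=$(cat "$KEY_FILE")
echo "master key length: ${#MASTER_KEY}"   # 64 chars for a hex-encoded 32-byte key

# Then, on every install/upgrade:
#   helm upgrade --install pipelines --set pipelines.masterKey=${MASTER_KEY} \
#     --namespace pipelines jfrog/pipelines
```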

  5. To connect Pipelines to your Artifactory installation, you will need to use a join key. To provide a join key, jfrogUrl, and jfrogUrlUI to your Pipelines installation, retrieve the connection details of your Artifactory installation from the UI. For more information, see Viewing the Join Key and Introduction to the General Security Settings.

    pipelines:
      ## Artifactory URL - Mandatory
      ## If Artifactory and Pipelines are in same namespace, jfrogUrl is Artifactory service name, otherwise its external URL of Artifactory
      jfrogUrl: ""  
       
      ## Artifactory UI URL - Optional
      ## This must be the external URL of Artifactory, for example: https://artifactory.example.com
      ## If you provide a value here, it overrides the JFrog Platform Custom URL. A custom URL is necessary for Pipelines to function.
      ## If the JFrog Platform Custom URL does not exist, you must provide a value with the installation or configure the custom URL through the UI after installation.
      jfrogUrlUI: ""

      ## Join Key to connect to Artifactory
      ## IMPORTANT: You should NOT use the example joinKey for a production deployment!
      joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
     
      ## Pipelines requires a unique master key
      ## You can generate one with the command: "openssl rand -hex 32"
      ## IMPORTANT: You should NOT use the example masterKey for a production deployment!
      masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
  6. You can choose to set the Redis password in the values.yaml file and pass it with the installation by setting redis.usePassword to true.

    The following snippet shows the configuration.

    redis:
      password: "password"
      usePassword: true
  7. Fetch the Pipelines Helm chart to get the required configuration files.

    helm fetch jfrog/pipelines --untar
  8. Configure the installation by editing the local copies of the values-ingress.yaml and values-ingress-passwords.yaml with the required configuration values.

    1. Edit the URLs in the values-ingress.yaml file (Artifactory URL, Ingress hosts, Ingress TLS secrets).

    2. Set the passwords uiUserPassword, postgresqlPassword and auth.password in the local copies.

    3. Set the masterKey and joinKey in the values-ingress-passwords.yaml.

      Note

      Unlike other installations, Helm Chart configurations are made to the values.yaml and are then applied to the system.yaml.

      Follow these steps to apply the configuration changes.

      1. Make the changes to values.yaml.

      2. Run the command.

        helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values.yaml

  9. Install Pipelines.

    kubectl create ns pipelines
    helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml
  10. Access Pipelines from your browser at: http://<jfrogUrl>/ui/, then go to the Pipelines tab in the Application module in the UI.

  11. Check the status of your deployed helm releases.

    helm status pipelines

Note

For advanced installation options, see Helm Charts Installers for Advanced Users.