You install Pipelines with the Pipelines command line installer for Docker and Docker Compose, which automates most installation procedures. The command line installer is also used to change configuration settings, to restart, and to upgrade Pipelines.
Before you proceed with the installation, review the system requirements.
Pipelines makes use of the Artifactory filestore for performing storage functions such as step caching.
Operating Systems and Platform Support
The following table lists the supported operating systems and the versions.
Build nodes only
Amazon Linux 2023
Operating Systems - End of Support
As part of JFrog's commitment to maintaining the security and reliability of the JFrog Platform, Artifactory officially runs with Node.js 20.x on all installation types from Artifactory 7.77.3.
Node.js 20.x provided with Linux Archive/Debian/RPM installations (non-containerized distributions) is not supported on the following operating systems.
Ubuntu 18.04 (reached End of Standard Support on May 31, 2023)
RHEL 7.x - Red Hat Enterprise Linux 7.x (reached End of Full Support in 2019)
CentOS 7.x (reached End of Active Support in 2020)
SUSE Linux Enterprise Server (SLES) 12 SP5 (reaches End of Standard Support on October 31, 2024)
Amazon Linux 2 (no support for Node.js 20.x)
Hence, these operating systems are no longer supported from Artifactory version 7.77.3.
The following table lists the supported platforms.
Installation on Kubernetes environments is through Helm Charts. Supported Helm version is Helm 3+.
From version 7.41.4, Artifactory supports installation on ARM64 architecture through Helm and Docker installations. You must set up an external database as the Artifactory database since Artifactory does not support the bundled database with the ARM64 installation. Artifactory installation pulls the ARM64 image automatically when you run the Helm or Docker installation on the ARM64 platform.
ARM64 support is also available for Xray, Distribution, and Insight. ARM64 support is not available for Pipelines.
Database and Third-Party Applications in Pipelines
For a single node installation of Pipelines, the PostgreSQL database is by default installed on the same node as Pipelines. It may be optionally configured as an external database.
Pipelines HA requires an external database for common use by all HA nodes.
Pipelines supports the following versions of PostgreSQL.
Pipelines uses Redis 6.2.6 and bundles a Redis package along with the installer.
RabbitMQ 3.8 is installed as part of the Pipelines installation on every node. In an HA architecture, it uses queue mirroring between the different RabbitMQ nodes. You should not use any other external RabbitMQ instance.
Pipelines Network Ports
Pipelines uses ports 8082 and 30001 (30200 from Pipelines 1.0 to 1.10.0) by default for external communication.
Pipelines uses the following internal ports by default for communication with JFrog Platform microservices.
Pipelines WWW (UI)
SSH access to the instance
Database (PostgreSQL) access
6379, 16379, 6380, 16380, 6381, 16381
In addition, review the Docker requirements.
For Docker and Docker Compose installations, JFrog services require Docker 20.10.10 and above, and Docker Compose v1.24 and above, to be installed on the machine on which you want to run them.
Ensure that your environment has OpenSSL before you start the installation.
Run the following command to install OpenSSL.
sudo apt-get install openssl
sudo yum install -y openssl
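After installing, you can confirm that OpenSSL is available on the PATH before proceeding; this simply prints the installed version:

```shell
# Print the installed OpenSSL version to confirm it is on the PATH.
openssl version
```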
Ensure that you meet the following requirements before you run the command line installer.
Python version 2.7 or above
Set the proxy for the Docker daemon if you are behind an HTTP or HTTPS proxy server. For more information, see https://docs.docker.com/config/daemon/systemd/ in the Docker documentation.
The command line installer performs the following procedures.
Installs the third-party dependencies (PostgreSQL database and RabbitMQ messaging, included in the archive).
Installs Pipelines.
Connects to Artifactory (using the join key).
Configures the following parameters of the Pipelines service.
Network IP/URL assignments for build node access
Network IP/URL assignments for REST APIs and supplemental UI
Connection to the optional external database
Registry of default build images
Starts the service.
Extract the installer from the downloaded .tar.gz file (see System Directories for the value of the JFrog root directory).
Linux Archive (tar.gz)
mkdir -p installer && tar -C installer -xvzf pipelines-<version>.tar.gz
cd installer/pipelines-<version>
./pipelines --help # prints all the available command line installer options
sudo rpm -Uvh pipelines-<version>.rpm
cd $JFROG_HOME/pipelines/installer
sudo dpkg -i pipelines-<version>.deb
cd $JFROG_HOME/pipelines/installer
Installing Pipelines using RPM or Debian makes the Pipelines command line installer command accessible from any directory.
Installing using the generic Linux installer (.tar.gz) involves extracting the Pipelines files and running the pipelines executable from the extracted directory.
Run the installer.
sudo pipelines install \
  --base-url <jfrog-url> \
  --artifactory-joinkey <join-key> \
  --installer-ip <new-instance-ip> \
  --api-url http://<external-ip>:8082/pipelines/api
A JFrog Platform Custom Base URL must be set for Pipelines to work. If the custom URL is not set for the JFrog Platform, you can provide one during the Pipelines installation with the --base-url-ui <JFrog Platform Custom URL> option. You can also set the custom URL through the JFrog Platform UI; for more information, see General Settings. If a custom URL is already set and you provide a URL with the Pipelines installation, the URL you provide overrides the existing JFrog Platform Custom Base URL.
Perform a health check on the node to verify that it is operating properly.
curl -XGET http://localhost:8046/router/api/v1/topology/health | jq '.'
If you reboot the machine after installation, run pipelines restart to start Pipelines again.
Once the installation is complete, you can access Pipelines as part of the JFrog Platform Deployment.
Access the JFrog Platform from your browser. For example, at:
For Pipelines functions, go to the Pipelines tab in the Application module.
Start configuring Pipelines to create build node pools, add integrations, and add pipeline sources.
It may be necessary to restart Pipelines on a node. For example, if the VM is restarted, Pipelines needs to be restarted for it to start running again.
If Pipelines was installed with sudo and the default $JFROG_HOME, run sudo pipelines restart. Otherwise, run pipelines restart as the user that installed Pipelines and/or with the same $JFROG_HOME environment variable.
You can use the command line options of the Pipelines command line installer to orchestrate a custom configuration of Pipelines. These options can be applied during the install process, and you can also run pipelines install again to change configuration settings.
sudo pipelines install [flags]
You can list all available options using the following command.
pipelines help

Usage: ./pipelines <command> [flags]

Examples:
  ./pipelines install

Commands:
  install    Run Pipelines installation
    --installer-ip                       Internal IP of the host [mandatory]
    --base-url-ui                        Unified UI URL [mandatory]
    --artifactory-joinkey                Join key to connect with Artifactory [mandatory]
    --api-url                            Use provided url for API [mandatory]
    --base-url                           Internal Artifactory URL
    --global-password                    Set one password for all services (db, rabbitmq, pipelines). Can be changed later
    --install-user                       User (and Group) that owns the installer generated files and folders (vagrant by default) e.g. obie, obie:obie, 1001:obie, 1002:1002
    --artifactory-proxy                  Proxy server to use for connecting to Artifactory
    --artifactory-proxy-username         User for the proxy server
    --artifactory-proxy-password         Password for proxy server
    --artifactory-service-id             Service Id to register with Artifactory. Format 'jft@<id>'
    --image-registry-url                 Docker registry url for Pipelines component images
    --image-registry-creds               Path to a file containing Docker credentials for the image registry as an alternative to --image-registry-url
    --build-image-registry-url           Docker registry url for images used by default in Pipelines steps
    --state-bucket                       Root bucket name for storing state
    --skip-image-pull                    Do not pull images
    --db-connection-string               A connection string to use an existing Postgres database
    --vault-url                          URL to connect to an existing Vault
    --vault-root-token                   Root token of the existing Vault specified with --vault-url
    --no-verify-ssl                      If true, pass -no-verify-ssl flag to services
    --global-http-proxy                  HTTP proxy to be used in Pipelines in place of any proxy information fetched from Artifactory
    --global-https-proxy                 HTTPS proxy to be used in Pipelines in place of any proxy information fetched from Artifactory
    --global-no-proxy                    No proxy settings to be used in Pipelines in place of any proxy information fetched from Artifactory
    --access-control-allow-origins       API will return these as allowed origins. A comma-separated list of origins should be provided.
    --disable-call-home                  Disable call home functionality
    --enable-call-home                   Enables call home functionality if previously disabled
    --rabbitmq-health-check-interval     RabbitMQ health check interval in mins
    --artifactory-health-check-interval  Artifactory health check interval in mins
    --db-health-check-interval           Database health check interval in mins
    --config-backup-count                Number of upgrades for which to keep backup configurations
  upgrade    Upgrade current installation
  restart    Restart Pipelines
  clean      Remove Pipelines components and files
  info       Print information about current installation on console
    -j | --json   Print info as json
  version    Print current installation control and build plane versions
    -j | --json   Print info as json
  help       Print this message
Custom Installation Directory
The environment variable JFROG_HOME determines the location of all the configuration files and data stored for the installer. In any installation command, export this variable to the location of your choice.
If the installation is not being run by a user with sudo permissions, grant the user full read/write permissions on the $JFROG_HOME directory.
All installer commands (such as pipelines install) need the JFROG_HOME environment variable to run successfully.
To avoid exporting the variable for each command, you can add it to the global environment file in /etc/environment or to user-specific environment files.
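For example, to point the installer at a custom root directory (the path /opt/jfrog below is only an illustration; choose your own location):

```shell
# Choose a custom JFrog root directory (placeholder path).
export JFROG_HOME=/opt/jfrog
echo "Installer will use: $JFROG_HOME"
# When running the installer with sudo, preserve the variable:
#   sudo -E pipelines install ...
```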
Artifactory Connection Details
Pipelines requires a working Artifactory server and a suitable license.
The Pipelines connection to Artifactory requires the following parameters.
baseUrlUI - URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example:
Optionally, you may also set an internal URL baseUrl for connecting to Artifactory. You may need to do this if you have set up your JFrog Platform Deployment with a load balancer.
Use the --base-url-ui option to set both the baseUrlUI and the internal baseUrl to the same URL, or use both --base-url and --base-url-ui to set them to individual URLs.
--base-url-ui is the user-accessible URL for Artifactory, whereas --base-url is the "internal" route for Artifactory. These URLs will be the same if the entire JFrog Platform is on a private network and users use Artifactory via a private IP. In most cases, --base-url-ui will be a URL, and --base-url will be an internal IP, internal domain name, or load-balancer IP.
In the Helm installer, --base-url should be the internal service URL of Artifactory. This is more efficient and reduces the number of network hops, thus decreasing the probability of breaking the connection during large file uploads/downloads.
joinKey - This is the "secret" key required by Artifactory for registering and authenticating the Pipelines server.
You can fetch the Artifactory joinKey (Join Key) from the JPD UI in the Administration module > User Management > Settings > Join Key. Use the --artifactory-joinkey option to set the joinKey through the command line installer.
External Connections in Pipelines Installation
Pipelines requires network configuration that enables the exchange of messages between the Pipelines CI server (in the controlplane) and all possible build nodes (in the buildplane). Build nodes must be able to command the controlplane through Pipelines REST APIs. Since build nodes may run outside the private network or VPC where Pipelines is installed (for example, in a public cloud), the ports for these channels should be exposed as public IP addresses. This exposure is mitigated by secure protocols and authentication.
These recommended port settings and exposures ensure full functionality of all documented features and usage of all supported build node types and sources. Custom configurations may support a limited subset of Pipelines functionality.
Contact JFrog support for assistance in implementing a custom install.
REQUIRED: Base IP of the Pipelines instance.
IP or URL for REST APIs. For example:
1 Must be accessible to users (same accessibility as base-url-ui)
2 Must be accessible from build nodes (external for cloud VMs)
Instance IP (internal): 10.128.0.16
Pipelines external IP: 22.214.171.124
The API external URL provides access to the Pipelines REST APIs from outside the installation host. This URL must be defined on first install using the --api-url option; there is no default assignment by the command line installer.
The preferred form of this URL is an IP address with the port, followed by the path /pipelines/api. For example:
Alternately, if you have set up your service URL as a domain name through a NAT gateway or load balancer:
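The two forms can be sketched as follows; the IP address and domain name below are placeholders for your own values:

```shell
# IP:port form (instance external IP plus the router port, then the API path):
API_URL_IP="http://203.0.113.10:8082/pipelines/api"
# Domain form, via a NAT gateway or load balancer (placeholder domain):
API_URL_DNS="https://pipelines.example.com/pipelines/api"
echo "$API_URL_IP"
echo "$API_URL_DNS"
```

Either value is what you would pass to the --api-url option on first install.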
When using an external PostgreSQL database, if the database user used in the connection string is not a master user, ensure that the database master user creates the Postgres pg_trgm extension before you install Pipelines.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
By default, Pipelines installs and connects to a PostgreSQL database that runs in the same instance as the Pipelines service. However, it can be configured to use an external PostgreSQL database if needed. This is required for a high availability installation, so that all HA nodes of Pipelines reference a common, shared database.
Use the following commands to create a Pipelines user and database with appropriate permissions before proceeding to install Pipelines. Modify the relevant values to match your specific environment.
CREATE USER pipelines WITH PASSWORD 'password';
CREATE DATABASE pipelinesdb WITH OWNER=pipelines ENCODING='UTF8';
GRANT ALL PRIVILEGES ON DATABASE pipelinesdb TO pipelines;
After you have verified that the script is correct, you need to run it to create the database and proceed with configuring the database.
When installing Pipelines, you must specify the connection string using the following argument in the command line installer. The same database connection details are used for each node installation.
$ sudo pipelines install --db-connection-string postgres://<user>:<pass>@<ip>:<port>/<db>
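As a sketch, the connection string can be assembled from the user and database created above; the host and port here are placeholders for your environment:

```shell
# Placeholder values; substitute your own host, port, and credentials.
DB_USER=pipelines
DB_PASS=password
DB_HOST=10.0.0.5
DB_PORT=5432
DB_NAME=pipelinesdb
# Assemble the postgres:// connection string the installer expects.
DB_CONNECTION_STRING="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DB_CONNECTION_STRING"
# Then: sudo pipelines install --db-connection-string "$DB_CONNECTION_STRING" ...
```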
By default, Pipelines installs and connects to a vault that runs in the same instance as the Pipelines service. However, it can be configured to use an external vault if needed.
When installing Pipelines, you must specify the Vault URL and root token using the --vault-url and --vault-root-token arguments in the command line installer.
To use Vault securely, you must set the disablemlock setting in the Vault configuration to false (see the HashiCorp Vault recommendations).
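Based on the installer options listed above, the Vault connection might be passed as follows; the URL and token are placeholders, and this sketch echoes the command rather than running it:

```shell
# Placeholder Vault endpoint and token; use your own values.
VAULT_URL="https://vault.example.com:8200"
VAULT_ROOT_TOKEN="<root-token>"
# Echo the install command instead of executing it (dry-run sketch).
echo "sudo pipelines install --vault-url $VAULT_URL --vault-root-token $VAULT_ROOT_TOKEN"
```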
The Pipelines installer fetches the platform default configuration from the connected Artifactory instance and injects it into all microservices and static execution nodes. This process ensures that any outgoing connections use the same proxy settings as are being used by the parent Artifactory instance.
Dynamic build nodes are not automatically configured with the proxy settings at this time, nor are username or password settings for the proxy supported in Pipelines.
To configure a different proxy for Pipelines, or a no_proxy setting other than localhost,127.0.0.1 with http_proxy and https_proxy configured in Artifactory, use the following arguments during installation with the command line installer.
$ sudo pipelines install --global-http-proxy <proxy-server> --global-https-proxy <proxy-server> --global-no-proxy <no-proxy>
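For instance, with a single proxy endpoint serving both HTTP and HTTPS and an extended no-proxy list (all values below are placeholders; this sketch echoes the command rather than running it):

```shell
# Placeholder proxy endpoint and no-proxy list; substitute your own.
PROXY="http://proxy.internal.example.com:3128"
NOPROXY_LIST="localhost,127.0.0.1,.internal.example.com"
echo "sudo pipelines install --global-http-proxy $PROXY --global-https-proxy $PROXY --global-no-proxy $NOPROXY_LIST"
```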
The installer allows users to set up state providers using the command line installer. State is used by Pipelines to store:
Test and coverage reports
Users can also use state indirectly to download console logs and artifacts from the UI.
Use --state-bucket <bucket name> to configure the Artifactory repository to use for storing state. If this setting is left blank, a name is automatically generated.
The --install-user <username>:<groupname> argument of the command line installer controls the user and group settings for the files created by the installer. By default, the installation runs as the currently logged-in user, as defined by the $HOME environment variable.
The user and group provided as the arguments must exist before running the installation.
An ssh keypair for the user must exist in $USER_HOME/.ssh:
The public key should be in the file $USER_HOME/.ssh/id_rsa.pub
The private key should be in the file $USER_HOME/.ssh/id_rsa, with permissions set to 600
The user being used must have permissions on $JFROG_HOME
The user being used must be part of the docker group on the host to execute Docker commands
The following dependencies must be installed: Python, jq, yq, curl, nc, psql, and Docker Compose
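The expected key layout can be sketched as follows; here the keys are generated in a temporary directory purely for illustration, whereas the real files live under the install user's $USER_HOME/.ssh:

```shell
# Generate the keypair layout the installer expects, in a throwaway directory.
DEMO_HOME=$(mktemp -d)
mkdir -p "$DEMO_HOME/.ssh"
# Create an RSA keypair with an empty passphrase (demo only).
ssh-keygen -q -t rsa -N "" -f "$DEMO_HOME/.ssh/id_rsa"
# The private key must be readable only by the owner.
chmod 600 "$DEMO_HOME/.ssh/id_rsa"
ls "$DEMO_HOME/.ssh"
```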
Install Image Registry
The --image-registry-url argument of the command line installer specifies the endpoint where the Docker images for the Pipelines services to be installed are stored. By default, the command line installer installs Pipelines from the JFrog distribution registry at releases-docker.jfrog.io. This should not be changed without instruction from JFrog.
To change the registry for runtime build images, use --build-image-registry-url as described below.
Changing the Default Build Image Registry
The standard set of runtime build images is stored at releases-docker.jfrog.io, and the Pipelines command line installer sets this registry location by default.
You may want to copy the build images to a local Docker registry, either to improve image pull times or to avoid requiring access to a remote registry. After copying the images to the new location, you'll need to update Pipelines to use this location. This can be done during installation or as part of an upgrade. Assuming that you have simply moved all of the default images, this just requires setting the --build-image-registry-url option to the new registry when running either pipelines upgrade or pipelines install, to update all the default images for Pipelines. When setting --build-image-registry-url to the new registry, ensure that the new registry allows anonymous access for pulling.
Alternatively, if you want to use multiple registries or change the names of the default images, you can edit the Pipelines System YAML file and then run pipelines upgrade without the --build-image-registry-url option to start using the new image settings.