Installing Xray


You can install and configure JFrog Xray as a single node or as a high availability (HA) cluster. For additional information on high availability, see High Availability.

Before installing Xray 3.x, you must first install JFrog Artifactory 7.x.

Note

Use a dedicated server with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

Warning

If you install Xray on the same server as Artifactory, you might encounter port conflicts and performance issues. Both Artifactory and Xray use port 8082 for communication with the external network, and both use ports 8082, 8046, 8047, and 8049 for the Router microservice.
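Before installing on a shared host, you can check whether these ports are already in use. A minimal check, assuming a Linux host with the ss utility available; an empty result means the ports are free:

# List TCP listeners on the ports shared by Artifactory and Xray
ss -tln | grep -E ':(8082|8046|8047|8049)'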

To learn about the JFrog Platform Deployment, see System Architecture.

Admin Permissions for Installation

We recommend that you run the installation as a root user or provide sudo access to a non-root user.

Whether you need admin permissions on the installation machine depends on the installer type.

  • Native installer - always requires admin permissions

  • Archive installer - requires admin permissions only during installation

  • Docker installer - does not require admin permissions

Before you proceed, see System Requirements for information on supported platforms, supported browsers, and other requirements.

Xray system requirements depend on the size of your environment.

Up to 100K indexed artifacts and 1K artifacts/builds per day

  • Processor: Xray and DB: 6 cores; JAS: 6 cores

  • Memory: Xray and DB: 24 GB; JAS: 24 GB

  • Disk Space: Xray and DB: 500 GB (SSD, 3000 IOPS); JAS: 500 GB (SSD, 3000 IOPS)

Up to 1M indexed artifacts and 10K artifacts/builds per day

  • Processor: Xray (x2 nodes): 4 cores; DB: 8 cores; JAS (x2 nodes): 8 cores

  • Memory: Xray (x2 nodes): 8 GB; DB: 32 GB; JAS (x2 nodes): 24 GB

  • Disk Space: Xray (x2 nodes): 300 GB; DB: 500 GB (SSD, 3000 IOPS); JAS (x2 nodes): 300 GB

Up to 2M indexed artifacts and 20K artifacts/builds per day

  • Processor: Xray (x3 nodes): 6 cores; DB: 16 cores; JAS (x4 nodes): 8 cores

  • Memory: Xray (x3 nodes): 12 GB; DB: 32 GB; JAS (x4 nodes): 24 GB

  • Disk Space: Xray (x3 nodes): 300 GB; DB: 1 TB (SSD, 3000 IOPS); JAS (x4 nodes): 300 GB

Up to 10M indexed artifacts and 50K artifacts/builds per day

  • Processor: Xray (x3 nodes): 8 cores; DB: 16 cores; JAS (x8 nodes): 8 cores

  • Memory: Xray (x3 nodes): 24 GB; DB: 64 GB; JAS (x8 nodes): 24 GB

  • Disk Space: Xray (x3 nodes): 300 GB; DB: 2.5 TB (SSD, 3000 IOPS); JAS (x8 nodes): 300 GB

Over 10M indexed artifacts and 50K artifacts/builds per day

  • Contact JFrog Support for sizing requirements.

Note

The number of nodes in the table refers to high availability (HA) setups, not disaster recovery.

Node Recommendations

Use a dedicated node for Xray with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

Storage Recommendations

In most cases, we recommend using an SSD drive for Xray for better performance. We do not recommend using an NFS drive: Xray is a disk I/O-intensive service, a slow NFS server can suffer from I/O bottlenecks, and NFS is mostly used for storage replication.

Xray stores node-specific files, such as configuration and temporary files, on the disk. These files are used exclusively by Xray and are not shared with other services. Since the local storage used by the Xray services is temporary, it does not require replication between the different nodes in a multi-node/HA deployment.

File Handle Allocation Limit

Use the following command to determine the current file handle allocation limit.

cat /proc/sys/fs/file-max

Then, set the following parameters in your /etc/security/limits.conf file to the lower of 100,000 or the file handle allocation limit determined above.

The following example shows the relevant parameters in the /etc/security/limits.conf file set to 100000. The actual setting for your installation may differ depending on the file handle allocation limit of your system.

root hard nofile 100000
root soft nofile 100000
xray hard nofile 100000
xray soft nofile 100000
postgres hard nofile 100000
postgres soft nofile 100000
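To verify that the new limits take effect, check the effective open-file limits in a fresh session. A minimal sketch; the su command assumes the xray user exists and requires root:

# Soft and hard open-file limits for the current shell
ulimit -Sn
ulimit -Hn

# Limits as seen by the xray service user (-s /bin/sh in case its login shell is disabled)
su - xray -s /bin/sh -c 'ulimit -Sn; ulimit -Hn'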

Supported Platforms

Xray is supported on the following platforms.

  • Debian: 10.x, 11.x

  • CentOS*: 7.x

  • RHEL: 7.x, 8.x

  • Ubuntu: 18.04, 20.04, 22.04

  • Windows Server: Not supported

  • Helm Charts: 3.x

  • SLES: Not supported

Breaking Change Affecting RPM/Yum/Linux Archive Installations on CentOS 7.x*

As part of our commitment to maintain the security and reliability of your JFrog Platform, from Artifactory version 7.43.x, JFrog Artifactory officially runs with JDK 17 and Tomcat 9.x on all installation types. Note that JDK 17 and Tomcat 9.x are not supported on all CentOS 7.x versions.
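If you are not sure which release a host runs, you can check it before choosing an installer:

# Print the distribution name and version (systemd-based distributions)
cat /etc/os-release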

CentOS 8.x Support

CentOS 8.x reached its end of life in December 2021. CentOS 8.x support for JFrog products was deprecated at the end of June 2022.

Database and Third-Party Applications in Xray

Every artifact and build indexed by Xray is broken down into multiple components. These components, and the relationships between them, are represented in a checksum-based component graph. Xray uses PostgreSQL to store and query this component graph.

Xray supports the following versions of PostgreSQL.

  • 10.x

  • 11.x

  • 12.x

  • 13.x (from version 3.18)
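If you plan to use an existing PostgreSQL instance, you can confirm that its version falls within this range. A minimal check, assuming the psql client is installed and can reach your database host (placeholder values):

# Client version
psql --version

# Server version; replace the host and user with your own
psql -h <db-host> -U postgres -c 'SELECT version();'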

RabbitMQ is installed as part of the Xray installation for every node. In an HA architecture, Xray uses queue mirroring between the different RabbitMQ nodes. External RabbitMQ instances are not officially supported; the recommended method of installation is to use the bundled RabbitMQ.

Xray has multiple flows, such as scanning, impact analysis, and database sync. Each flow contains multiple steps that are completed by the different Xray microservices. Xray uses RabbitMQ to manage these flows and to track synchronous and asynchronous communication between the microservices.
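Because these flows depend on the bundled RabbitMQ, it can be useful to confirm that the broker is healthy once Xray is running. A minimal check, assuming rabbitmqctl is on the path of the Xray host:

# Show the status of the local RabbitMQ node (and, in HA, the cluster)
rabbitmqctl cluster_status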

Xray also uses Erlang and DB-Util third-party applications. These packages are bundled with the Xray installation.

Xray Network Ports

Xray uses port 8082 by default for external communication.

Xray uses the following internal ports by default for communication with JFrog Platform microservices.

  • Xray Server: 8000

  • Analysis: 7000

  • Indexer: 7002

  • Persist: 7003

  • Router: 8082, 8046, 8047, and 8049

  • RabbitMQ: 4369, 5671, 5672, 15672, and 25672

  • PostgreSQL (if you use the bundled PostgreSQL database): 5432

  • Observability: 8036, and 8037 (gRPC)
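If a firewall runs on the Xray host, only the external port needs to be reachable from outside, while the internal ports should stay restricted to the cluster. A minimal sketch for firewalld; adapt it to your own firewall and zones:

# Open the external communication port
firewall-cmd --permanent --add-port=8082/tcp
firewall-cmd --reload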

Installation Steps

The installation procedure involves the following main steps.

  1. Download Xray according to your required installer type (Docker Compose, RPM, Debian).

  2. Install Xray, either as a single node installation or as a high availability cluster.

    1. Install third-party dependencies (the PostgreSQL database, included in the archive).

    2. Install Xray.

  3. Configure the Xray basic settings; see the system.yaml sketch after this list.

    1. Connect to an Artifactory instance (requires a joinKey and a jfrogUrl).

    2. Optional: Configure the PostgreSQL database connection details if you have set up PostgreSQL as an external database.

  4. Start the service using the start scripts or OS service management.

  5. Check the service log to verify the status of the service.
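For step 3, the basic settings live in the Xray system.yaml file, by default at $JFROG_HOME/xray/var/etc/system.yaml. A minimal sketch with placeholder values; replace the jfrogUrl, joinKey, and database details with your own, and omit the database block if you use the bundled PostgreSQL:

shared:
  # URL of the JFrog Platform (Artifactory) instance
  jfrogUrl: "http://<artifactory-host>:8082"
  security:
    # Join key copied from the Artifactory security settings
    joinKey: "<join-key>"
  # Only needed for an external PostgreSQL database
  database:
    type: "postgresql"
    driver: "org.postgresql.Driver"
    url: "postgres://<db-host>:5432/xraydb?sslmode=disable"
    username: "xray"
    password: "<password>"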

Default Home Directory / $JFROG_HOME

The default Xray home directory is defined according to the installation type. For additional details see the Product Directory Structure page.

$JFROG_HOME represents the JFrog root directory containing the deployed product.
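With $JFROG_HOME set, steps 4 and 5 of the installation typically look like the following sketch on a Linux service installation (the service name and log path assume a default native installation):

# Start Xray as a system service
systemctl start xray.service

# Follow the console log to verify that all services started
tail -f $JFROG_HOME/xray/var/log/console.log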

Xray Single Node Installation

The following installation methods are supported.

Xray HA Installation

The following describes how to set up an Xray HA cluster with two or more nodes. For more information, see the System Architecture.

The following installation methods are supported.

Prerequisites

All nodes within the same Xray HA installation must be running the same Xray version.

Database

Xray HA requires an external PostgreSQL database. Make sure to install the database before proceeding to install the first node. There are several ways to set up PostgreSQL for redundancy, such as Load Balancing and Replication. For more information, see PostgreSQL.
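A minimal sketch for preparing the external database with the psql client, run as the postgres superuser; the database name, user name, and password are illustrative only:

# Create a dedicated Xray user and database (illustrative names)
psql -U postgres -c "CREATE USER xray WITH PASSWORD '<password>';"
psql -U postgres -c "CREATE DATABASE xraydb WITH OWNER=xray ENCODING='UTF8';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE xraydb TO xray;"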

RabbitMQ is automatically installed as part of the Xray installation for every node. In an HA architecture, RabbitMQ uses queue mirroring between the different RabbitMQ nodes, which are set up automatically.

Licensing

Xray HA is supported with an Enterprise License. Each node in the cluster must be activated with a different license.

Network

  • All the Xray HA components (Xray cluster nodes, database server, and RabbitMQ) must be within the same fast LAN.

  • All the HA nodes must communicate with each other through dedicated TCP ports.
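To confirm connectivity before starting the cluster, you can probe the relevant TCP ports from each node. A minimal sketch, assuming the nc (netcat) utility and a peer node with the hypothetical hostname xray-node2:

# Probe the Router and RabbitMQ ports on a peer node
nc -zv xray-node2 8046
nc -zv xray-node2 5672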