Why is Xray not indexing my artifacts? How do I troubleshoot such an incident?

Subject
How to analyze and understand the Xray indexing process

Description
This article will help you troubleshoot and understand the indexing process.

Instructions
We highly recommend starting this troubleshooting via the Xray UI: navigate to the Admin tab → “System Monitoring” and “System Messages” sections.

These sections may give you a useful indication if something went wrong with the overall Xray services, which requests failed, whether it is a disk space issue, etc.
You can also check whether all the services are up and running using the xray.sh script with the ‘status all’ or ‘ps’ flags (depending on the installation type you are running).
For example:
$ xray.sh status all
$ xray.sh ps
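Alternatively, you can check whether the Xray server is responding over its REST API using the ping endpoint (a minimal sketch, assuming a local installation on the default port; adjust the host and port to match your environment):
$ curl -s http://localhost:8000/api/v1/system/ping
A healthy instance should return a "pong" status; any other response (or no response at all) indicates that the service is not fully up.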

 

If you observe an issue with the services, you can start by checking the Xray server.log to see what caused it. If Xray itself is in a healthy state, i.e. the services are up and running, you may need to dive deeper into why indexing is not working properly.
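For example, a quick scan of the Xray logs for errors can help narrow things down (a minimal sketch, assuming the server log is named xray_server.log; the exact file names and location vary with the Xray version and installation type, so adjust the path accordingly):
$ grep -iE "error|fatal" /path/to/xray/logs/xray_server.log | tail -n 50
This lists the 50 most recent error and fatal entries in the server log.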
At this point, you can start by checking the RabbitMQ queues. RabbitMQ holds and manages all the messages in Xray's queues. These include event, index, persist, and analysis messages. Event messages include events sent from connected Artifactory instances, and index messages drive the process of indexing artifacts.

In order to check the queues in RabbitMQ, please follow the steps below:
1. Access the queues through the RabbitMQ console at: http://localhost:15672/#/queues
In case you are not able to access the RabbitMQ UI, you can try to create an SSH tunnel using: ssh -L15672:127.0.0.1:15672 root@<machine ip>
2. Check the RabbitMQ event, index, or persist queues for messages.
Once you are connected, this will allow you to understand whether you have a bottleneck in one of the services or whether there are any operation failure messages.
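If the RabbitMQ management UI is not reachable, the queue depths can also be listed from the command line on the Xray host (a minimal sketch; rabbitmqctl must be run with sufficient privileges, and its exact location depends on your installation):
$ rabbitmqctl list_queues name messages messages_ready messages_unacknowledged
A queue whose message count keeps growing usually indicates that the corresponding consuming service is slow, stuck, or down.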

Based on the queue details, you will be able to narrow the search down to the relevant log and get a better understanding of which part of the process Xray failed in.

Another possible reason for Xray not indexing is that Xray has reached the configured limit for disk usage (80% by default).
To resolve this, increase the disk usage limit by raising the value of maxDiskDataUsage in Xray's xray_config.yaml configuration file.
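For illustration, the relevant entry in xray_config.yaml could look like the following (a sketch only; 85 is an example value, and the Xray services may need to be restarted for the change to take effect):
maxDiskDataUsage: 85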

Moreover, you can also investigate Xray indexing from the Artifactory side by adding the following loggers to ARTIFACTORY_HOME/etc/logback.xml:

<appender name="xray" class="ch.qos.logback.core.rolling.RollingFileAppender">
<File>${artifactory.home}/logs/xray.log</File>
<encoder>
<pattern>%date ${artifactory.contextId}[%thread] [%-5p] (%-20c{3}:%L) – %m%n</pattern>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<FileNamePattern>${artifactory.home}/logs/xray.%i.log</FileNamePattern>
<maxIndex>13</maxIndex>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<MaxFileSize>10MB</MaxFileSize>
</triggeringPolicy>
</appender>
 
<appender name="xrayDao" class="ch.qos.logback.core.rolling.RollingFileAppender">
<File>${artifactory.home}/logs/xrayDao.log</File>
<encoder>
<pattern>%date ${artifactory.contextId}[%thread] [%-5p] (%-20c{3}:%L) – %m%n</pattern>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<FileNamePattern>${artifactory.home}/logs/xrayDao.%i.log</FileNamePattern>
<maxIndex>13</maxIndex>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<MaxFileSize>10MB</MaxFileSize>
</triggeringPolicy>
</appender>
 
<logger name="org.artifactory.addon.xray">
<level value="trace" />
<appender-ref ref="xray"/>
</logger>
 
<logger name="org.artifactory.storage.db.xray.dao">
<level value="trace" />
<appender-ref ref="xrayDao"/>
</logger>
 
* No restart is required for the loggers to take effect.
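Once the loggers are in place, the Xray-related activity on the Artifactory side can be followed in the new log files, for example (assuming the default ARTIFACTORY_HOME log location):
$ tail -f $ARTIFACTORY_HOME/logs/xray.log $ARTIFACTORY_HOME/logs/xrayDao.log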
 

If the above does not help, you may contact support@jfrog.com for further assistance.