Relevant Versions: Artifactory 5 & 6.
Artifactory comes with a predefined set of default configurations and parameters.
If you believe your Artifactory server is under-utilized, or you want it to handle more processes at a given moment, you can tune Artifactory to support a higher load.
While it is always possible to scale horizontally by adding nodes to your HA cluster, here we will focus on vertical scaling.
Recommendation: The more crucial Artifactory becomes in your organization, the more crucial it is to have a monitoring system watching over Artifactory.
You may read further at Monitoring and Optimizing Artifactory Performance.
To modify the JVM memory allocation, please refer to the corresponding instructions for Linux, Solaris or Mac, or Windows, and be sure to follow our hardware recommendations.
When increasing the JVM memory allocation, make sure you leave a few GBs of RAM to the host OS.
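As an illustration only (the heap values below are examples, not recommendations; adjust them to your hardware and the file location to your installation type), on Linux the JVM memory settings are typically applied via the JAVA_OPTIONS variable in $ARTIFACTORY_HOME/bin/artifactory.default:

```shell
# Example values only: 4 GB initial / 16 GB max heap on a 24 GB host,
# leaving several GBs of RAM for the host OS.
export JAVA_OPTIONS="-server -Xms4g -Xmx16g -Xss256k -XX:+UseG1GC"
```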
We can alter the maximum number of connections Artifactory can open to the DB. This is configured in the $ARTIFACTORY_HOME/etc/db.properties configuration file, for example:
pool.max.active = 300
Important: The above parameters are used by Artifactory, Access, and the database locking mechanism.
This means that, with the above example, a single Artifactory server will open up to 900 DB connections.
Therefore, we need to make sure the DB can accommodate the total number of connections all Artifactory nodes can open.
As a rule of thumb, we will require the DB to support a number of connections based on:
Total # of connections = (number of nodes) * 3 * (pool.max.active) + 50;
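As a quick sanity check of the rule of thumb above (the node count and pool size below are illustrative, and the helper name is ours):

```python
def required_db_connections(nodes: int, pool_max_active: int) -> int:
    """Rule of thumb from above: each node's three consumers
    (Artifactory, Access, DB locking) can each open up to
    pool.max.active connections, plus a margin of 50."""
    return nodes * 3 * pool_max_active + 50

# Illustrative: a 3-node HA cluster with pool.max.active = 300
print(required_db_connections(3, 300))  # -> 2750
```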
Tomcat HTTP Connections / Threads
Artifactory manages several thread pools. One of them is the HTTP thread pool, which limits the number of concurrent incoming HTTP connections Artifactory can handle.
This is configured in $ARTIFACTORY_HOME/tomcat/conf/server.xml. There are separate HTTP connection pools for Artifactory and Access.
Default configuration:
<Connector port="8081" sendReasonPhrase="true" relaxedPathChars='' relaxedQueryChars='' maxThreads="200"/>
<Connector port="8040" sendReasonPhrase="true" maxThreads="50"/>
Example of increased values:
<Connector port="8081" sendReasonPhrase="true" relaxedPathChars='' relaxedQueryChars='' maxThreads="1024"/>
<Connector port="8040" sendReasonPhrase="true" maxThreads="250"/>
The Connector on port "8081" is for Artifactory; the Connector on port "8040" is for Access.
When updating the Access maxThreads, it is also required to update the $ARTIFACTORY_HOME/etc/artifactory.system.properties file with:
artifactory.access.client.max.connections = <VALUE>
Artifactory async Thread Pool
One of the most important thread pools in Artifactory is the “async” thread pool. This one defines the number of processes that can run in parallel.
In addition to configuring the total number of parallel processes, we can also modify the maximum number of processes that can be queued.
This is configured in the $ARTIFACTORY_HOME/etc/artifactory.system.properties file.
Default values:
artifactory.async.corePoolSize = (4 * Runtime.getRuntime().availableProcessors())
(i.e., 4 times the machine's CPU cores)
artifactory.async.poolMaxQueueSize = 10000
Example of increased values:
artifactory.async.corePoolSize = 128
(shouldn't be set to more than 8x the machine's CPU cores)
artifactory.async.poolMaxQueueSize = 100000
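The sizing guidance above (default of 4x the CPU cores, and a recommended ceiling of 8x) can be sketched as follows; the function name and the hard cap on the override are our illustration, not an Artifactory API:

```python
import os

def async_core_pool_size(override=None):
    """Default is 4x the CPU cores; per the guidance above,
    an explicit override should not exceed 8x the cores,
    so this sketch caps it there."""
    cores = os.cpu_count() or 1
    if override is None:
        return 4 * cores
    return min(override, 8 * cores)

# e.g. on a 16-core machine the default would be 64,
# and a requested override of 200 would be capped at 128.
```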
By default, the Artifactory Garbage Collection is configured to run every 4 hours.
The GC is a very resource-consuming operation; if you see a correlation between GC runs and slow performance, we recommend configuring the Artifactory GC (not the JVM GC) to run during off-peak hours.
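For example, assuming you schedule the garbage collection via the cron expression field in the Maintenance settings (Admin > Advanced > Maintenance in Artifactory 5/6), a quartz expression such as the following (illustrative) would run GC once a day at 2 AM instead of every 4 hours:

```text
0 0 2 * * ?
```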
Artifactory manages a separate connection pool for outgoing HTTP requests.
This connection pool is limited by default to 50 concurrent connections, and up to 50 concurrent connections per unique route.
This is configured in the $ARTIFACTORY_HOME/etc/artifactory.system.properties file.
Default values:
artifactory.http.client.max.total.connections = 50
artifactory.http.client.max.connections.per.route = 50
Example of increased values:
artifactory.http.client.max.total.connections = 100
artifactory.http.client.max.connections.per.route = 80
Artifactory supports different backend storage configurations to store the Artifactory filestore.
For scenarios where the configured storage is not local, using a large cache-fs provider mounted on local storage can provide a performance benefit.
Cached files are served quickly from the local disk, so a large cache-fs provider translates into a performance gain.
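As an illustrative sketch only (the chain template, cache size, and path below are assumptions for an S3-backed setup; consult the Filestore Configuration documentation for your storage type), the cache-fs provider is declared in $ARTIFACTORY_HOME/etc/binarystore.xml:

```xml
<!-- Illustrative binarystore.xml: the s3 chain template already
     includes a cache-fs provider; here we enlarge its local cache. -->
<config version="2">
    <chain template="s3"/>
    <provider id="cache-fs" type="cache-fs">
        <!-- Example values: ~500 GB cache on a fast local disk -->
        <maxCacheSize>500000000000</maxCacheSize>
        <cacheProviderDir>/mnt/fast-disk/artifactory-cache</cacheProviderDir>
    </provider>
</config>
```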