ARTIFACTORY: Troubleshooting Artifactory Upload Slowness Caused by NFS Disk's _pre Folder


Products
JFrog_Artifactory
Content Type
Use_Case
AuthorFullName__c
David Shin
articleNumber
000006643
FirstPublishedDate
2025-09-29T12:34:25Z
lastModifiedDate
2025-09-29
VersionNumber
1
Introduction 

Use Case: Artifactory Upload Slowness with Slow NFS Main Storage

Notice:
This article applies to Artifactory instances that use a slow NFS main filestore chained with a fast local/SAN cache-fs provider.


Suppose you have identified a performance bottleneck during high-volume small-file uploads to Artifactory by using the nfsiostat tool on the Artifactory server.


The slow upload times are linked to files lingering in the _pre directory of the main filestore, which resides on a slow Network File System (NFS) mount.
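To confirm that the NFS mount is the bottleneck, sample its latency while uploads are running. The commands below are a minimal sketch, assuming the example mount point used in this article; the sampling interval and the latency values you treat as problematic depend on your environment.

# Report NFS client statistics for the slow mount every 5 seconds
nfsiostat 5 /mnt/slow

# Watch the write statistics: consistently high "avg RTT (ms)" and "avg exe (ms)"
# values while uploads are in progress point to the NFS mount as the bottleneck.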


In this chained binary provider setup, both the main (file-system) and the cache (cache-fs) layers create their own _pre folders for temporary file staging. When skipDuringUpload is set to false (the default), Artifactory performs concurrent write operations to both temporary locations. The concurrent write to the slow NFS mount for the main provider's _pre folder introduces a significant bottleneck and redundancy. Consider the following example layout:
  • Main Storage (Slow NFS): /mnt/slow/artifactory/data (file-system provider).
  • Cache Storage (Fast SAN/NAS): /mnt/fast/artifactory/cache (cache-fs provider).
The goal is to move the main provider's temporary staging (the slow _pre) to the faster disk and set <skipDuringUpload>true</skipDuringUpload> to prevent the redundant cache write, as sketched in the configuration below.
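For reference, a chained setup of this kind is defined in binarystore.xml (located under $JFROG_HOME/artifactory/var/etc/artifactory/ in Artifactory 7.x). The snippet below is a minimal sketch of a custom cache-fs over file-system chain using the example paths above; the config version, provider IDs, and paths shown here are illustrative and may differ in your installation.

<config version="2">
    <chain>
        <!-- Fast cache-fs layer wraps the slow NFS file-system main store -->
        <provider id="cache-fs" type="cache-fs">
            <provider id="file-system" type="file-system"/>
        </provider>
    </chain>
    <provider id="cache-fs" type="cache-fs">
        <maxCacheSize>7000000000</maxCacheSize>
        <cacheProviderDir>/mnt/fast/artifactory/cache</cacheProviderDir>
    </provider>
    <provider id="file-system" type="file-system">
        <fileStoreDir>/mnt/slow/artifactory/data</fileStoreDir>
    </provider>
</config>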


Resolution: Leveraging Faster Cache Storage for _pre and Optimizing Writes 

To resolve the bottleneck, consolidate the temporary staging (_pre) onto the fast disk and set skipDuringUpload to true on the cache-fs provider.


Prerequisite: Eliminate Redundant Write Overhead by Setting skipDuringUpload to true

This step prevents Artifactory from performing a redundant write operation to the cache-fs temporary folder during upload, regardless of where the main _pre folder is located.

Locate the cache-fs provider in binarystore.xml and make the following change:
<provider id="cache-fs" type="cache-fs">
    <maxCacheSize>7000000000</maxCacheSize>
    <cacheProviderDir>/mnt/fast/artifactory/cache</cacheProviderDir>
    <skipDuringUpload>true</skipDuringUpload>
</provider>

Method 1: Using the <tempDir> Configuration (Recommended for Artifactory 7.98.2+)
1. Modify binarystore.xml: Add the <tempDir> configuration under the main file-system provider, pointing it to a directory on your fast storage.
<provider id="file-system" type="file-system">
    <fileStoreDir>/mnt/slow/artifactory/data</fileStoreDir>
    <tempDir>/mnt/fast/artifactory/cache/temp-pre</tempDir> 
</provider>

 

2.   Restart Artifactory. The main provider's temporary staging will now use the fast disk location specified in <tempDir>.
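Combined with the prerequisite above, the relevant providers in binarystore.xml would look roughly as follows (a sketch using the example paths from this article):

<provider id="cache-fs" type="cache-fs">
    <maxCacheSize>7000000000</maxCacheSize>
    <cacheProviderDir>/mnt/fast/artifactory/cache</cacheProviderDir>
    <skipDuringUpload>true</skipDuringUpload>
</provider>
<provider id="file-system" type="file-system">
    <fileStoreDir>/mnt/slow/artifactory/data</fileStoreDir>
    <tempDir>/mnt/fast/artifactory/cache/temp-pre</tempDir>
</provider>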

Method 2:  Creating a Symbolic Link (For Older Versions)
This method overrides the default location of the main provider's _pre folder using an operating system symlink.
a. Shut Down the Artifactory Service.
b. Remove the Existing Slow _pre Folder: Delete the slow _pre folder from the main file store location:
rm -rf /mnt/slow/artifactory/data/_pre
c. Create a New _pre Directory on Fast Storage (if needed): Create the target directory for the symlink on the fast mount:
mkdir -p /mnt/fast/artifactory/cache/fast-pre-link
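Make sure the Artifactory service user can write to this directory. Assuming the default artifactory user and group (adjust to match your installation):

chown artifactory:artifactory /mnt/fast/artifactory/cache/fast-pre-link
chmod 750 /mnt/fast/artifactory/cache/fast-pre-link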

 

d. Create a Symbolic Link (Symlink): Link the new fast directory to the main file store’s expected _pre location:
ln -s /mnt/fast/artifactory/cache/fast-pre-link /mnt/slow/artifactory/data/_pre

 

e. Verify the Symlink: Confirm that the symlink was created successfully:
ls -al /mnt/slow/artifactory/data

# Output should show: _pre -> /mnt/fast/artifactory/cache/fast-pre-link

 

f. Start the Artifactory Service. Uploads will now stage their temporary files in the fast location and, combined with the skipDuringUpload prerequisite above, this eliminates the write bottleneck.
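To verify the change, watch the fast staging directory while a test upload is in progress. The commands below are a sketch; the repository name generic-local, the credentials, and the port are placeholders for your own values.

# In one shell, watch the fast staging directory
watch -n 1 'ls -al /mnt/fast/artifactory/cache/fast-pre-link'

# In another shell, push a test artifact
curl -u <user>:<password> -T ./test.bin \
  "http://<artifactory-host>:8082/artifactory/generic-local/test.bin"

# Temporary files should appear briefly on the fast disk during the upload, and
# nfsiostat should no longer show write latency spikes on the slow NFS mount.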