ARTIFACTORY: How to migrate from Eventual to Direct upload mechanism

Author: David Livshin
Article number: 000006038
Source: Salesforce
First published: 2024-02-29
Last modified: 2025-01-07
Version: 4

Warning! Do not use this method to migrate between Artifactory versions 7.59 and 7.63.11.
Relevant from Artifactory version 7.7.

JFrog recommends using the direct upload mechanism for connecting your binarystore to cloud providers such as Amazon S3, Google Cloud Storage, and Azure Blob Storage.

The direct upload mechanism enables you to upload binaries without maintaining persistent storage for the eventual directory. It also allows for faster uploads, since there is less contention for disk I/O, particularly when Artifactory is hosted with the same provider as the storage. For more information, see the JFrog documentation.

This guide will help you transition, with minimal downtime, from an eventual-based cloud provider setup to a direct one, for example from the "cluster-s3-storage-v3" template to the "s3-storage-v3-direct" template.

The Challenge

The key to the transition is that the eventual mechanism first saves every file uploaded to Artifactory locally, in the _pre directory and then in the _queue or _add directory, inside $JFROG_HOME/artifactory/data/artifactory/eventual.
Note: The _queue directory is used by the eventual-cluster configuration (HA clusters). The _add and _delete directories are used by the single-node eventual configuration.

When Artifactory establishes a connection to the cloud provider, it transfers the binaries there and removes them from these directories. If we change our binarystore.xml file to the new template while binaries remain in these directories, those binaries will never reach the cloud storage provider.

Solution
One option is simply to block all uploads to the Artifactory instance, allow Artifactory to clear the _queue or _add directories, and then make the switch. This is a viable strategy if you can tolerate the downtime.

However, we offer another solution:
We can set the passThrough property to true in binarystore.xml (under $JFROG_HOME/artifactory/var/etc/artifactory/). This property makes the eventual configuration behave like the direct upload mechanism, saving all new binaries directly on the remote provider and bypassing the eventual mechanism completely. This allows the current _queue or _add directory to drain without new binaries being added to it.
Examples
For the general eventual configuration:

<provider id="eventual" type="eventual">
   <passThrough>true</passThrough>
</provider>


In the cluster-s3-storage-v3 template:

<provider id="eventual-cluster-s3" type="eventual-cluster">
   <passThrough>true</passThrough>
</provider>


In the cluster-google-storage-v2 template:

<provider id="eventual-cluster-google-storage" type="eventual-cluster">
   <passThrough>true</passThrough>
</provider>


In the cluster-azure-blob-storage-v2 template:

<provider id="eventual-cluster-azure-blob-storage-v2" type="eventual-cluster">
   <passThrough>true</passThrough>
</provider>

After making this change, restart Artifactory for it to take effect.

If you have an HA setup, you can make this change on one node, restart it, and wait for it to come back up before restarting the other nodes. The configuration propagates automatically and only needs a restart on each node to be applied.
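To check whether the eventual directories have drained on a node before switching templates, a count like the following can be run on each node. This is a minimal sketch: the path assumes the default JFROG_HOME layout with a /opt/jfrog install root, and the helper name count_pending is our own, not a JFrog tool.

```shell
#!/bin/sh
# Print the number of files still waiting in one eventual subdirectory.
# A missing directory counts as 0 (e.g. _add does not exist on HA clusters).
count_pending() {
  dir="$1"
  if [ -d "$dir" ]; then
    find "$dir" -type f | wc -l
  else
    echo 0
  fi
}

# Default path; adjust JFROG_HOME for your installation.
EVENTUAL_DIR="${JFROG_HOME:-/opt/jfrog}/artifactory/data/artifactory/eventual"

for d in _queue _add; do
  echo "$d: $(count_pending "$EVENTUAL_DIR/$d") file(s) remaining"
done
```

Once both counts report 0 on every node, the eventual queues are clear.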

Conclusion

Once the _queue or _add directory is empty on all nodes, you can switch binarystore.xml to the direct upload template for your provider and restart once more.
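As an illustration, the final direct configuration for S3 might look like the sketch below. The chain template name is the "s3-storage-v3-direct" template mentioned above; the endpoint, bucket, path, region, and credential settings shown here are placeholder values, and the exact provider parameters for your environment should be taken from the JFrog binarystore documentation.

```xml
<config version="2">
    <chain template="s3-storage-v3-direct"/>
    <provider id="s3-storage-v3" type="s3-storage-v3">
        <endpoint>s3.amazonaws.com</endpoint>
        <bucketName>my-artifactory-bucket</bucketName>
        <path>artifactory</path>
        <region>us-east-1</region>
        <useInstanceCredentials>true</useInstanceCredentials>
    </provider>
</config>
```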