How to migrate from “s3” to “cluster-s3”?

Ariel Kabov
2019-09-01 06:44

Relevant Versions: Artifactory 5 & 6.

This article walks through the steps and procedure for switching from an “s3” chain template to a “cluster-s3”-like chain template.
For an in-depth explanation about them and their differences, head to:
https://jfrog.com/knowledge-base/what-is-the-difference-between-s3-and-cluster-s3-filestore-chains/

*This article focuses on S3 as an example, but the same information also applies to the differences between “google-storage” & “cluster-google-storage”, “azure-blob-storage” & “cluster-azure-blob-storage”, and “s3-storage-v3” & “cluster-s3-storage-v3”.

The main difference between the “s3” and “cluster-s3” is that “s3” uses the Eventual provider, while “cluster-s3” uses the Eventual-Cluster provider. To be more accurate, the steps mentioned below are relevant to all cases when migrating from a chain that uses the Eventual provider to a chain that uses the Eventual-Cluster provider.
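As a reference point, a minimal binarystore.xml for the target chain might look like the sketch below. The bucket name, endpoint, and credentials are placeholders, and your provider may require additional parameters; consult the JFrog filestore documentation for the full list.

```xml
<config version="2">
    <!-- Switch the chain template from "s3" to "cluster-s3" -->
    <chain template="cluster-s3"/>
    <provider id="s3" type="s3">
        <endpoint>http://s3.amazonaws.com</endpoint>
        <bucketName>my-bucket</bucketName>
        <identity>PLACEHOLDER_ACCESS_KEY</identity>
        <credential>PLACEHOLDER_SECRET_KEY</credential>
    </provider>
</config>
```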

If you are transitioning from a standalone server to an HA cluster, it will be easier to first migrate to “cluster-s3” and only then add the additional nodes to the cluster.

Migrating a standalone Artifactory server

  1. Make sure Artifactory Garbage Collection will not run during this procedure.
  2. While Artifactory is still running, update the $ARTIFACTORY_HOME/etc/binarystore.xml with the desired final configuration.
  3. Take the server off the network to block new uploads.
  4. Monitor the “_add” directory (by default under $ARTIFACTORY_HOME/data/eventual/_add/).
  5. Once the “_add” directory is empty, restart Artifactory.
  6. Return server to network and restore GC configurations.
  7. Confirm that both $EVENTUAL_DIR/_add/ and $EVENTUAL_DIR/_delete/ are empty. If either is not empty, run the “Convert script” referenced below.
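The emptiness checks in steps 4 and 7 can be sketched in shell. The demo below builds a stand-in directory layout instead of touching a real installation; the paths and the leftover file are illustrative only, so point EVENTUAL_DIR at your actual eventual directory in practice.

```shell
# Stand-in for $ARTIFACTORY_HOME/data/eventual (an assumption; use your real path).
EVENTUAL_DIR=$(mktemp -d)
mkdir -p "$EVENTUAL_DIR/_add" "$EVENTUAL_DIR/_delete"
touch "$EVENTUAL_DIR/_add/leftover-event"   # simulate one unprocessed upload event

# Count pending events; the migration is clean only when both counts are zero.
add_count=$(find "$EVENTUAL_DIR/_add" -type f | wc -l)
del_count=$(find "$EVENTUAL_DIR/_delete" -type f | wc -l)

if [ "$add_count" -eq 0 ] && [ "$del_count" -eq 0 ]; then
  echo "eventual directories are empty - migration is clean"
else
  echo "leftover events found - run the eventual2cluster convert script"
fi
```

Running the same two `find | wc -l` checks in a loop is a simple way to monitor the “_add” directory while waiting for it to drain.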

Migrating an HA Artifactory cluster

  1. Make sure Artifactory Garbage Collection will not run during this procedure.
  2. While Artifactory is still running, update the $ARTIFACTORY_HOME/etc/binarystore.xml of the primary node with the desired final configuration. Confirm the configuration change has successfully propagated to all other cluster nodes.
  3. For each node, if configured, remove the “artifactory.ha.data.dir” entry from the $ARTIFACTORY_HOME/etc/ha-node.properties.
  4. Take all member nodes off the network and shut them down.
  5. Take the primary node off the network to block new uploads.
  6. Monitor the “_add” directory (at the previously configured $artifactory.ha.data.dir/eventual/_add/).
  7. Once the “_add” directory is empty, restart the primary node.
  8. Return the primary node to the network and restore GC configurations.
  9. Start the nodes one at a time, adding each back to the network.
  10. Confirm that both $artifactory.ha.data.dir/eventual/_add/ and $artifactory.ha.data.dir/eventual/_delete/ are empty. If either is not empty, run the “Convert script” referenced below (on any of the nodes).
  11. The shared mount is probably no longer needed; you can remove or disable it.
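Step 3 (removing the “artifactory.ha.data.dir” entry) can be done with a one-line sed per node. The sketch below works on a sample ha-node.properties whose contents are purely illustrative; on a real node you would edit $ARTIFACTORY_HOME/etc/ha-node.properties instead.

```shell
# Create a sample ha-node.properties (contents are illustrative, not from a real node).
cat > ha-node.properties <<'EOF'
node.id=art1
primary=true
artifactory.ha.data.dir=/mnt/shared/artifactory/ha-data
EOF

# Keep a backup, then delete the artifactory.ha.data.dir line in place.
cp ha-node.properties ha-node.properties.bak
sed -i '/^artifactory\.ha\.data\.dir=/d' ha-node.properties

cat ha-node.properties
```

The backup copy makes it easy to roll back a node if you need to revert to the shared-mount setup.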

The Convert Script

Prior to the migration, when Artifactory used the “eventual” provider, it used two directories through which events were sent to the Cloud provider: $EVENTUAL/_add/ and $EVENTUAL/_delete/.

Once “eventual-cluster” is used, Artifactory no longer checks the above directories, but rather the $EVENTUAL/_queue/ directory. If there are leftovers in either “_add” or “_delete”, we need to move them to the “_queue” directory so that these events are processed.
The main difficulty here is that the expected file name format is different.

To overcome this, you can run the script below, which moves/copies the files to the target directory in the expected format.
After running the script, Artifactory will automatically pick up the events from the “_queue” folder. No additional restart is required.

eventual2cluster
Important! Prior to running the script, make sure to review the README notes.