Hello everyone, and welcome to another video. I'm Patrick with JFrog Support, and today I would like to cover how you can migrate Artifactory's data and minimize downtime.
In today's video, we will be going over two different use cases: one where you migrate your data from one self-hosted Artifactory deployment to another self-hosted Artifactory deployment, and one where you migrate from a self-hosted Artifactory to the JFrog Cloud. Both involve the JFrog CLI, and both are intended to minimize downtime by doing a network cutover.
We will be doing today's demo on the self-hosted to cloud migration. First, let's go over some of the prerequisites and discuss the self-hosted to self-hosted migration.
For both the self-hosted and cloud migrations, you will need an installation of the JFrog CLI, your original source Artifactory, and the new, empty target Artifactory. The key difference is the target: it is either going to be a new self-hosted host or a new Artifactory Cloud deployment. If it's a self-hosted Artifactory, each of these deployments needs a unique license key. Please contact your sales representative if you need a new key.
You also need to ensure, in both cases, that there is a network connection between the source and target Artifactory deployments. When you are migrating from a self-hosted instance, you need to export the Artifactory configuration. There is a dedicated menu for this in the Administration panel. This will export all of the repository settings, users, groups, and security permissions, as well as further customization settings. Make sure you check both the "Exclude content" and "Exclude metadata" checkboxes; this will minimize the size of the export. On the target, import this bundle to create all of these items on the new, empty target installation.
Next, we need to install the Cloud Migration Data Transfer Kit, even for a self-hosted to self-hosted migration. There are two plugin files: data-transfer.jar and data-transfer.groovy. They need to go in the Artifactory home plugins directory, found under the etc/artifactory folder. Make sure to put the JAR file into the lib subdirectory of the plugins directory; if there isn't a lib directory yet, create it. Then make sure both files and the lib folder are owned by the Artifactory Linux user.
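As a rough sketch, the plugin installation on a default Linux archive install could look like the following. The JFROG_HOME path and the artifactory user and group names are assumptions for illustration; adjust them to match your own installation.

```shell
# Assumed default path for a Linux archive install; adjust to your environment
PLUGINS_DIR=/opt/jfrog/artifactory/var/etc/artifactory/plugins

# Create the lib subdirectory for the JAR if it does not exist yet
sudo mkdir -p "$PLUGINS_DIR/lib"

# The Groovy file goes in the plugins directory, the JAR in plugins/lib
sudo cp data-transfer.groovy "$PLUGINS_DIR/"
sudo cp data-transfer.jar "$PLUGINS_DIR/lib/"

# Both files and the lib folder must be owned by the Artifactory Linux user
sudo chown -R artifactory:artifactory "$PLUGINS_DIR"
```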
Finally, you can load the plugins by using this cURL command here, which causes Artifactory to load the plugins. Once you have done that, you are ready for the file transfer. You can add both the source Artifactory and the target Artifactory to the JFrog CLI. Once they have been set up this way, you call the transfer-files command to initiate the file transfer using that information. This will take some time, especially on the first run. Once that first full synchronization completes, you can rerun the transfer-files command to synchronize any differences. Details about the re-sync come later in this video.
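Sketched out, these steps might look like this. The localhost URL, the admin credentials, and the server IDs source-server and target-server are placeholders for illustration, not values from the video.

```shell
# Reload user plugins on the source Artifactory so it picks up the
# data-transfer plugin (placeholder credentials and URL)
curl -u admin:password -X POST "http://localhost:8081/artifactory/api/plugins/reload"

# Register both instances with the JFrog CLI; the IDs are examples,
# and the interactive prompts will ask for each server's URL and credentials
jf c add source-server
jf c add target-server

# Kick off the file transfer from source to target
jf rt transfer-files source-server target-server
```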
Let's move on to a self-hosted to cloud migration. In order to begin this migration, you first need to head to your Cloud portal. Over in the Settings menu, you will find the "Transfer Artifactory Configuration" panel. There is a warning on this page that I will need to discuss.
When you activate this cloud migration setting, it opens the possibility that, during the import, the configuration on your Artifactory Cloud instance will be wiped. There is a way to merge configurations, but the risk exists, and that is why this warning is in place.
Once you acknowledge the warning, you need to select this green radio button. Behind the scenes, this scales the cloud infrastructure, which is normally a highly available cluster, down to just one running pod. This pod then has a special "Allow Migrations" setting which enables the transfer-config command. We have to reboot this single pod in order to complete the migration.
After you have set up the cloud Artifactory to accept the migration, you then need to set up the CLI. Just like in a self-hosted migration, you add the source server and target server by following the prompts in the JFrog CLI. Then we need to run the transfer-config pre-checks. This pre-check step is highly recommended. It will not transfer your configurations or wipe out anything in the cloud just yet. Instead, it will test your source server's configuration and alert you to any problems before transfer-config runs for real.
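As an illustration, assuming the placeholder CLI server IDs source-server and target-server, the pre-check run looks roughly like this:

```shell
# Run only the pre-checks; this inspects the source configuration
# and reports problems without transferring or wiping anything
jf rt transfer-config source-server target-server --prechecks
```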
Assuming there are no warnings to correct from the pre-checks phase, you are safe to move on to the transfer-config command itself. Remember that when you run this command, an irreversible migration takes place in the cloud, wiping any previously existing settings. As mentioned before, if you need to merge configurations because there are already important repositories or users in the cloud, look in our documentation for the transfer-config-merge command.
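For illustration, with the same placeholder server IDs as before:

```shell
# Transfer the configuration for real; this wipes any existing
# settings on the target cloud instance
jf rt transfer-config source-server target-server

# If the target already holds repositories or users you need to keep,
# the merge variant combines the two configurations instead of replacing
jf rt transfer-config-merge source-server target-server
```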
We then need to install the same plugin as done in a self-hosted to self-hosted migration. We add the data-transfer.jar and data-transfer.groovy into the source Artifactory's plugins directory. As before, you need to make sure it is owned by the Artifactory Linux user and group, and then we reload the plugins via the REST API, just like the self-hosted migration.
After the configuration has been migrated, it is time to migrate the files. Just like on-prem, this involves the transfer-files command with the source server and target server arguments. You may run the pre-checks here as well, to ensure that the on-premises data is not going to cause any issues during the migration.
When you run the transfer, you can add an include or exclude pattern in order to move certain repositories independently of one another. When the command runs as a foreground process, you will see a loading bar. There is also a background option: the CLI will manage the migration, which runs behind the scenes, independent of your login to the Linux terminal. You can check the migration's status, start it, and stop it using the CLI.
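A sketch of those variations, again with placeholder server IDs and repository names; the exact flags are assumptions based on current JFrog CLI releases, so check the CLI documentation for your version.

```shell
# Migrate only selected repositories (semicolon-separated, wildcards allowed)
jf rt transfer-files source-server target-server --include-repos "npm-local;generic-*"

# Check on a transfer that is running in the background
jf rt transfer-files --status

# Stop a running transfer
jf rt transfer-files --stop
```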
Here is some detail into how the file transfer works: The CLI will push files that are found in each repository one by one. For each of these repositories, there are three phases the CLI goes through.
First, it will attempt to push all of the files found in the repository to the cloud Artifactory. Once that static list has been migrated, another run of the transfer-files command will push files that have been created or modified after step one. The CLI will be tracking files it has already moved up to the cloud.
There is also a third phase, which retries any files that logged an error during the first or second attempt. This also moves any previously untransferred files up to the cloud. Because of this behavior, after the first sync completes, you can simply rerun the command to upload any new files that have been added since the command was first started.
I have some pre-recorded demos of the CLI migration technique. Let's take a look.
In this quick demo, I'll be demonstrating transferring your configuration files from an on-prem Artifactory up to a cloud Artifactory. First, I want to demonstrate the pre-check step. What this step does is check all of the configurations in my on-prem Artifactory to make sure the cloud transfer is going to work. Do note that this pre-check command, while important and necessary, doesn't catch all of the issues that can arise in a transfer of configurations. It's just checking the basics.
Since everything looks good though, it's time to move things up to the cloud. I'm going to be transferring from my source server to my target server. My source server is just a pretty simple, small Artifactory I've used for other demos and other classes. Let's see if I can move this stuff to the cloud.
That'll be a yes. Here I am, ready to transfer. Here comes the import. Behind the scenes, this performs a standard Artifactory system export, uploads the configuration bundle to the cloud, and then runs a system import there.
As you can see, it has imported all of my repository configurations. Let me head over to my cloud Artifactory. There we go. So I've been able to log in here, and you can see that, between this Artifactory and its repositories, I have a virtual NPM, a couple of locals, and a couple of remotes. This matches my on-prem Artifactory — see, I've got the locals here and the remotes. So that's what the config transfer does. It takes all of this information and uploads it to the cloud.
Hello again. In this part two, we will be investigating the data transfer phase of the JFrog CLI. As you can see on my screen right here, I have the pre-checks command loaded up. What this check does is similar to the config transfer pre-check, where it's going to verify the basics and make sure that the connection between my on-prem Artifactory and the cloud is working.
This is important because, unlike the transfer-config, which moved the configurations up to the cloud essentially through the CLI, this is going to happen almost entirely on the source Artifactory system. The source Artifactory, thanks to the user plugin installed in an earlier step, is going to move the data up to the cloud for us, with commands coming from the CLI.
I hope that all made sense. Let's go ahead and begin the transfer.
Now that the pre-checks have succeeded, I'm going to go ahead and remove that tag,