Introduction
When deploying Artifactory HA using a Helm chart, you have two primary options:
1. Deploy using the artifactory-ha chart
2. Deploy using the artifactory chart with artifactory.replicaCount set to a value greater than one
Both methods create an Artifactory HA cluster that you can easily scale up or down, as long as you possess the necessary licenses. However, if you start with the artifactory chart and set replicaCount to 1, intending to transition to an HA cluster later, you may encounter some challenges.
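For reference, the two options look roughly like this (release name and namespace are placeholders; this assumes the JFrog chart repository at https://charts.jfrog.io has been added):

$ helm repo add jfrog https://charts.jfrog.io
# Option 1: the dedicated HA chart
$ helm install <release> jfrog/artifactory-ha -n <namespace>
# Option 2: the artifactory chart with more than one replica
$ helm install <release> jfrog/artifactory --set artifactory.replicaCount=2 -n <namespace>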
The issue
It's important to note that deploying via the artifactory chart with artifactory.replicaCount set to 1 results in a standalone instance.
If you attempt to scale up the replicas using the kubectl scale command to convert the deployment into an HA configuration, you may face errors such as:
Stopping Artifactory since the local server is running as PRO/OSS but found other servers in the registry.
This occurs because the scale command tries to launch another standalone pod, leading to two standalone instances attempting to connect to the same database.
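For illustration, this is the kind of command that triggers the error above (the StatefulSet name depends on your release name and is a placeholder here):

$ kubectl scale statefulset <statefulset_name> --replicas=2 -n <namespace>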
To successfully scale up or down, you'll first need to update the replicaCount in your values.yaml to a value greater than 1 and apply it before using the scale command, as sketched below.
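A sketch of the correct order of operations, with placeholder names:

# 1. Set artifactory.replicaCount to a value greater than 1 in values.yaml, then apply it:
$ helm upgrade <release> jfrog/artifactory -f values.yaml -n <namespace>
# 2. Only then is kubectl scale safe to use (within your licensed node count):
$ kubectl scale statefulset <statefulset_name> --replicas=3 -n <namespace>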
Additionally, when operating a standalone instance with a replica count of 1, you typically don't need to worry about which binary store provider is in use. However, scaling to an HA cluster requires careful selection of the binary provider. Failing to choose an appropriate binary provider may result in intermittent "500 binary not found" errors and build failures.
Resolution
To avoid these errors, check that you have updated the following settings in your values.yaml, then run helm upgrade to upgrade your release:
artifactory:
  replicaCount: 2  # Any value greater than 1 will designate this as an HA cluster
  persistence:
    # Must be one of the provider types compatible with HA. Note that this type
    # will not function if you are overriding the binarystore.xml configuration
    # in values.yaml or via a secret.
    type: s3-storage-v3-direct
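For example, assuming the release was installed from the artifactory chart (release name and namespace are placeholders):

$ helm upgrade <release> jfrog/artifactory -f values.yaml -n <namespace>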
For an HA cluster, your options include:
- file-system (only if all HA pods share the same PVC)
- cluster-file-system
- nfs (essentially file-system with one shared PVC)
- google-storage-v2-direct (Recommended for Google Cloud Storage)
- cluster-google-storage-v2
- s3-storage-v3-direct (Recommended for AWS S3)
- cluster-s3-storage-v3
- azure-blob-storage-direct
- azure-blob-storage-v2-direct (Recommended for Azure Blob Storage)
- cluster-azure-blob-storage
If you are using your own custom binary provider instead of one of the pre-defined types above, make sure the provider supports an HA cluster.
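For reference, a custom provider chain is typically supplied by overriding binarystore.xml. A minimal sketch using one of the pre-defined cluster templates, where the bucket details are placeholders and the exact values key can vary between chart versions:

artifactory:
  persistence:
    binarystoreXml: |
      <config version="2">
        <chain template="cluster-s3-storage-v3"/>
        <provider id="s3-storage-v3" type="s3-storage-v3">
          <endpoint>s3.amazonaws.com</endpoint>
          <bucketName>my-bucket</bucketName>
          <region>us-east-1</region>
        </provider>
      </config>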
After running helm upgrade, the following command should show that JF_SHARED_NODE_HAENABLED is set to true:
$ kubectl describe pod <pod_name> -n <namespace> | grep -A 1 "JF_SHARED_NODE_HAENABLED"
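The output should include a line similar to the following (spacing may vary between kubectl versions):

JF_SHARED_NODE_HAENABLED:  true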