ARTIFACTORY: How to attach NFS mount point to the Artifactory helm deployment to configure filestore and backup directory

Author: Prajyot Pawase
Article number: 000005414
Source: Salesforce
First published: 2022-09-16
Last modified: 2024-03-10
Version: 4

To configure Artifactory to use an NFS share for the checksum-based filestore and for backups when it is deployed on a Kubernetes cluster with the Helm charts, we have to create the PV and PVC manually in the Kubernetes cluster, pointing them at the NFS share.

Then we need to map the created PVCs to the Artifactory Helm deployment using the values.yaml.

Below are the steps to mount the NFS share into the Artifactory Helm deployment. In this example, we have used a VM in GCP with CentOS.

Step 1: Create a GCP machine with CentOS (you may use any OS type).

Step 2: Log in to the server and install the below package for the NFS server using the yum command (it is also possible to use an existing NFS server).

yum install -y nfs-utils

Step 3: Once the packages are installed, enable and start NFS services.
systemctl start nfs-server rpcbind
systemctl enable nfs-server rpcbind
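
Optionally, you can confirm that both services came up before proceeding (this check is not strictly required):

systemctl is-active nfs-server rpcbind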

Step 4: Create an NFS share on the server and allow the NFS client to read and write to the created directory.
mkdir /nfsfileshare1
chmod 777 /nfsfileshare1/

Step 5: Next, modify the /etc/exports file to add an entry for the directory /nfsfileshare1 that you want to share.

1. vi /etc/exports

2. Add an entry for the NFS share to the file, similar to the below.
/nfsfileshare1 10.110.20.111(rw,sync,no_root_squash)

a. /nfsfileshare1: shared directory

b. 10.110.20.111: IP address of the client machine. We can also use the hostname instead of an IP address. It is also possible to define a range of clients with a subnet such as 10.110.20.0/24, or to use ‘*’ so that all clients can access the NFS server (see the examples after this list).

c. rw: Read/write permission on the shared folder

d. sync: Changes are committed to stable storage before the server replies to a request; write operations are not acknowledged until they have been flushed to disk.

e. no_root_squash: By default, any file request made by user root on the client machine is treated as if it were made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user “nobody” on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the server as root on the server.
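
For reference, here is roughly what the alternative client specifications mentioned above would look like in /etc/exports (example values; adjust the subnet to your environment):

/nfsfileshare1 10.110.20.0/24(rw,sync,no_root_squash)
/nfsfileshare1 *(rw,sync,no_root_squash)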

Step 6: Export the shared directory using the below command to complete the NFS server configuration.
exportfs -r
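
To confirm the share is exported as expected, you can optionally run:

exportfs -v
showmount -e localhost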

Step 7: We may need to configure the firewall on the NFS server to allow NFS clients to access the NFS share. To do that, run the following commands on the NFS server.
firewall-cmd --permanent --add-service mountd
firewall-cmd --permanent --add-service rpc-bind
firewall-cmd --permanent --add-service nfs
firewall-cmd --reload
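
As an optional sanity check (assuming nfs-utils is also installed on the client, and using /mnt/nfstest purely as an example path), you can mount the share manually from a client machine before wiring it into Kubernetes:

mkdir -p /mnt/nfstest
mount -t nfs 10.110.20.111:/nfsfileshare1 /mnt/nfstest
touch /mnt/nfstest/testfile && ls -l /mnt/nfstest
umount /mnt/nfstest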

Step 8: Log in to the Kubernetes cluster and create a PV pointing to the NFS server using the below YAML and kubectl command. In this example, we have used a GKE cluster.

$ kubectl apply -f pv-nfs1.yaml --namespace artifactory

pv-nfs1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: artifactory-ha-data-pv-nfs1
  labels:
    id: artifactory-ha-data-pv-nfs1
    type: nfs-volume
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: "10.110.20.111"
    path: "/nfsfileshare1"
    readOnly: false
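
Once applied, you can check that the PV was created and shows the Available status (PVs are cluster-scoped, so no namespace flag is needed):

$ kubectl get pv artifactory-ha-data-pv-nfs1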

Step 9: Create the PVC bound to the PV created in Step 8 using the below YAML and kubectl command.

$ kubectl apply -f pvc-nfs1.yaml --namespace artifactory

pvc-nfs1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifactory-ha-nfs-pvc-nfs1
  labels:
    type: nfs-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      id: artifactory-ha-data-pv-nfs1
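
After applying, the PVC should reach the Bound status against the PV created in Step 8:

$ kubectl get pvc artifactory-ha-nfs-pvc-nfs1 --namespace artifactory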

Step 10: Create another PV for the Artifactory backup using the below YAML and kubectl command. In this example, a second NFS server is used so that the backup is stored on a different NFS share.

$ kubectl apply -f pv-nfs2.yaml --namespace artifactory

pv-nfs2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: artifactory-ha-data-pv-nfs2
  labels:
    id: artifactory-ha-data-pv-nfs2
    type: nfs-volume
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: "10.110.20.112"
    path: "/nfsfileshare2"
    readOnly: false

Step 11: Create the PVC bound to the PV created in Step 10 using the below YAML and kubectl command.

$ kubectl apply -f pvc-nfs2.yaml --namespace artifactory

pvc-nfs2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifactory-ha-nfs-pvc-nfs2
  labels:
    type: nfs-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      id: artifactory-ha-data-pv-nfs2
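
Before moving on to the Helm values, you can optionally confirm that both the data and backup volumes are Bound:

$ kubectl get pv,pvc --namespace artifactory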

Step 12: Create the values.yaml with the below snippet.

values.yaml
postgresql:
  enabled: true
  postgresqlPassword: Password@123
nginx:
  enabled: true
databaseUpgradeReady: true
unifiedUpgradeAllowed: true
artifactory:
  masterKeySecretName: my-masterkey-secret
  joinKeySecretName: my-joinkey-secret
  copyOnEveryStartup:
    - source: /artifactory_bootstrap/binarystore.xml
      target: etc/artifactory/
  customVolumes: |
    - name: "nfs-vol1"
      persistentVolumeClaim:
        claimName: "artifactory-ha-nfs-pvc-nfs1"
    - name: "nfs-vol2"
      persistentVolumeClaim:
        claimName: "artifactory-ha-nfs-pvc-nfs2"
  customVolumeMounts: |
    - name: "nfs-vol1"
      mountPath: "/var/opt/jfrog/artifactory/JFROG/filestore"
    - name: "nfs-vol2"
      mountPath: "/var/opt/jfrog/artifactory/JFROG/backup"
  persistence:
    type: "file-system"
    binarystoreXml: |
      <config version="v1">
        <chain template="file-system"/>
        <provider id="file-system" type="file-system">
          <baseDataDir>/opt/jfrog/artifactory/var/JFROG</baseDataDir>
          <fileStoreDir>filestore</fileStoreDir>
        </provider>
      </config>
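
Note that the above values.yaml references pre-created master key and join key secrets. If they do not already exist, they can be created along the lines of the below sketch (this assumes the chart's default secret key names, master-key and join-key; adjust if your chart version expects different keys):

$ export MASTER_KEY=$(openssl rand -hex 32)
$ export JOIN_KEY=$(openssl rand -hex 32)
$ kubectl create secret generic my-masterkey-secret --namespace artifactory --from-literal=master-key=${MASTER_KEY}
$ kubectl create secret generic my-joinkey-secret --namespace artifactory --from-literal=join-key=${JOIN_KEY}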

Step 13: Execute the below helm command to install Artifactory using helm charts.

$ helm upgrade --install artifactory -f values.yaml --namespace artifactory jfrog-charts/artifactory
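
If the chart repository has not been added yet, it can be added first (here the jfrog-charts alias is assumed to point to JFrog's public chart repository at https://charts.jfrog.io), and the rollout can then be watched until the pods are Running:

$ helm repo add jfrog-charts https://charts.jfrog.io
$ helm repo update
$ kubectl get pods --namespace artifactory -w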

After successful installation, you can verify the configured filestore by logging in to Artifactory and navigating to Administration → Monitoring → Storage.

Also, configure the Artifactory backup to use the ‘/var/opt/jfrog/artifactory/JFROG/backup’ directory that we have mounted in the Artifactory pod. For more information about backups, please refer to our Backups Confluence page.