The filestore is where binaries are physically stored. Artifactory supports a wide range of storage back ends; this section details some of the advanced options for Artifactory storage. For more information, see Artifactory Filestore options.
Setting the Artifactory Persistency Storage Type
In the Helm chart, set the type of storage you want with artifactory.persistence.type and pass the required configuration settings. The default storage in this chart is file-system replication, where the data is replicated to all nodes.
Important
All storage configurations, except Network File System (NFS), come with a default artifactory.persistence.redundancy parameter, which sets how many replicas of a binary are stored on the cluster's nodes. Once this value is set at initial deployment, it cannot be updated using Helm. It is recommended to set it to a number greater than half of your cluster's size, and to never scale your cluster down to fewer nodes than this number.
To use your selected bucket as the HA cluster's filestore, pass the filestore's parameters to the Helm installation/upgrade.
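For example, a minimal values.yaml sketch that keeps the default file-system type and sets the redundancy explicitly could look like the following; the redundancy value of 3 is purely illustrative and should be chosen based on your cluster size as described above.

artifactory:
  persistence:
    ## Default storage type in this chart
    type: file-system
    ## Illustrative value; cannot be changed via Helm after the initial deployment
    redundancy: 3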
Setting up the Network File System (NFS) Storage
To use an NFS server as your cluster's storage, do the following.
1. Set up an NFS server and note its IP address as NFS_IP.
2. Create data and backup directories on the NFS exported directory with write permissions for all (a sketch of this step follows the parameters below).
3. Pass the NFS parameters to the Helm installation/upgrade as follows.
artifactory:
  persistence:
    type: nfs
    nfs:
      ip: ${NFS_IP}
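The directory creation in step 2 is done on the NFS server itself. A minimal sketch, assuming the exported directory is /data/nfs (a hypothetical path; substitute your own export), could look like this:

# On the NFS server (example export path only):
# create the data and backup directories and make them writable to all
mkdir -p /data/nfs/data /data/nfs/backup
chmod 777 /data/nfs/data /data/nfs/backup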
Configuring the NFS Persistence Type
In some cases, the Helm chart cannot set up your NFS mounts for Artifactory automatically (for example, with AWS EFS). In these cases, use artifactory.persistence.type=file-system, even though your underlying persistence is actually a network file system. The same applies when a slow storage device (such as inexpensive disks) is your main storage solution for Artifactory: serving frequently used files from the network file system or slow storage can take time, which is why you would want a cache filesystem stored locally on a fast disk, such as an SSD.
1. Create a values.yaml file.
2. Set up your volume mount to your fast storage device as follows (a sketch of the referenced PersistentVolumeClaim follows these steps).
artifactory:
  ## Set up your volume mount to your fast storage device
  customVolumes: |
    - name: my-cache-fast-storage
      persistentVolumeClaim:
        claimName: my-cache-fast-storage-pvc
  ## Enable caching and configure the cache directory
  customVolumeMounts: |
    - name: my-cache-fast-storage
      mountPath: /my-fast-cache-mount
  ## Enable the file-system cache on the mounted fast storage
  persistence:
    cacheProviderDir: /my-fast-cache-mount
    fileSystem:
      cache:
        enabled: true
3. Install Artifactory with the values file you created.
Artifactory:
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml

Artifactory HA:
helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha -f values.yaml
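The customVolumes entry in step 2 assumes a PersistentVolumeClaim named my-cache-fast-storage-pvc already exists in the release's namespace. A minimal sketch of such a claim, assuming a fast SSD-backed StorageClass named fast-ssd is available in your cluster (the class name and size are hypothetical), could look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-cache-fast-storage-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # hypothetical SSD-backed StorageClass
  resources:
    requests:
      storage: 50Gi            # example size; size the cache for your most frequently served binaries

With the HA chart, every replica mounts this volume, so the access mode and StorageClass must allow that (for example, ReadWriteMany), or you may prefer node-local fast storage for the cache instead.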
Google Storage
You can use a Google Storage bucket as the cluster's filestore by passing the Google Storage parameters below to helm install and helm upgrade. For more information, see Google Storage Binary Provider.
artifactory:
  persistence:
    type: google-storage-v2-direct
    googleStorage:
      bucketName: "artifactory-gcp"
To use a GCP service account, Artifactory requires a gcp.credentials.json file in the same directory as the binarystore.xml file. This can be generated by running the following:
gcloud iam service-accounts keys create <file_name> --iam-account <service_account_name>
This will produce the following output, which can be saved to a file or copied into your values.yaml.
{ "type": "service_account", "project_id": "<project_id>", "private_key_id": "?????", "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n", "client_email": "???@j<project_id>.iam.gserviceaccount.com", "client_id": "???????", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1....." }
One option is to create your own secret and pass it to your helm install in a custom values.yaml.
# Create the Kubernetes secret from the file you created earlier.
# IMPORTANT: The file must be called "gcp.credentials.json" because this is used later as the secret key!
kubectl create secret generic artifactory-gcp-creds --from-file=./gcp.credentials.json
Then set this secret in your custom values.yaml.
artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        customSecretName: artifactory-gcp-creds
Another option is to put your generated config directly in your custom values.yaml; a secret will then be created from it.
artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        config: |
          {
            "type": "service_account",
            "project_id": "<project_id>",
            "private_key_id": "?????",
            "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
            "client_email": "???@j<project_id>.iam.gserviceaccount.com",
            "client_id": "???????",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
          }
AWS S3 V3
To use an AWS S3 bucket as the cluster's filestore and access it with the official AWS SDK, see the S3 Official SDK Binary Provider. Use this template if you want to create an IAM OIDC provider and assign the IAM role to Kubernetes service accounts.
Pass the AWS S3 V3 parameters and the annotation pointing to the IAM role to your helm install in a custom values.yaml.
# Using an existing IAM role
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>
artifactory:
  persistence:
    type: s3-storage-v3-direct
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}
      useInstanceCredentials: true

# Using explicit credentials
artifactory:
  persistence:
    type: s3-storage-v3-direct
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}
      identity: ${AWS_ACCESS_KEY_ID}
      credential: ${AWS_SECRET_ACCESS_KEY}
      useInstanceCredentials: false
To enable Direct Cloud Storage Download, use the following.
artifactory:
  persistence:
    awsS3V3:
      enableSignedUrlRedirect: true
Microsoft Azure Blob Storage
You can use Azure Blob Storage as the cluster's filestore by passing the Azure Blob Storage parameters to helm install and helm upgrade. For more information, see Azure Blob Storage.
artifactory:
  persistence:
    type: azure-blob-storage-v2-direct
    azureBlob:
      accountName: ${AZURE_ACCOUNT_NAME}
      accountKey: ${AZURE_ACCOUNT_KEY}
      endpoint: ${AZURE_ENDPOINT}
      containerName: ${AZURE_CONTAINER_NAME}
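If you prefer not to keep the account key in a values file that is checked into source control, one option is to omit accountKey from values.yaml and supply it in a separate, locally kept values file at install/upgrade time. This is only a sketch using the parameter path shown above; the file name secret-values.yaml is an example.

# secret-values.yaml (kept out of source control)
artifactory:
  persistence:
    azureBlob:
      accountKey: ${AZURE_ACCOUNT_KEY}

# Later values files override earlier ones
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml -f secret-values.yaml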