Advanced Storage Options

The filestore is where binaries are physically stored, and it is one of the two stores essential to Artifactory's storage and management resources. Artifactory supports a wide range of storage back ends. This section details some of the advanced options for Artifactory storage; for more information, see Artifactory Filestore Options.

Setting the Artifactory Persistency Storage Type

In the Helm chart, set the type of storage you want with artifactory.persistence.type and pass the required configuration settings. The default storage in this chart is file-system replication, where the data is replicated to all nodes.

Important

All storage configurations, except Network File System (NFS), come with a default artifactory.persistence.redundancy parameter. This is used to set how many replicas of a binary should be stored in the cluster's nodes. Once this value is set on initial deployment, you cannot update it using Helm. It is recommended to set this to a number greater than half of your cluster's size, and to never scale your cluster down to a size smaller than this number.

To use your selected bucket as the HA's filestore, pass the filestore's parameters to the Helm installation/upgrade.
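
For example, a minimal sketch of an HA installation that keeps the default file-system storage and sets the redundancy; the release name, namespace, and redundancy value here are placeholders to adapt to your environment:

helm upgrade --install artifactory-ha jfrog/artifactory-ha \
  --namespace artifactory-ha \
  --set artifactory.persistence.redundancy=3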

Setting up the Network File System (NFS) Storage

To use an NFS server as your cluster's storage, you will need to do the following.

  1. Set up an NFS server and get its IP as NFS_IP.

  2. Create data and backup directories on the NFS exported directory with write permissions to all (see the sketch after this list).

  3. Pass NFS parameters to the Helm installation/upgrade as follows.

    artifactory:
     persistence:
       type: nfs
       nfs:
         ip: ${NFS_IP}
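
A minimal end-to-end sketch of these steps, assuming a hypothetical NFS export path of /srv/nfs/artifactory and the release and namespace names used elsewhere on this page:

# On the NFS server: create the data and backup directories with write permissions to all
mkdir -p /srv/nfs/artifactory/data /srv/nfs/artifactory/backup
chmod -R 777 /srv/nfs/artifactory

# Pass the NFS parameters to the Helm installation/upgrade
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory \
  --set artifactory.persistence.type=nfs \
  --set artifactory.persistence.nfs.ip=${NFS_IP}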
Configuring the NFS Persistence Type

In some cases, it is not possible for the Helm chart to set up your NFS mounts automatically for Artifactory. In these cases (for example, AWS EFS), use artifactory.persistence.type=file-system, even though your underlying persistence is actually a network file system.

The same applies when using a slow storage device (such as cheap disks) as your main storage solution for Artifactory. Serving frequently used files from a network file system or slow storage can take time, which is why you would want a cache filesystem stored locally on fast disks such as SSDs.

  1. Create a values.yaml file.

  2. Set up your volume mount to your fast storage device as follows.

    artifactory:
      ## Set up your volume mount to your fast storage device
      customVolumes: |
        - name: my-cache-fast-storage
          persistentVolumeClaim:
            claimName: my-cache-fast-storage-pvc
      ## Mount the volume at the cache directory path
      customVolumeMounts: |
        - name: my-cache-fast-storage
          mountPath: /my-fast-cache-mount
      ## Enable caching and configure the cache directory
      persistence:
        cacheProviderDir: /my-fast-cache-mount
        fileSystem:
          cache:
            enabled: true
  3. Install Artifactory with the values file you created.

    Artifactory

    helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml

    Artifactory HA

    helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha -f values.yaml
Google Storage

You can use a Google Storage bucket as the cluster's filestore by passing the Google Storage parameters below to helm install and helm upgrade. For more information, see Google Storage Binary Provider.

artifactory:
 persistence:
   type: google-storage
   googleStorage:
     identity: ${GCP_ID}
     credential: ${GCP_KEY}

To use a GCP service account, Artifactory requires a gcp.credentials.json file in the same directory as the binarystore.xml file.

This can be generated by running the following.

gcloud iam service-accounts keys create <file_name> --iam-account <service_account_name>

This will produce the following, which can be saved to a file or copied into your values.yaml.

{
   "type": "service_account",
   "project_id": "<project_id>",
   "private_key_id": "?????",
   "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
   "client_email": "???@j<project_id>.iam.gserviceaccount.com",
   "client_id": "???????",
   "auth_uri": "https://accounts.google.com/o/oauth2/auth",
   "token_uri": "https://oauth2.googleapis.com/token",
   "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
   "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
}

One option is to create your own secret and pass it to your helm install in a custom values.yaml.

# Create the Kubernetes secret from the file you created earlier.
# IMPORTANT: The file must be called "gcp.credentials.json" because this is used later as the secret key!
kubectl create secret generic artifactory-gcp-creds --from-file=./gcp.credentials.json
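
If your local key file has a different name, you can set the secret key explicitly instead of renaming the file; my-key.json below is a hypothetical file name.

# Equivalent command that names the secret key explicitly, regardless of the local file name
kubectl create secret generic artifactory-gcp-creds --from-file=gcp.credentials.json=./my-key.json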

Set this secret in your custom values.yaml.

artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        customSecretName: artifactory-gcp-creds

Another option is to put your generated config directly in your custom values.yaml; a secret will then be created from it.

artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        config: |
          {
             "type": "service_account",
             "project_id": "<project_id>",
             "private_key_id": "?????",
             "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
             "client_email": "???@j<project_id>.iam.gserviceaccount.com",
             "client_id": "???????",
             "auth_uri": "https://accounts.google.com/o/oauth2/auth",
             "token_uri": "https://oauth2.googleapis.com/token",
             "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
             "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
          }
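
With either option in place, install or upgrade with your custom values.yaml, following the same pattern used elsewhere on this page (release and namespace names are placeholders). The values file should also include the google-storage persistence settings shown earlier.

helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml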
AWS S3 V3

To use an AWS S3 bucket as the cluster's filestore and access it with the official AWS SDK, see the S3 Official SDK Binary Provider. Use this template if you want to attach an IAM role to the Artifactory pod directly (as opposed to attaching it to the machine(s) that Artifactory runs on).

Note

You will need to combine this with a Kubernetes mechanism for attaching IAM roles to pods, such as kube2iam.

Pass the AWS S3 V3 parameters and, when using an IAM role, the annotation pointing to that role (the annotation shown here is kube2iam-specific and may vary depending on the implementation) to helm install and helm upgrade.

# Using explicit credentials:
artifactory:
 persistence:
   type: aws-s3-v3
   awsS3V3:
     region: ${AWS_REGION}
     bucketName: ${AWS_S3_BUCKET_NAME}
     identity: ${AWS_ACCESS_KEY_ID}
     credential: ${AWS_SECRET_ACCESS_KEY}
     useInstanceCredentials: false

# Using an existing IAM role
artifactory:
  annotations:
    iam.amazonaws.com/role: ${AWS_IAM_ROLE_ARN}
  persistence:
    type: aws-s3-v3
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}
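
The same values can be passed with --set flags instead of a values file; note that when using --set, the dots inside the annotation key must be escaped. A sketch, using the kube2iam-style annotation shown above:

helm upgrade --install artifactory jfrog/artifactory --namespace artifactory \
  --set artifactory.persistence.type=aws-s3-v3 \
  --set artifactory.persistence.awsS3V3.region=${AWS_REGION} \
  --set artifactory.persistence.awsS3V3.bucketName=${AWS_S3_BUCKET_NAME} \
  --set artifactory.annotations."iam\.amazonaws\.com/role"=${AWS_IAM_ROLE_ARN}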

To enable Direct Cloud Storage Download, use the following.

artifactory:
 persistence:
   awsS3V3:
     enableSignedUrlRedirect: true
Microsoft Azure Blob Storage

You can use Azure Blob Storage as the cluster's filestore by passing the Azure Blob Storage parameters to helm install and helm upgrade. For more information, see Azure Blob Storage.

artifactory:
 persistence:
   type: azure-blob
   azureBlob:
     accountName: ${AZURE_ACCOUNT_NAME}
     accountKey: ${AZURE_ACCOUNT_KEY}
     endpoint: ${AZURE_ENDPOINT}
     containerName: ${AZURE_CONTAINER_NAME}

To use a persistent volume claim as the cache directory together with Azure Blob Storage, also pass the following parameters to helm install and helm upgrade (verify that mountPath and cacheProviderDir point to the same location).

artifactory:
 persistence:
   existingClaim: ${YOUR_CLAIM}
   mountPath: /opt/cache-dir
   cacheProviderDir: /opt/cache-dir
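
For reference, a minimal sketch of a persistent volume claim that could back this cache directory; the name, storage class, and size below are assumptions to adapt to your cluster:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifactory-cache-dir
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 50Gi

Its metadata.name is what you would pass as artifactory.persistence.existingClaim (${YOUR_CLAIM} in the snippet above).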
Custom binarystore.xml

There are two options for providing a custom binarystore.xml.

  1. Editing directly in the values.yaml.

    artifactory:
      persistence:
        binarystoreXml: |
          <!-- The custom XML snippet -->
          <config version="v1">
              <chain template="file-system"/>
          </config>
  2. Alternatively, create your own secret and pass it to your helm install command, as shown in the remaining steps. First, prepare your custom Secret file.

    # Prepare your custom Secret file (custom-binarystore.yaml)
    kind: Secret
    apiVersion: v1
    metadata:
      name: custom-binarystore
      labels:
        app: artifactory
        chart: artifactory
    stringData:
      binarystore.xml: |-
          <!-- The custom XML snippet -->
          <config version="v1">
              <chain template="file-system"/>
          </config>
  3. Next, create a secret from the file.

    kubectl apply -n artifactory -f ./custom-binarystore.yaml
  4. Pass the secret to your helm install command.

    Artifactory

    helm upgrade --install artifactory --namespace artifactory --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore jfrog/artifactory

    Artifactory HA

    helm upgrade --install artifactory-ha --namespace artifactory-ha --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore jfrog/artifactory-ha