ARTIFACTORY: Installation Quick Start Guide – Artifactory on EKS with an NLB, an S3 bucket, and the subdomain Docker access method

Vignesh Surendrababu
2023-01-22 11:10


This example demonstrates the steps to install Artifactory via a Helm deployment on an EKS cluster, with S3 as the backend storage and a Network Load Balancer (NLB) in front.

All commands used in the example are in Helm V3 format. 

The Artifactory Helm charts used in this example can be found on our GitHub page; review the prerequisites before proceeding with the installation.

Step 1:

Since we are going to use a Network Load Balancer available on AWS, we need to make sure that the ACM certificates are created for the DNS names and that the domain records are created on Route 53.


Note: As we are going to use the subdomain method for Docker, we need to create DNS records for both <dns> and *.<dns>. In other words, whatever domain name we choose, a wildcard record for its subdomains must be created alongside the record for the domain itself.
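The reason for the wildcard record is that each Docker repository becomes its own subdomain. A minimal sketch of the mapping, using the hypothetical base domain "artifactory.example.com" and repository key "docker-local":

```shell
# With the subdomain method, the Docker repository key is the first DNS label.
# Stripping the base domain recovers the repository key, mirroring the
# nginx server_name regex ~(?<repo>.+)\.<dns> used later in this guide.
DOMAIN="artifactory.example.com"   # hypothetical base domain
HOST="docker-local.${DOMAIN}"      # registry hostname a Docker client would use
REPO=$(printf '%s' "$HOST" | sed -E "s/\.${DOMAIN}\$//")
echo "$REPO"   # docker-local
```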

Step 2:

As we are going to use an S3 bucket for storing the checksum-addressed binaries, create the S3 bucket and an IAM role for connecting to it, following the instructions available in our knowledge base article.
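The linked knowledge base article covers creating the IAM role in detail. As a rough sketch only, the role's permission policy needs at least the following S3 actions (the bucket name is a placeholder; your exact policy may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::<your-s3-bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<your-s3-bucket-name>/*"
    }
  ]
}
```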

Step 3:

Once the certificates and the S3 bucket are created, construct the values.yaml file as shown in the example below:
serviceAccount:
  create: true
  name: artifactory
  annotations:
    eks.amazonaws.com/role-arn: <use the role arn created to access s3 bucket>
artifactory:
  joinKeySecretName: joinkey-secret
  masterKeySecretName: masterkey-secret
  license:
    secret: artifactory-cluster-license
    dataKey: artifactory.txt
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    customBinarystoreXmlSecret: custom-binarystore
databaseUpgradeReady: true
nginx:
  enabled: true
  artifactoryConf: |
    {{- if .Values.nginx.https.enabled }}
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_certificate  {{ .Values.nginx.persistence.mountPath }}/ssl/tls.crt;
    ssl_certificate_key  {{ .Values.nginx.persistence.mountPath }}/ssl/tls.key;
    ssl_session_cache shared:SSL:1m;
    ssl_prefer_server_ciphers   on;
    {{- end }}
    ## server configuration
    server {
      {{- if .Values.nginx.internalPortHttps }}
      listen {{ .Values.nginx.internalPortHttps }} ssl;
      {{- else -}}
      {{- if .Values.nginx.https.enabled }}
      listen {{ .Values.nginx.https.internalPort }} ssl;
      {{- end }}
      {{- end }}
      {{- if .Values.nginx.internalPortHttp }}
      listen {{ .Values.nginx.internalPortHttp }};
      {{- else -}}
      {{- if .Values.nginx.http.enabled }}
      listen {{ .Values.nginx.http.internalPort }};
      {{- end }}
      {{- end }}
      server_name ~(?<repo>.+)\.{{ include "artifactory.fullname" . }} {{ include "artifactory.fullname" . }}
      {{- range .Values.ingress.hosts -}}
        {{- if contains "." . -}}
          {{ "" | indent 0 }} ~(?<repo>.+)\.{{ . }}
        {{- end -}}
      {{- end -}};
      if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto  $scheme;
      }
      ## Application specific logs
      ## access_log /var/log/nginx/artifactory-access.log timing;
      ## error_log /var/log/nginx/artifactory-error.log;
      rewrite ^/artifactory/?$ / redirect;
      if ( $repo != "" ) {
        rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break;
      }
      chunked_transfer_encoding on;
      client_max_body_size 0;
      location / {
        proxy_ssl_server_name on;
        proxy_read_timeout  2400;
        proxy_send_timeout  2400;
        proxy_pass_header   Server;
        proxy_request_buffering off;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_cookie_path   ~*^/.* /;
        proxy_pass          {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalPort }}/;
        {{- if .Values.nginx.service.ssloffload}}
        proxy_set_header    X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host;
        {{- else }}
        proxy_set_header    X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
        proxy_set_header    X-Forwarded-Port  $server_port;
        {{- end }}
        proxy_set_header    X-Forwarded-Port  443;
        proxy_set_header    X-Forwarded-Proto https;
        proxy_set_header    Host              $http_host;
        proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        location /artifactory/ {
          if ( $request_uri ~ ^/artifactory/(.*)$ ) {
            proxy_pass       http://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1;
          }
          proxy_pass         http://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/;
        }
      }
    }
  http:
    enabled: true
  https:
    enabled: true
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "false"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-north-1:XXXXXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    externalTrafficPolicy: Local
    ssloffload: true
    type: LoadBalancer
postgresql:
  postgresqlPassword: password
unifiedUpgradeAllowed: true
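To see what the Docker rewrite rule in the nginx configuration above does, here is a minimal sketch of the URI transformation (the repository key "docker-local", normally captured into $repo from the subdomain, is a hypothetical example):

```shell
# A /v2 request arriving on a repository subdomain is rewritten to the
# Artifactory Docker API path: /v2/<path> -> /artifactory/api/docker/<repo>/v2/<path>
REPO="docker-local"                          # hypothetical repository key
URI="/v2/library/ubuntu/manifests/latest"    # what a Docker client requests
REWRITTEN=$(printf '%s' "$URI" | sed -E "s#^/(v1|v2)/(.*)#/artifactory/api/docker/${REPO}/\1/\2#")
echo "$REWRITTEN"   # /artifactory/api/docker/docker-local/v2/library/ubuntu/manifests/latest
```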

Note: Replace the placeholder values above (the IAM role ARN, the ACM certificate ARN, and the secret names) with the values for your environment.

To create the secrets for the license, joinKey, and masterKey, refer to the instructions available here:
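As a rough sketch of what those instructions produce, the three secrets referenced in values.yaml could be created as follows. The secret key names (join-key, master-key) and the license file path are assumptions; verify them against the linked instructions:

```shell
# Generate hypothetical join and master keys, then create the secrets
# referenced by joinKeySecretName, masterKeySecretName, and artifactory.license.
JOIN_KEY=$(openssl rand -hex 16)
MASTER_KEY=$(openssl rand -hex 16)
kubectl create secret generic joinkey-secret   --from-literal=join-key="$JOIN_KEY"
kubectl create secret generic masterkey-secret --from-literal=master-key="$MASTER_KEY"
# dataKey in values.yaml is artifactory.txt, so the file must be stored under that key
kubectl create secret generic artifactory-cluster-license \
  --from-file=artifactory.txt=./artifactory.lic
```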

Step 4:  

For using the S3 configuration, create a secret with your bucket details. Save the file below as custom-binarystore.yaml, then apply it with the command “kubectl apply -f custom-binarystore.yaml”.

kind: Secret
apiVersion: v1
metadata:
  name: custom-binarystore
  labels:
    app: artifactory
    chart: artifactory
stringData:
  binarystore.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <config version="2">
      <chain template="s3-storage-v3-direct" />
      <provider id="s3-storage-v3" type="s3-storage-v3">
        <bucketName><your-s3-bucket-name></bucketName>
        <path>artifactory</path>
        <region><your-s3-bucket-region></region>
        <endpoint>s3.<your-s3-bucket-region>.amazonaws.com</endpoint>
      </provider>
    </config>

Now that all the values are in place, let's review the configuration and the parameters used:

  1. ServiceAccount: At installation time, we create the serviceAccount using the IAM role set up for connecting to the S3 bucket, so that the Artifactory pods can reach S3. Hence we set serviceAccount.create=true and pass the role ARN created in Step 2 above under “serviceAccount.annotations” in values.yaml.
  2. In the artifactoryConf used under Nginx, the server name must match the DNS name of the certificates, using the regex pattern “server_name ~(?<repo>.+)”.
  3. The “X-JFrog-Override-Base-Url” set as a proxy_set_header should match the “https” endpoint that uses the DNS name.
  4. Pass the ACM certificate ARN in the nginx service annotations in values.yaml.
  5. Finally, perform the helm installation.
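The installation step itself can be sketched as follows, assuming the JFrog Helm repository and a namespace named "artifactory" (the release name and namespace are examples):

```shell
# Add the JFrog chart repository and install Artifactory with the
# customized values file prepared in Step 3.
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm upgrade --install artifactory jfrog/artifactory \
  -f values.yaml -n artifactory --create-namespace
```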


How to test the docker operations?

Since Nginx uses a server_name that includes the regex pattern, we can use the format below to perform docker login and docker pull/push.
Example: if the server_name contains “~(?<repo>.+)”, the server is configured to use the subdomain Docker access method.
Then, to perform a docker login:
docker login <repositoryname>.<dns>
To perform a docker pull/push:
docker pull <repositoryname>.<dns>/<imagename>:<tag>
docker push <repositoryname>.<dns>/<imagename>:<tag>
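A concrete (hypothetical) example, assuming the base DNS name "artifactory.example.com" and a Docker repository with the key "docker-local" — the registry hostname is simply <repositoryname>.<dns>:

```shell
# Assemble the registry hostname used by the subdomain access method
# and print the corresponding Docker commands.
DOMAIN="artifactory.example.com"   # hypothetical base domain
REPO="docker-local"                # hypothetical Docker repository key
REGISTRY="${REPO}.${DOMAIN}"
echo "docker login ${REGISTRY}"
echo "docker pull ${REGISTRY}/hello-world:latest"
echo "docker push ${REGISTRY}/hello-world:latest"
```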