ARTIFACTORY: Installation Quick Start Guide – Artifactory on EKS with an NLB, an S3 Bucket, and the Subdomain Docker Access Method
Overview:
This example demonstrates the steps to install Artifactory via a Helm deployment on an EKS cluster, with S3 as the backend storage and a Network Load Balancer (NLB) in front.
All commands used in this example are in Helm v3 format.
The Artifactory Helm charts used in this example can be found on our GitHub page; review the prerequisites there before proceeding with the installation.
Step 1:
Since we are going to use a Network Load Balancer available on AWS, we need to make sure that ACM certificates are created for the DNS names and that the corresponding domain records exist in Route 53.
Note: As we are going to use the Subdomain method for Docker, we need to create the domain names using both <dns> and *.<dns>
In this case, if we use the domain name “test.eks.com”, we also need to create a domain record for *.test.eks.com
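As a sketch, a single ACM certificate covering both the apex domain and the wildcard can be requested with the AWS CLI; the domain name and region below are this example's placeholders, so substitute your own:

```shell
# Request one ACM certificate for test.eks.com plus *.test.eks.com, using DNS validation
# (the validation CNAME records must then be created in Route 53)
aws acm request-certificate \
  --domain-name "test.eks.com" \
  --subject-alternative-names "*.test.eks.com" \
  --validation-method DNS \
  --region eu-north-1
```

The returned certificate ARN is what gets passed later in the `aws-load-balancer-ssl-cert` service annotation.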
Step 2:
As we are going to use an S3 bucket for storing the binaries (Artifactory's checksum-based filestore), create the S3 bucket, and create an IAM role to connect to it using the instructions available in our knowledge base article.
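A minimal sketch of this step with the AWS CLI, assuming the bucket name, region, and cluster name below are placeholders for your own values:

```shell
# Create the bucket that will hold Artifactory's filestore
aws s3 mb s3://bucket-name --region eu-north-1

# IAM Roles for Service Accounts (IRSA) requires the cluster to have an OIDC provider;
# confirm one exists before creating the role ("my-eks-cluster" is a placeholder)
aws eks describe-cluster --name my-eks-cluster \
  --query "cluster.identity.oidc.issuer" --output text
```

The role itself (trust policy bound to the OIDC provider, plus an S3 access policy) is created per the knowledge base article; its ARN is used in the serviceAccount annotation below.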
Step 3:
Once the certificates and the S3 bucket are created, let's construct the values.yaml file as shown in the example below:
serviceAccount:
  create: true
  name: artifactory
  annotations:
    eks.amazonaws.com/role-arn: <use the role arn created to access s3 bucket>
artifactory:
  joinKeySecretName: joinkey-secret
  masterKeySecretName: masterkey-secret
  license:
    secret: artifactory-cluster-license
    dataKey: artifactory.txt
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    customBinarystoreXmlSecret: custom-binarystore
databaseUpgradeReady: true
nginx:
  enabled: true
  artifactoryConf: |
    {{- if .Values.nginx.https.enabled }}
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_certificate {{ .Values.nginx.persistence.mountPath }}/ssl/tls.crt;
    ssl_certificate_key {{ .Values.nginx.persistence.mountPath }}/ssl/tls.key;
    ssl_session_cache shared:SSL:1m;
    ssl_prefer_server_ciphers on;
    {{- end }}
    ## server configuration
    server {
      {{- if .Values.nginx.internalPortHttps }}
      listen {{ .Values.nginx.internalPortHttps }} ssl;
      {{- else -}}
      {{- if .Values.nginx.https.enabled }}
      listen {{ .Values.nginx.https.internalPort }} ssl;
      {{- end }}
      {{- end }}
      {{- if .Values.nginx.internalPortHttp }}
      listen {{ .Values.nginx.internalPortHttp }};
      {{- else -}}
      {{- if .Values.nginx.http.enabled }}
      listen {{ .Values.nginx.http.internalPort }};
      {{- end }}
      {{- end }}
      server_name ~(?<repo>.+)\.test.eks.com test.eks.com
      {{- range .Values.ingress.hosts -}}
        {{- if contains "." . -}}
          {{ "" | indent 0 }} ~(?<repo>.+)\.{{ . }}
        {{- end -}}
      {{- end -}};

      if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
      }
      ## Application specific logs
      ## access_log /var/log/nginx/artifactory-access.log timing;
      ## error_log /var/log/nginx/artifactory-error.log;
      rewrite ^/artifactory/?$ / redirect;
      if ( $repo != "" ) {
        rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break;
      }
      chunked_transfer_encoding on;
      client_max_body_size 0;

      location / {
        proxy_ssl_server_name on;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
        proxy_pass_header Server;
        proxy_request_buffering off;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_cookie_path ~*^/.* /;
        proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalPort }}/;
        {{- if .Values.nginx.service.ssloffload}}
        proxy_set_header X-JFrog-Override-Base-Url https://test.eks.com;
        {{- else }}
        proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
        proxy_set_header X-Forwarded-Port $server_port;
        {{- end }}
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header Strict-Transport-Security "max-age=31536000" always;

        location /artifactory/ {
          if ( $request_uri ~ ^/artifactory/(.*)$ ) {
            proxy_pass http://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1;
          }
          proxy_pass http://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/;
        }
      }
    }
  http:
    enabled: true
  https:
    enabled: true
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
      service.beta.kubernetes.io/aws-load-balancer-internal: "false"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-north-1:XXXXXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    externalTrafficPolicy: Local
    ssloffload: true
    type: LoadBalancer
postgresql:
  postgresqlPassword: password
unifiedUpgradeAllowed: true
Note: The environment-specific fields above (the service account role ARN, the server_name DNS entries, the X-JFrog-Override-Base-Url, and the ACM certificate ARN) need to be updated with your own customized values.
To create the secrets for the license, joinKey, and masterKey, refer to the instructions available here: https://jfrog.com/knowledge-base/artifactory-installation-quick-start-guide-helm/
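For reference, the secrets referenced in the values.yaml above can be created along these lines (a sketch: the secret key names join-key and master-key match what the chart expects for joinKeySecretName/masterKeySecretName, and the license file path ./artifactory.txt is a placeholder):

```shell
# Generate random join and master keys and store them as Kubernetes secrets
kubectl create secret generic joinkey-secret \
  --from-literal=join-key=$(openssl rand -hex 32)
kubectl create secret generic masterkey-secret \
  --from-literal=master-key=$(openssl rand -hex 32)

# License secret: the data key must match artifactory.license.dataKey (artifactory.txt)
kubectl create secret generic artifactory-cluster-license \
  --from-file=artifactory.txt=./artifactory.txt
```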
Step 4:
For the S3 configuration, create a secret with your bucket details updated. Save the file below as custom-binarystore.yaml and apply it with the command “kubectl apply -f custom-binarystore.yaml”:
kind: Secret
apiVersion: v1
metadata:
  name: custom-binarystore
  labels:
    app: artifactory
    chart: artifactory
stringData:
  binarystore.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <config version="2">
        <chain template="s3-storage-v3-direct" />
        <provider id="s3-storage-v3" type="s3-storage-v3">
            <testConnection>true</testConnection>
            <region>eu-north-1</region>
            <bucketName>bucket-name</bucketName>
            <path>artifactory</path>
            <endpoint>http://s3.amazonaws.com</endpoint>
            <useInstanceCredentials>true</useInstanceCredentials>
            <usePresigning>false</usePresigning>
            <maxConnections>200</maxConnections>
            <connectionTimeout>120000</connectionTimeout>
            <socketTimeout>240000</socketTimeout>
            <signatureExpirySeconds>300</signatureExpirySeconds>
        </provider>
    </config>
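A quick sanity check after applying the secret, assuming the release is installed into an artifactory namespace (the namespace name is an assumption of this sketch):

```shell
# Apply the custom binarystore secret and confirm its contents landed
kubectl apply -f custom-binarystore.yaml -n artifactory
kubectl get secret custom-binarystore -n artifactory \
  -o jsonpath='{.data.binarystore\.xml}' | base64 -d | head -n 3
```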
Now that all the values are in place, let's review the configuration along with the parameters used:
- serviceAccount: At installation time we create the service account with the IAM role created for connecting to the S3 bucket, so that the Artifactory pods can reach S3. Hence we set serviceAccount.create=true and pass the IAM role ARN (created in Step 2 above) under the annotation “serviceAccount.annotations: eks.amazonaws.com/role-arn: <use the role arn created to access s3 bucket>” in values.yaml.
- In the artifactoryConf under nginx, the server_name must match the DNS name of the certificates, using the regex pattern “server_name ~(?<repo>.+)\.test.eks.com test.eks.com”.
- The “X-JFrog-Override-Base-Url” set via proxy_set_header should match the “https” endpoint that uses the DNS name.
- Pass the certificate ARN in the annotation “service.beta.kubernetes.io/aws-load-balancer-ssl-cert”.
- Finally, perform the helm installation.
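The installation itself can be sketched as follows, assuming the file above is saved as values.yaml and the release goes into a dedicated artifactory namespace:

```shell
# Add the JFrog charts repository and install Artifactory with the custom values
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm upgrade --install artifactory jfrog/artifactory \
  -f values.yaml \
  --namespace artifactory --create-namespace
```

Once the pods are ready, the NLB hostname from `kubectl get svc -n artifactory` is what the Route 53 records should point at.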
How to test the Docker operations?
Since Nginx uses a server_name that includes the regex pattern, we can use the formats below to perform docker login and docker pull/push.
Example: if the server_name contains “~(?<repo>.+)\.test.eks.com test.eks.com”, it indicates that the server is configured to use the subdomain Docker access method.
Then to perform docker login
Use:
docker login <repositoryname>.test.eks.com
To perform Docker pull/push:
Use:
docker pull <repositoryname>.test.eks.com/image:tag
docker push <repositoryname>.test.eks.com/image:tag
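To see why this works, the Nginx rewrite above can be mirrored in a small shell sketch: the subdomain becomes the $repo capture, and any /v1 or /v2 Docker registry request is rewritten onto Artifactory's Docker API path (test.eks.com is this example's placeholder domain):

```shell
host="docker-local.test.eks.com"
uri="/v2/image/manifests/latest"

# Mirror nginx's named capture: everything before .test.eks.com becomes $repo
repo=$(printf '%s' "$host" | sed -n 's/^\(.*\)\.test\.eks\.com$/\1/p')

# Mirror: rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2
if [ -n "$repo" ]; then
  rewritten=$(printf '%s' "$uri" | sed -n "s|^/\(v[12]\)/\(.*\)|/artifactory/api/docker/$repo/\1/\2|p")
  [ -n "$rewritten" ] || rewritten="$uri"
else
  rewritten="$uri"
fi
echo "$rewritten"
```

So a `docker pull docker-local.test.eks.com/image:tag` is proxied to the docker-local repository's registry API inside Artifactory, which is exactly what the subdomain access method relies on.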