Deploy Iron Bank-Approved Artifactory/Xray on AWS GovCloud and RKE2

With Artifactory and Xray now included in the U.S. Department of Defense’s Iron Bank container repository, we’re eager to help you benefit from this accreditation. Today, we’ll explain how to deploy these hardened JFrog images on AWS GovCloud using Rancher Kubernetes Engine 2 (RKE2).

Specifically, we’ll describe the installation and configuration of the Iron Bank-accredited Artifactory version 7.21.7 and Xray version 3.30.2. The RKE2 version at hand is the v1.21.4+rke2r3 distribution from the Big Bang infrastructure-as-code (IaC) service of the DoD’s Platform One DevSecOps platform.

You’ll learn how to install and configure the RKE2 cluster using a Terraform script from Big Bang; to create a storage class; and finally to install Artifactory and Xray using Helm charts that pull Iron Bank-accredited images from the Iron Bank registry.

Platform One Requirements

Platform One guidelines for implementing DevSecOps in your environment require you to set up a secure infrastructure to deploy a Kubernetes distribution on a secured cloud platform such as AWS GovCloud. The deployment outlined below is designed to work on AWS GovCloud and the RKE2 Kubernetes distribution, both of which are approved for Platform One.

Deployment of Rancher RKE2 Cluster

Big Bang offers IaC in the form of a Terraform script to deploy and configure the Rancher cluster.

Rancher Cloud Enabled Cluster for AWS


  • We assume an AWS cloud-enabled RKE2 cluster deployed on GovCloud (us-gov-west-1).
  • A Security Technical Implementation Guide (STIG) RHEL 8.3 image is used.
  • The user must have an IAM role with AdministratorAccess.
  • The RKE2 cluster must be provisioned on a new AWS VPC with one control plane node (server) and two node pools where Artifactory and Xray are installed.
  • A non-HA RKE2 cluster is deployed (although this could be changed to HA with configuration changes).

RKE2 Cluster Creation

The RKE2 cluster creation is based on cloud-enabled Terraform scripts that support public cloud providers such as AWS. This assumes you have installed the Terraform CLI on the machine where you will execute the RKE2 Terraform script.

a. Clone the RKE2 Terraform repo from GitHub:

b. The AWS cloud-enabled cluster Terraform is located under `rke2-aws-tf/examples/cloud-enabled`.


c. Modify the Terraform to specify RHEL version 8.3.*, as the earlier 8.0 image has network service issues that make DNS lookups fail. Replace the entire existing “rhel8” data source section with the following (note the changed “RHEL-8.3*” filter value):

data "aws_ami" "rhel8" {
  most_recent = true
  owners      = ["219670896067"] # owner is specific to AWS GovCloud

  filter {
    name   = "name"
    values = ["RHEL-8.3*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}
d. Modify the Terraform to enable the RKE2 cluster to run on AWS by enabling the AWS Cloud Controller Manager in both the rke2 and agents modules:

# Enable AWS Cloud Controller Manager
  enable_ccm = true

e. Specify the latest RKE2 version in both the rke2 and agents modules:

rke2_version = "v1.21.4+rke2r3"

f. If you need high availability of nodes, change server=3 under the rke2 module section.
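Taken together, steps d–f amount to a few lines in the example’s Terraform configuration. The sketch below is illustrative only; the module sources and variable names are assumptions based on the rke2-aws-tf example layout, so check them against your cloned copy:

```hcl
# Sketch only: abbreviated module blocks, other required arguments omitted
module "rke2" {
  source       = "../.."          # path assumed from the example layout
  rke2_version = "v1.21.4+rke2r3"
  enable_ccm   = true             # AWS Cloud Controller Manager
  # server     = 3                # uncomment for an HA control plane
}

module "agents" {
  source       = "../../modules/agent-nodepool" # path assumed
  rke2_version = "v1.21.4+rke2r3"
  enable_ccm   = true
}
```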

Terraform Execution

a. Initialize Terraform from the `rke2-aws-tf/examples/cloud-enabled` directory where you cloned the repository.

terraform init

terraform plan -out=output

b. Execute the Terraform plan. This deploys and configures the AWS cloud-enabled cluster.

terraform apply output

Check the RKE2 cluster to verify that the services (pods) that were installed are properly configured and in a running state by executing the following command:

kubectl get pods -n kube-system

Define Storage Class for AWS 

A storage class needs to be defined for the Rancher cluster to provision storage volumes on AWS, so that persistent volume claims can be bound. Save the StorageClass definition below as storageclass.yaml and apply it with kubectl create -f storageclass.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  fsType: ext4
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer


This creates “gp2” as a storage class on the cluster.

Deployment of JFrog Platform Helm Charts

The deployment instructions that follow are the same as those in the JFrog documentation for installing Artifactory and Xray as Helm charts, using a clone of this Big Bang repo:

Instructions on how to install and configure the Helm charts:

Note: We assume a new database is created as part of the Artifactory install; installing into an existing database is not covered here.
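Under that assumption, the platform chart’s bundled PostgreSQL subchart provisions the database (it appears later as the jfrog-postgresql service). A hypothetical values excerpt is shown below; the key name is an assumption, so verify it against your chart version’s values.yaml:

```yaml
# Assumed key: keeps the PostgreSQL subchart bundled with the platform chart enabled
postgresql:
  enabled: true
```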

Preparation Steps

  1. Create the namespace “artifactory”.
  2. Create a secret named “registry-secret” in the artifactory namespace with credentials for accessing the Iron Bank registry.

kubectl create secret docker-registry registry-secret \
  --docker-server=registry1.dso.mil \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace artifactory
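For illustration, a docker-registry secret stores its credentials as a base64-encoded `.dockerconfigjson` payload keyed by the registry host (registry1.dso.mil is Iron Bank’s registry). The sketch below reproduces that encoding with obviously fake placeholder credentials:

```shell
# Illustrative only: how docker-registry secret credentials are encoded
USERNAME="myuser"        # placeholder, not a real account
PASSWORD="mypassword"    # placeholder, not a real password
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)
printf '{"auths":{"registry1.dso.mil":{"auth":"%s"}}}\n' "$AUTH"
```

This is exactly the structure kubelet uses when pulling images with the secret attached.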

JFrog Helm Charts Customizations

Clone the Iron Bank-specific platform values YAML from this repo, which contains the Iron Bank-specific customizations. Enable only Artifactory and Xray.
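“Enable only Artifactory and Xray” typically translates into subchart toggles like the hypothetical excerpt below; the subchart names and enabled flags are assumptions, so compare them with the jfrog-platform chart’s values.yaml for your version:

```yaml
# Hypothetical excerpt of the platform values file; key names assumed
artifactory-ha:
  enabled: true
xray:
  enabled: true
distribution:
  enabled: false
pipelines:
  enabled: false
```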


Install Artifactory/Xray Chart

The following Helm command installs the JFrog Platform chart, which includes Artifactory and Xray, on the Rancher RKE2 cluster.

helm install jfrog jfrog/jfrog-platform -f values_platform_ib.yaml \
   --set artifactory-ha.artifactory.node.replicaCount=1 \
   --set global.joinKey=xxxxx \
   --set global.masterKey=xxxxxx \
   --namespace artifactory
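The xxxxx placeholders stand for real join and master keys. A common way to generate them, suggested in JFrog’s documentation, is as random hex strings; a minimal sketch:

```shell
# Generate 64-character hex strings suitable for masterKey and joinKey
MASTER_KEY=$(openssl rand -hex 32)
JOIN_KEY=$(openssl rand -hex 32)
echo "masterKey: $MASTER_KEY"
echo "joinKey:   $JOIN_KEY"
```

Keep these values somewhere safe; the same joinKey is needed by any service joining the platform.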

Once the chart installation completes successfully, an AWS load balancer will be created to route traffic to Artifactory.

You will see the services that were created. Note the CNAME of the load balancer listening on port 80; you can create a DNS entry with a custom domain name pointing to this CNAME to start routing traffic to Artifactory.

kubectl get services -n artifactory

NAME                           TYPE           CLUSTER-IP   EXTERNAL-IP           PORT(S)                                 AGE
jfrog-artifactory-ha           ClusterIP      <IP>         <none>                8082/TCP,8081/TCP                       19d
jfrog-artifactory-ha-primary   ClusterIP      <IP>         <none>                8082/TCP,8081/TCP                       19d
jfrog-nginx                    LoadBalancer   <IP>         <loadbalancer name>   80:31886/TCP,443:30736/TCP              19d
jfrog-postgresql               ClusterIP      <IP>         <none>                5432/TCP                                19d
jfrog-postgresql-headless      ClusterIP      None         <none>                5432/TCP                                19d
jfrog-rabbitmq                 ClusterIP      <IP>         <none>                5672/TCP,4369/TCP,25672/TCP,15672/TCP   19d
jfrog-rabbitmq-headless        ClusterIP      None         <none>                4369/TCP,5672/TCP,25672/TCP,15672/TCP   19d
jfrog-xray                     ClusterIP      <IP>         <none>                80/TCP,8082/TCP                         19d


To log in to Artifactory, point your browser to:

http://<loadbalancer name>

The first time you log in, the default username is “admin” and the default password is “password”.

You will need to set the Artifactory license as part of the setup.


This is how you install Iron Bank-accredited Artifactory and Xray images using Big Bang IaC, JFrog Platform Helm charts, and Rancher RKE2.

Want to learn more about the Iron Bank accreditation for Xray and Artifactory, and about how JFrog can help government agencies and contractors boost their DevOps efforts? Sign up for our webinar “Aligning To the DoD Enterprise DevSecOps Reference Design” on Nov. 17 at 11 am PT.