Prerequisites
Choose an Authentication Method
Artifactory uses the standard S3 SDK for authentication. Use one of the following authentication methods:
Connect an IAM role to ensure your machine has access to the S3 buckets in the cloud. The IAM role must have one of the following policies:
The pre-made policy (AmazonS3FullAccess), or...
Your own custom policy with these action permissions:
s3:ListBucket, s3:ListBucketVersions, s3:ListBucketMultipartUploads, s3:GetBucketLocation, s3:GetObject, s3:GetObjectVersion, s3:PutObject, s3:DeleteObject, s3:ListMultipartUploadParts, s3:AbortMultipartUpload, s3:ListAllMyBuckets, s3:CreateBucket
You need to connect the IAM role to an EC2 instance or an EKS instance. For information on connecting to an EC2 instance, click here. For information on connecting to an EKS instance, click here.
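If you prefer a custom policy over AmazonS3FullAccess, a minimal policy document granting the actions listed above might look like the following sketch (the bucket name and ARNs are placeholders; adjust the Resource entries to your own bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:CreateBucket"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket", "s3:ListBucketVersions", "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation", "s3:GetObject", "s3:GetObjectVersion",
        "s3:PutObject", "s3:DeleteObject",
        "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::my-artifactory-bucket",
        "arn:aws:s3:::my-artifactory-bucket/*"
      ]
    }
  ]
}
```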
Pass credentials explicitly in the provider chain. Artifactory providers use the terms identity and credential. These translate to:
identity = access key
credential = secret key / secret access key
Here is an example showing an access key and secret access key for AWS:
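As a sketch with placeholder values (the element names are the binarystore.xml parameters covered in the configuration steps below; the key values are AWS's documented example credentials):

```xml
<!-- Placeholder values; replace with your own AWS access key and secret access key -->
<identity>AKIAIOSFODNN7EXAMPLE</identity>
<credential>wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY</credential>
```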
Here is an example showing an access key and secret key for an external S3 provider:
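As a sketch with placeholder values (the provider hostname is hypothetical; a non-AWS, S3-compatible provider also requires a custom endpoint):

```xml
<!-- Placeholder values for an external S3-compatible provider -->
<endpoint>s3.example-provider.com</endpoint>
<identity>myAccessKey</identity>
<credential>mySecretKey</credential>
```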
You can read more about access credentials here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
Choose a Template
Decide which type of template you want to use. Choose from one of the following:
Direct (recommended): Uploads binaries directly to the S3 bucket. This mechanism generally allows for faster uploads, since there is less contention for disk IO, particularly when Artifactory is hosted on AWS. The direct template can be used for both single-node and high-availability (HA) configurations. A shared mount is not needed for HA.
For more information on the direct template, click here.
Eventual: Temporarily stores binaries locally and returns a 201 response. Binaries are uploaded to the bucket at a later time. This is useful if the S3 upload speed is slow (for example, when Artifactory is not on the same network as the bucket). The eventual upload mechanism also allows you to upload when S3 is down or experiencing network issues.
This template can be used for single-node and HA configurations, but for HA a shared mount is needed.
For more information on the eventual template, click here.
Eventual Cluster: Uses the same mechanism as eventual, but suitable for HA environments only. A shared mount is not needed. For more information on the eventual cluster template, click here.
S3 Sharding: Like direct, but also allows connection to multiple buckets. For more information on the S3 Sharding template, click here.
Configure Artifactory with S3 Storage
Once you have implemented your authentication method and decided on the type of template you want to use, you are ready to start configuration. There are two options for configuring Artifactory with S3 storage:
To configure Artifactory with S3 Storage (non-Kubernetes):
1. Create an S3 bucket.
2. Go to the $JFROG_HOME/artifactory/var/etc/artifactory folder and open binarystore.xml.
   Note: $JFROG_HOME is a variable for the path of the installer type. For the possible values for this variable, click here.
3. Open a template editor of your choice and replace the default template in binarystore.xml with the template you want to use (i.e., direct, eventual, cluster, or sharding).
4. Set the following parameters in the template:
   <bucketName>bucketName</bucketName>: Enter the name of the bucket that you created in Step 1.
   <path>filestore</path>: The path inside the bucket where binaries will be saved. If you don't enter a path here, the default path filestore is used.
5. Set either the <endpoint> or <region> parameter according to the following criteria:
   If you use the default value for <endpoint>, you need to set <region>. Get the value for <region> from the bucket you created.
   If you use a custom value for <endpoint> (i.e., you don't use the default value), no value is needed for <region>.
6. If you are using IAM roles as your authentication method, set <useInstanceCredentials> to true as follows:
   <useInstanceCredentials>true</useInstanceCredentials>
7. If you are passing credentials explicitly in the provider chain as your authentication method, set values for <identity> and <credential> (as discussed in Prerequisites) as follows:
   <identity>access key</identity>
   <credential>secret key</credential>
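Putting the parameters above together, a binarystore.xml using the direct template might look like the following sketch (the bucket name, region, and credentials are placeholders; set either useInstanceCredentials or identity/credential, not both):

```xml
<config version="2">
    <chain template="s3-storage-v3-direct"/>
    <provider id="s3-storage-v3" type="s3-storage-v3">
        <bucketName>my-artifactory-bucket</bucketName>  <!-- placeholder: the bucket you created -->
        <path>filestore</path>                          <!-- default path inside the bucket -->
        <region>us-east-1</region>                      <!-- needed when using the default endpoint -->
        <identity>AKIAIOSFODNN7EXAMPLE</identity>       <!-- placeholder access key -->
        <credential>wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY</credential>  <!-- placeholder secret key -->
    </provider>
</config>
```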
8. Optional: Configure other parameters listed in the Amazon S3 Template Parameters table according to your system needs.
9. Save the template with the changes that you made.
10. Restart Artifactory. If you are using an HA cluster (i.e., multiple nodes), you need to restart all nodes in the cluster.
11. While Artifactory is restarting, check the Artifactory log to verify that Artifactory is connected to the S3 bucket. The Artifactory log is located at:
    $JFROG_HOME/artifactory/var/log/artifactory-service.log
Note: For an HA configuration, there is no need to change the binarystore.xml of each node. When you restart each node, the changes you made in binarystore.xml above are automatically propagated to the binarystore.xml of each node.
To configure Artifactory with S3 Storage for Kubernetes:
1. Create an S3 bucket.
2. Select the template you want to use:
   Direct (recommended): template name: s3-storage-v3-direct
   Eventual: template name: s3-storage-v3
   Cluster: template name: cluster-s3-storage-v3
   Sharding: This requires a custom setup. Use the sharding template shown in S3 Sharding and apply it as a custom binarystore as shown here.
3. Set the template name in values.yaml under artifactory.persistence.type.
4. Set the parameters mentioned in steps 4-7 of the previous (non-Kubernetes) section under artifactory.persistence.awsS3V3.
Note: Any parameter not set explicitly takes the default value defined in values.yaml.
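The relevant section of values.yaml might look like the following sketch (the bucket name and region are placeholders, and the key names under awsS3V3 are assumed to mirror the binarystore.xml parameters described in the non-Kubernetes section):

```yaml
artifactory:
  persistence:
    type: s3-storage-v3-direct   # template name selected in step 2
    awsS3V3:
      bucketName: my-artifactory-bucket   # placeholder: the bucket you created
      path: filestore                     # default path inside the bucket
      region: us-east-1                   # placeholder region
      useInstanceCredentials: true        # when authenticating via IAM roles
```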
5. Run one of the following commands to apply the changes, depending on the Helm chart you are using:
For Artifactory:
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml
For Artifactory HA:
helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha -f values.yaml
6. Check the Artifactory log to verify that Artifactory is connected to the S3 bucket.