Artifactory Command Line Interface (CLI) – Pure and Simple

Since writing this post, Artifactory CLI has evolved into JFrog CLI, which works with both Artifactory and Bintray. I encourage you to read the follow-up post about the power of JFrog CLI when working with Artifactory, and then the one about the great things you can do with JFrog CLI on Bintray.

Feel free to download JFrog CLI.


We are constantly looking for new ways to make things simpler, so we’re glad to introduce the Artifactory CLI! Artifactory CLI is a new, compact and smart client that provides a simple interface to automate access to Artifactory (through the REST API), along with many convenient options. If you are a developer or DevOps engineer, you should definitely take advantage of it and embed CLI commands into your scripts to make automation easier, more readable and maintainable.
You are going to love it! Here’s why:

Easy installation

[Artifactory CLI is no longer available for download. Instead, please visit the JFrog CLI Download Page]

Installing Artifactory CLI is quick and simple. Cloud-based CI scripts can download the CLI executable directly from Bintray and invoke it automatically. You can also get the Artifactory CLI sources from GitHub and build them with Go (the language it is written in).

Easy usage

You only need to provide your Artifactory credentials once. Artifactory CLI saves them for reuse so you don’t have to provide them with every command.
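As an illustration, the one-time setup might look like the following. The command and flag names below are assumptions for the sake of the example, so check the CLI's built-in help for the exact syntax in your version:

```shell
# Hypothetical one-time configuration -- flag names may differ in your CLI version
art config --url=https://mycompany.example/artifactory --user=admin --password=secret

# Subsequent commands reuse the stored credentials, no flags needed
art upload "build/(.*)" swampRepo/ --regexp=true
```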

Parallel uploads and downloads

Artifactory CLI makes your automated builds run faster than before. Artifacts can be uploaded and downloaded concurrently by a (configurable) number of threads. For big artifacts, Artifactory CLI even lets you define a number of chunks into which the files should be split for parallel download.
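A sketch of what tuning the concurrency might look like; the flag names here are assumptions, so consult the CLI help for the exact options your version supports:

```shell
# Hypothetical flags -- verify the exact names with the CLI's help output
# Upload with 8 concurrent threads
art upload "build/(.*)" swampRepo/ --regexp=true --threads=8

# Download a large artifact in 4 parallel chunks
art download swampRepo/bigArtifact.zip ./ --split-count=4
```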

Checksum optimization

Artifactory CLI calculates checksums for both uploads and downloads. When uploading an artifact, Artifactory CLI first queries Artifactory with the artifact’s checksum. If Artifactory finds that it already has a file with the same checksum in the upload path, Artifactory CLI will skip sending the artifact. This prevents redundant uploads and significantly speeds up your overall upload time! Artifactory CLI does a similar check for download requests, verifying that the requested file does not already exist locally. Reducing overall upload and download times helps reduce your build times even further.
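The checksum sent ahead of an upload is an ordinary SHA-1 digest of the file. A minimal sketch of that first step (the commented-out curl line is only a rough illustration of the checksum-deploy request, not the CLI's exact call):

```shell
# Compute the SHA-1 digest that is sent ahead of the upload
f=$(mktemp)
printf 'hello' > "$f"
sha1=$(sha1sum "$f" | awk '{print $1}')
echo "$sha1"
# The client then asks Artifactory to deploy by checksum first, roughly:
#   curl -X PUT -H "X-Checksum-Deploy:true" -H "X-Checksum-Sha1:$sha1" <repo-url>
rm -f "$f"
```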

Support for wildcards and regular expressions

The CLI supports wildcards and regular expressions, so you can precisely select the artifacts you wish to upload or download.
For example, to upload all files whose names include “Frog” from my local disk to “swampRepo” in Artifactory in one command, I can run:

> art upload "(.*)Frog.*\.(.*)" "swampRepo/{2}/" --regexp=true

This will also create folders under “swampRepo” based on the files’ extensions. For example, if you upload jar files, a “jar” folder will be created and all the jar files will be uploaded to that location.
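To see how the capture groups map to the target path, the substitution can be emulated with sed; the file name here is just an example:

```shell
# Emulate the CLI's regexp placeholders: group 1 = name prefix, group 2 = extension
name="treeFrog.jar"
target=$(echo "$name" | sed -E 's#(.*)Frog.*\.(.*)#swampRepo/\2/#')
echo "$target"   # swampRepo/jar/
```

The second capture group grabs everything after the last dot, which is why jar files land under a “jar” folder.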

Preview the uploads

You can also use the dry-run option. This lets you preview which artifacts would be uploaded, and to which location, without actually sending anything.
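For instance, a preview run might look like this; the exact flag syntax is an assumption, so check the CLI help:

```shell
# Hypothetical invocation -- list what would be uploaded without sending anything
art upload "(.*)Frog.*\.(.*)" "swampRepo/{2}/" --regexp=true --dry-run=true
```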

The power of CLI – let’s see another example

This simple script recursively uploads the content of a whole folder to an Artifactory local repository called “swampRepo”. For each file in the folder, it adds a header containing the file’s calculated checksum to the request to achieve a checksum deploy (i.e., skipping the file if it already exists in Artifactory). You can find the full bash script in our GitHub examples repository.

To run this script I need to pass it the path to the folder containing the files I wish to upload:

> myFolderPath

Here are the main parts of the script:

First, loop over the files in the folder and calculate the checksum for each file:

# Upload by checksum all files from the source dir to the target repo
find "$dir" -type f | while read -r f; do
sha1=$(sha1sum "$f" | awk '{print $1}')
rel="$(echo "$f" | sed -e "s#$dir##" -e "s#^/##")"
printf "\n\nUploading '%s' (cs=%s) to '%s'" "$f" "$sha1" "${repo_url}/${tgt_repo}/${rel}"

Create a curl PUT command with a header to upload each artifact – trying checksum upload first:

status=$(curl -k -u $user:$pass -X PUT -H "X-Checksum-Deploy:true" \
  -H "X-Checksum-Sha1:$sha1" --write-out %{http_code} --silent --output /dev/null \
  "${repo_url}/${tgt_repo}/${rel}")
echo "status=$status"

Finally, send the file content if the checksum deploy could not be accomplished (Artifactory returns 404 when no file with that checksum exists):

# No checksum found - deploy + content
[ ${status} -eq 404 ] && {
  curl -k -u $user:$pass -H "X-Checksum-Sha1:$sha1" -T "$f" \
    "${repo_url}/${tgt_repo}/${rel}"
}

Now, let’s see how much easier it is to do this with the Artifactory CLI:

Assuming my credentials are already stored with the CLI, all I need to do is run a single command:

> art upload "myFolderPath/(.*)" swampRepo/ --regexp=true --recursive=true

This does everything the bash script does, a bit faster, and in a single line!
Use Artifactory CLI in your scripts and make your automation smarter, faster and more readable.