JFrog Product Leaders Answer swampUP Attendees’ Burning Questions

UPDATE: JFrog’s Private Distribution Network capabilities are constantly being enhanced, and are currently undergoing major improvements. PDN is, as a result, not available for public sale at this time. If you’re interested in exploring PDN capabilities in your own company, please contact your sales representative or email service AT jfrog.com.


In a live, unscripted “ask me anything” session, a group of JFrog product leaders candidly answered questions from swampUP attendees, with topics ranging from newly-announced JFrog products and capabilities to current cybersecurity concerns that impact DevOps teams.

Because the lively discussion yielded so many great questions and answers, we’ve put together here a summary of the session.

Q: Can you provide an update on the new Xray reports feature?

A: Dganit Arnon, Product Manager: Yes, we recently released a new reporting module for JFrog Xray that lets you generate reports on vulnerabilities, licenses, and policy violations. You can create a report on any scope you want, with advanced filters, and export it to whatever format you need — CSV, JSON, or PDF — so you can slice and dice the data further or use it in a third-party system.

As with all JFrog features, everything is available via APIs so you can also integrate it into your automations. 
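
For illustration, here's a minimal sketch of driving the Xray Reports API from a script. The base URL, token, repository name, and report name are placeholders, and the exact payload and response fields should be checked against your version's Xray Reports REST API documentation.

```python
import requests

XRAY_URL = "https://myartifactory.example.com/xray"  # placeholder base URL
HEADERS = {
    "Authorization": "Bearer <access-token>",        # placeholder token
    "Content-Type": "application/json",
}

# Queue a vulnerabilities report scoped to one repository and filtered by severity
# (payload fields assumed per the Xray Reports REST API; verify for your version).
payload = {
    "name": "weekly-vuln-report",
    "resources": {"repositories": [{"name": "docker-local"}]},
    "filters": {"severities": ["High", "Critical"]},
}
resp = requests.post(f"{XRAY_URL}/api/v1/reports/vulnerabilities",
                     json=payload, headers=HEADERS)
resp.raise_for_status()
report_id = resp.json()["report_id"]  # field name assumed

# Once generation completes, export the report as CSV (or "json"/"pdf").
export = requests.get(
    f"{XRAY_URL}/api/v1/reports/export/{report_id}",
    params={"file_name": "weekly-vuln-report", "format": "csv"},
    headers=HEADERS,
)
export.raise_for_status()
with open("weekly-vuln-report.zip", "wb") as f:  # export is typically delivered as an archive
    f.write(export.content)
```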

Q: Can you please elaborate on the new cloud-native high availability features?

A: Evgeny Karasik, Senior Product Manager: Prior to JFrog Platform version 7.17.4, maintenance tasks like replication, garbage collection, backups, and exporting/importing required the upfront definition of what we call primary nodes, which you configure in the Artifactory system YAML. This worked, but it creates a challenge in cloud-native environments, where every node in the cluster should, by definition, be capable of running those tasks.

To address that, we developed a new capability that allows all nodes in the cluster to take an active part in executing those tasks. You just need to enable the functionality and dispatch the relevant tasks to the relevant nodes. That way the load is balanced properly, which also significantly streamlines upgrades of Artifactory and of the JFrog DevOps Platform in general.

By the way, this is enabled by default in new installations. Existing customers have a very intuitive way to migrate from the current state, which still uses primary nodes: after upgrading to the newest version, they just need to update the configuration to include this functionality.
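
As a rough sketch, the change comes down to a single setting in the Artifactory system YAML. The key names below (in particular taskAffinity under shared.node) are an assumption based on the feature description, so verify them against the release notes for your version:

```yaml
# system.yaml (sketch) -- key names assumed, check your Artifactory version's docs
shared:
  node:
    id: "artifactory-node-1"
    # Legacy HA: a single node marked as primary runs the maintenance tasks
    # primary: true
    # Cloud-native HA: any node in the cluster may pick up maintenance tasks
    # (replication, garbage collection, backups, import/export)
    taskAffinity: any
```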

Q: Can you provide more details about the Cold Artifact Storage feature?

A: Ori Yitzhaki, VP of Product: This very exciting feature is scheduled for next quarter. It'll let customers take some of their workloads from whatever storage they use today and shift them to cold storage services like AWS Glacier and others, which are more affordable. We did this because we heard that many customers want to retain the applications they've deployed to production for many years. The standard retention period is 7 years, but we've seen applications kept for 20 or 30 years. So we really wanted to support this use case.

Q: Can I re-use my distribution release bundles with Private Distribution Network?

A: Evgeny Karasik, Senior Product Manager: Yes, absolutely. The release bundle concept is unified across the entire platform. A release bundle is an immutable collection of artifacts, or a single software bill of materials (SBOM). You can use a release bundle for distribution to the edge nodes, as well as for what we call a cache warmup for the Private Distribution Network. The current release bundles are fully distributable through PDN, so you can safely extend their re-use.

Q: Can you recommend a DR (disaster recovery) strategy for distribution in Xray's self-hosted version?

A: Ori Yitzhaki, VP of Product: A feature presented in the swampUP keynote was Federated Repositories, which will mirror your entire Artifactory, end to end, and you can use it for DR purposes. We're still looking into the Xray part, but most of that will be configuration and policies, so that's something we might add to Mission Control as part of the global management for our platform.

Q: I already have a lot of CI automation using Jenkins. How can I start using JFrog Pipelines?

A: Manisha Sahasrabudhe, Director of Product: We recommend a phased approach for JFrog Pipelines, because if you have a lot of automation already on Jenkins or another CI/CD tool, you’re not going to migrate completely to JFrog Pipelines in a day. 

First, you should identify gaps in your automation, like a task that’s not automated in Jenkins, so you can start doing it with JFrog Pipelines. Or identify greenfield applications you’re building and automate those with JFrog Pipelines. Then as you go back to some of the old automation that you may want to change, migrate it piece by piece to JFrog Pipelines.

That being said, we do recognize that regardless of your migration path, you might still have a lot of automation on Jenkins. So we’ve built a native integration with Jenkins that’s first class, where you can have some of your workflows on Jenkins and you can seamlessly integrate them with JFrog Pipelines and view the entire end-to-end pipeline inside of the JFrog Platform. 

Q: How can Private Distribution Network be deployed?

A: Evgeny Karasik, Senior Product Manager: PDN is deployed on your premises. The distribution node is a self-contained executable binary that you can deploy or bake into your automation or the bootstrapping of your environment. We provide the runtime configuration for this executable, and with that you build the distribution topology, which lives entirely on your side. That's why we call this a private distribution network. You have full freedom to use it and build it according to your network requirements — for example, if you have special areas that need to be treated differently. It's completely flexible and can be baked into any type of environment — hybrid, virtualized, or bare-metal hardware.

Ori Yitzhaki, VP of Product: Also, we’re launching several beta programs, including one for the Artifactory Cold Storage feature and another for PDN.

Q: We see strong integration with SCA (software composition analysis). What about DAST (Dynamic Application Security Testing) and SAST (Static Application Security Testing) tools?

A: Dganit Arnon, Product Manager: We’re looking into both — DAST and SAST integrations — but we don’t have anything concrete to announce at this point. Currently, we provide a very “shift left” approach, starting with scanning your builds as early as possible. We just announced that we will soon scan source code dependencies.

Q: Our team is new to Artifactory and uses Jenkins. What should we start with if we want to use JFrog Pipelines?

A: Manisha Sahasrabudhe, Director of Product: If the team is still in the process of adopting CI/CD and you’re using Jenkins, there’s a couple of paths you can take. 

One is to start using Jenkins and JFrog Pipelines side by side. We have a Jenkins integration, so part of your workflow can be on Jenkins and part on JFrog Pipelines, and you’ll see it all visualized on JFrog Pipelines in a seamless manner. 

The second is that we’d love to do a demo for your team, and maybe even a POC (proof of concept), to explain all the different ways your organization can use JFrog Pipelines, especially if you’re already using Artifactory, Xray, and Distribution. We have many bells and whistles in JFrog Pipelines that make it very easy to create streamlined workflows with just a few YAML keys instead of scripting everything out. That’s one of the major benefits you’ll get.

Q: Regarding Artifactory performance testing: I’m using JMeter to test the performance of Artifactory repos via JFrog CLI, and uploading a 10GB file takes about 8 minutes. Will there be any improvements in future versions of Artifactory?

A: Evgeny Karasik, Senior Product Manager: We made significant enhancements to Artifactory’s performance in its latest release, not just to upload/download speeds but also to permissions management and access management in general. Specifically with regard to upload/download performance, we looked for the bottleneck in the entire system and found it in the database. The performance of the PostgreSQL DB was improved dramatically in the latest release.

Q: Please explain the benefits of the Signed Pipelines feature.

A: Manisha Sahasrabudhe, Director of Product: This brand new feature is intended to create trust in your software delivery pipelines, so you can ensure that any artifact that progresses through your pipeline — for example, one that gets deployed or promoted — was created by the pipeline and wasn’t tampered with. Creating tamper-proof pipelines is a major security goal of this feature.

For example, you could have a development pipeline that creates a build or an artifact, and then a developer working on their local machine tries to fix an issue and overwrites the artifact. Eventually the next pipeline takes this artifact and promotes it. We want to detect these scenarios and empower organizations to stop the pipeline if this happens. In my example, the developer’s intentions were benign, but Signed Pipelines will also help you block pipeline actions when someone tampers with malicious intent. It’s non-repudiation through the use of metadata.

The second main benefit of Signed Pipelines is the traceability. We create a pipeInfo.json which has the entire trace for all the runs, and everything that happened to an artifact from the time the code was committed to the time the code was deployed. So you have complete traceability for all your artifacts. You know where they’ve been, who’s worked on them, where they were deployed, all the tests and so on and so forth.

Q: Is Xray’s dependency analysis purely checksum-based for matching packages? If so, are there any plans for deeper package analysis that can potentially handle shaded or UberJars?

A: Dganit Arnon, Product Manager: We already support scanning of UberJars — a JAR file that contains many other JAR files, each of which may in turn contain further JAR files. Xray matches packages in many different ways, including checksums and component IDs. For each package type, we have a specific way to detect it and recursively open it layer by layer, and we show a complete dependency tree for each UberJar.

Q: Which third-party tools can I use to monitor the JFrog Platform?

A: Loreli Cadapan, Senior Director of Product: We’ve done extensive work to make sure our customers are able to monitor the JFrog Platform. In addition to the native integration with Sumo Logic, we’ve built integrations with other observability tools like Splunk, Elastic, and Datadog. These integrations leverage FluentD, which allows us to support all of these vendors agnostically. They enable customers to consume and visualize logs from the JFrog Platform and get insights on the health of the platform, as well as key operating insights around HTTP status and error codes, accepted or declined logins, data transfer, frequently downloaded artifacts, requests, most active repos, and more.

Q: If Jenkins is working for you, are there any advantages to switching to JFrog Pipelines?

A: Manisha Sahasrabudhe, Director of Product: Yes, there are many advantages to using JFrog Pipelines. 

The first and most important one is that it’s completely integrated with all other JFrog products and with the JFrog Platform. So if you’re using Xray, or Distribution, or Artifactory, you can create pipelines across all of these products with very little custom scripting. The goal of JFrog Pipelines is to allow users to build pipelines from scratch without custom scripting. We have pre-packaged steps called Native Steps that allow you to do a lot of common operations like pulling from Artifactory, building Docker images, distributing, deploying, and more. You can create a pipeline in just a few minutes using Native Steps. Jenkins isn’t designed to make life easy for you if you’re using Artifactory. It’s a more generic kind of tool.
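
To make that concrete, here's a minimal sketch of a pipeline built entirely from Native Steps. The integration, resource, registry, and repository names (my_github, my_artifactory, docker-local, and so on) are placeholders, and the exact configuration keys may vary between Pipelines versions, so treat this as illustrative rather than copy-paste ready:

```yaml
# pipelines.yml (sketch) -- placeholder names; verify keys against your Pipelines version
resources:
  - name: app_repo
    type: GitRepo
    configuration:
      gitProvider: my_github          # Git integration configured in the JFrog Platform
      path: my-org/my-app

pipelines:
  - name: demo_docker_pipeline
    steps:
      - name: build_image
        type: DockerBuild             # Native Step: builds the image, no custom scripting
        configuration:
          dockerFileLocation: .
          dockerFileName: Dockerfile
          dockerImageName: myco.jfrog.io/docker-local/my-app
          dockerImageTag: ${run_number}
          inputResources:
            - name: app_repo
          integrations:
            - name: my_artifactory

      - name: push_image
        type: DockerPush              # Native Step: pushes to an Artifactory Docker repo
        configuration:
          targetRepository: docker-local
          integrations:
            - name: my_artifactory
          inputSteps:
            - name: build_image
```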

The second major benefit is scale. JFrog Pipelines can scale to thousands of concurrent builds and thousands of concurrent users, whereas we’ve heard from many customers that with Jenkins you can get into “plug-in hell.” Because of Jenkins’ plug-in architecture, upgrading a specific version of a plug-in can create conflicts with other versions of that plug-in, or with other plug-ins. So what happens is that different teams end up running their own Jenkins instances. We’ve also heard stories about Jenkins having trouble handling more than 100 or 150 concurrent builds.

With JFrog Pipelines, you won’t have these problems. It’s built for scale. 

There are many other benefits as well. For example, we have this concept of “resources” that helps you build pipelines that span different teams, so each team can have a Dev pipeline, a SecOps pipeline, a Production pipeline, and so on, and all of those can be systematically composed into what’s called a “pipeline of pipelines” using those immutable, versioned resources.

We also focus on re-usability, so we have extensions and templates which help you to define your own custom pre-packaged steps or resources, and create complete pipeline templates.

Our goal is to help you create pipelines very easily, and integrate super tightly with the JFrog Platform. We also have security features that are superior to Jenkins’ features. 

I could keep going on about all the other benefits and advantages of using JFrog Pipelines over Jenkins.

Ori Yitzhaki, VP of Product: An easy way for you to try JFrog Pipelines is with the JFrog Free Tier subscription, which also includes Artifactory, as well as some of the Xray functionality. It’s completely free to use. You don’t even need to provide a credit card. You just spin up an environment and start using it.

Q: From which version is the Cold Artifact Storage feature available? Does this mean Artifactory now supports multiple storage mounts — one for the active artifacts, and one for the cold storage?

A: Ori Yitzhaki, VP of Product: The version is currently unknown because we’re still developing this feature, but it will be part of the 7.x code base. There are now many reasons to upgrade: Cold Storage, Distribution in general, PDN, Federated Repositories, Projects, and more.

In terms of the architecture, we’re going to support more than one binary provider, but the way it’s going to work is with another instance, which is headless — your developers and admins will not need to log in to it. The amount of data you aggregate in your regular storage has an impact on your ongoing performance, and we wanted to make sure you’ll get performance benefits by moving content to cold storage.

Q: In addition to Xray, we’re also using another tool — WhiteSource — for open source dependency scanning. WhiteSource has documentation saying Xray can be hooked up to our WhiteSource instances. Is this recommended? I’m under the impression that Xray and WhiteSource compete against each other?

A: Dganit Arnon, Product Manager: I’m not sure which documentation you’re referring to, but there’s currently no integration between WhiteSource and Xray, and we’re indeed competing. The only way to integrate Xray with WhiteSource is through our custom integration capability, which lets you integrate to any source of vulnerability data.

Ori Yitzhaki, VP of Product: We used to have an integration with WhiteSource but due to competitive situations and other reasons it was hard to maintain, so we decided to deprecate that integration.

Q: Can you elaborate on the Slack-JFrog integration?

A: Loreli Cadapan, Senior Director of Product: For now, the Slack integration is with Artifactory-related events and Xray-related events.

It’ll notify you on Slack when an artifact has been uploaded, moved, copied, or deleted, and on the build side, when a build has been promoted — and we’ll be adding more functionality. On the Xray side, you’ll get Slack notifications about violations once you’ve set up your Xray policies and watches.
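
As a reference point, policies and watches can be set up through the Xray REST API as well as the UI. The sketch below is illustrative only — the base URL and token are placeholders, and the payload fields follow the Xray v2 policies/watches API as we recall it, so double-check them against your version before relying on them:

```python
import requests

XRAY_URL = "https://myartifactory.example.com/xray"  # placeholder base URL
HEADERS = {
    "Authorization": "Bearer <access-token>",        # placeholder token
    "Content-Type": "application/json",
}

# A security policy that flags High (and above) vulnerabilities.
policy = {
    "name": "high-severity-policy",
    "type": "security",
    "rules": [{
        "name": "high-and-above",
        "priority": 1,
        "criteria": {"min_severity": "High"},
        "actions": {"block_download": {"active": False, "unscanned": False}},
    }],
}
requests.post(f"{XRAY_URL}/api/v2/policies", json=policy, headers=HEADERS).raise_for_status()

# A watch that applies the policy to all repositories; violations raised by this
# watch are what surface as Slack notifications once the integration is connected.
watch = {
    "general_data": {"name": "all-repos-watch", "active": True},
    "project_resources": {"resources": [{"type": "all-repos"}]},
    "assigned_policies": [{"name": "high-severity-policy", "type": "security"}],
}
requests.post(f"{XRAY_URL}/api/v2/watches", json=watch, headers=HEADERS).raise_for_status()
```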

This is built out of the box so you don’t need to work with our webhooks or any of that. It’s all integrated for you. This is currently available for our SaaS instances. We’re looking to get this integrated with the self-managed platform.  

Both the Slack and Microsoft Teams integrations are bi-directional, so you’ll be able to take actions as well directly from these collaboration tools, such as ignoring an Xray rule.

We are seeking beta customers to give us feedback on the Slack and Microsoft Teams integrations before we release them.

Q: What’s the JFrog response to the recent dependency confusion attack?

A: Evgeny Karasik, Senior Product Manager: Dependency confusion attacks aren’t new. They happen when an attacker identifies the external packages from public registries that you’re using in your code, and uploads clones of those packages that contain malware. So when you inadvertently fetch those newly-uploaded clones from the public repository, your code gets infected.

For many years we’ve had a solution for these situations via the advanced configurations of the repository. For example, you can use exclude patterns or scoped packages.

However, we recently took another step to simplify these configurations for our customers and introduced a new repository setting called Priority Resolution, which you can turn on or off at the repository level to mark the trusted repositories that you allow your developers to fetch artifacts from. When results are merged — for example under a virtual repository — only results from the approved repositories are combined. That way you eliminate artifacts fetched from unapproved repositories and avoid falling victim to a dependency confusion attack.
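
For illustration, here's a minimal sketch of turning this setting on programmatically via the repository configuration API. The repository name, URL, and token are placeholders, and the priorityResolution field name is an assumption based on the feature name — verify it against your Artifactory version's repository configuration schema:

```python
import requests

ARTIFACTORY_URL = "https://myartifactory.example.com/artifactory"  # placeholder base URL
HEADERS = {
    "Authorization": "Bearer <access-token>",                       # placeholder token
    "Content-Type": "application/json",
}

# Mark a trusted internal repository for priority resolution, so that when a
# virtual repository merges results, packages from this repository take
# precedence over same-named packages coming from public remotes.
# NOTE: "priorityResolution" is the assumed JSON field for this setting.
resp = requests.post(
    f"{ARTIFACTORY_URL}/api/repositories/npm-internal-local",  # update existing repo config
    json={"priorityResolution": True},
    headers=HEADERS,
)
resp.raise_for_status()
```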

Q: What’s the recommended path to migrate to JFrog Projects?

A: Evgeny Karasik, Senior Product Manager: JFrog Projects provides a unique capability because it’s not just about adding functionality to the JFrog Platform but also about impacting your working methodology. 

So first, you should focus on the methodology, and identify what’s the representation of the project in your organization. Is it an application development project? Is it a release campaign? 

Then you can start identifying what kinds of assets you need to maintain as part of your project. Start with the repositories, and identify how you’re going to segregate those repositories, or alternatively how to share those repositories across different projects. For example, if you have a public registry that’s shared across different projects through a virtual or remote repository, maybe you’d like to keep it as a shared repository for all. If you’re using JFrog Pipelines, you need to do the same for its resources. 

Another area to focus on is roles. A role is an abstraction we introduced on top of the current permissions model. You need to have a clear methodological understanding of what kinds of  roles there are in your organization, and what users or groups you need to apply to those roles.

At this point, you can start onboarding your projects. It’s a good practice to start with pilot projects or proof-of-concept projects, before moving on to more important projects.

Q: How did you build the excellent JFrog-Datadog dashboards you showed during the conference?

A: Loreli Cadapan, Senior Director of Product: We built this integration out of the box in partnership with Datadog using their SIEM capability. This allows customers to consume, analyze and visualize Xray data using out-of-the-box dashboards. As and when Xray finds a license violation or a security vulnerability, it is reported and reflected in the dashboards within these tools.  

With these dashboards you can see an overall summary of the count of violations found in the software you’re running. An aggregated count of all license violations and security vulnerabilities is provided to the user, with deeper policy- and rule-based segmentation. Customers can also track the count of violations by type or severity, and get insights into the most frequently occurring vulnerabilities and the most impacted artifacts and components.

The important thing to note here is that we’ve actually transformed this data into a format that is accepted by these SIEM tools such that customers can weave them into their internal workflows that notify the relevant security teams if and when a critical vulnerability is reported by Xray.

To participate in the beta programs for Cold Artifact Storage and for JFrog’s Slack and Microsoft Teams integrations, please contact your JFrog representative.

To participate in the beta program for Private Distribution Network, click here.