Learn the latest empowering features of the JFrog Platform announced at swampUp. We’ll show how new and innovative improvements such as Projects, Federated Repositories, Signed Pipelines, and Private Distribution Networks enable binary lifecycle management at scale through security, collaboration, and observability. We’ll also explore the latest ecosystem integrations for PagerDuty, Slack, MS Teams, Splunk, and DataDog that bring greater visibility to your SDLC.
Additional resources:
- Learn more about the Industry’s First Private Distribution Network
- What’s New from JFrog: Binary Lifecycle Management at Scale
- JFrog Product Leaders Answer your Burning Questions
- Summary of SwampUp 2021 announcements
- Catch all the talks from SwampUp 2021
Video Transcript
Hello everyone and thank you for joining JFrog's What's New webinar, where we'll be covering the latest announcements from our recent swampUp event. My name is David Landsberg and I'm the Director of Product Management here at JFrog. Before we dive into the What's New announcements, I want to reiterate JFrog's mission to power a world of continuously updated, versionless software, which we have dubbed Liquid Software.
The first announcement was around addressing the challenge of a shared JFrog Platform across siloed teams inside your company. And to address this challenge, we have announced JFrog Projects. So what are projects and why do I need them? What we are seeing in recent years is that the JFrog Platform is usually managed by a small DevOps team who is responsible for various services such as source control, binary management, CI/CD and so on. We like to call those teams super DevOps. One of the challenges those teams are facing today is that they need to handle a lot of the work related to onboarding new teams, for example, creating repositories and granting permissions. In addition, whenever a team or a project needs a new resource, or to onboard a new member, they also need the assistance of this super DevOps team. And this can overload them with work and cause them to really become a bottleneck. Projects are a new platform capability that solves these challenges.
A project is a management entity for hosting your team resources and associating users or groups as members with specific access rights to just those resources. Resources are scoped at the level of a project. A project has a name and a project key. Resource names are prefixed with the project key in order to distinguish them, and specific resources can be shared between projects. As you can see in the example here, the repository in project A is a local Docker repo and it is shared with project B. You can manage storage quotas at the level of a project, and you can still have global resources that are shared across all projects.
We have also simplified our permissions model by introducing roles. Roles are used for granting permissions to users and groups. A role is associated with a list of actions that are permitted for users with that role. Project members can be assigned more than one role, as you can see here. We have global roles that are defined in the scope of the entire JFrog Platform. The JFrog Platform comes with a few of them out of the box, but you can define your own as well. Those can be used for defining roles which are more general and can be used by many projects, for example, a viewer role which is only allowed to read artifacts. And we have project roles, which are defined in the scope of a project; those can be used for defining roles which are project specific.
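To make the onboarding flow above concrete, here is a minimal sketch of creating a project and assigning a member a role through the platform REST API. This is an illustrative sketch, not a verified reference: the endpoint paths, payload fields, and the "Viewer" role name are assumptions that should be checked against the JFrog Projects REST API documentation.

```python
import requests

BASE = "https://myorg.jfrog.io"  # hypothetical platform URL
HEADERS = {"Authorization": "Bearer <admin-token>"}

# Create a project; its resources will be prefixed with the project key,
# and an optional storage quota can be set at the project level.
project = {
    "display_name": "Team Alpha",
    "project_key": "alpha",
    "storage_quota_bytes": 10 * 1024**3,
}
requests.post(f"{BASE}/access/api/v1/projects",
              json=project, headers=HEADERS).raise_for_status()

# Add a member with an out-of-the-box global role (role name assumed:
# "Viewer", i.e. read-only access to the project's resources).
member = {"name": "jane", "roles": ["Viewer"]}
requests.put(f"{BASE}/access/api/v1/projects/alpha/users/jane",
             json=member, headers=HEADERS).raise_for_status()
```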
The next announcement is around addressing the challenge of staying in control while scaling. Today, you need to keep artifacts in sync between locations, on-prem and cloud, and between multiple data centers. You may have one-to-one topologies, or you might have one-to-many. And managing the syncs across a growing number of global sites, a growing number of repositories, and a growing number of artifacts has become a real challenge. So we have announced federated repositories, which is really a multidirectional replication that's simple to configure and maintain. Local repositories in different JFrog Platform deployments are joined under a federation. The configurations themselves are synced automatically as well. Changes are distributed as rapidly as possible, and we are using a new underlying replication framework we call mirroring. This really saves a lot of time and complexity. With the increasing pace of releases, we find ourselves storing many old versions that are not necessarily in use anymore. This is even more relevant for enterprises under regulation that must store their releases for compliance reasons, sometimes for many years. Storage can be expensive, and we have to be as agile in storage management as we are with the other resources we manage.
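As a rough illustration of how a federation is wired up, here is a sketch that creates a federated repository through the Artifactory repository configuration API, listing the member repositories on other deployments. The URLs are placeholders, and the exact configuration fields should be verified against the repository configuration documentation.

```python
import requests

HEADERS = {"Authorization": "Bearer <admin-token>"}

# A federated repository lists member repositories on other deployments;
# artifacts and the configuration itself are then mirrored across members.
config = {
    "rclass": "federated",
    "packageType": "docker",
    "members": [
        {"url": "https://us.myorg.jfrog.io/artifactory/docker-local", "enabled": True},
        {"url": "https://eu.myorg.jfrog.io/artifactory/docker-local", "enabled": True},
    ],
}
resp = requests.put(
    "https://us.myorg.jfrog.io/artifactory/api/repositories/docker-local",
    json=config,
    headers=HEADERS,
)
resp.raise_for_status()
```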
And to address the challenge of data retention and growing storage costs, we have announced our cold storage solution. Cold storage allows you to save costs by archiving unused artifacts. We will allow storing archived artifacts in a different type of storage, so you can use cheaper storage solutions and reduce your total cost of ownership. You'll be able to define archiving policies for determining which artifacts will be archived, when, and where. Archiving policies will support the use of artifact metadata in order to define smart rules for archiving. And finally, we will provide the means for retrieving archived artifacts together with their associated metadata. If we look at this from the artifact lifecycle's point of view, we save the artifact in our normal filestore. Based on the policy we defined, we can archive the artifact to a long-term storage environment. The artifact can be cleaned up from the archive storage based on policies that you have defined as well, or retrieved back to the active filestore.
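Since cold storage was only announced here and is still in beta, any concrete API is speculative; the sketch below is purely hypothetical and just illustrates the shape a metadata-driven archiving policy, as described above, might take.

```python
# Hypothetical policy structure: none of these field names come from a
# published JFrog API; they only mirror the capabilities described above.
archive_policy = {
    "name": "archive-old-releases",
    "criteria": {
        "repos": ["releases-local"],
        "last_downloaded_before": "2y",                   # unused for two years
        "properties": {"release.status": "superseded"},   # metadata-driven rule
    },
    "destination": "cold-storage",   # cheaper storage tier
    "retention": "7y",               # compliance-driven retention
    "on_expiry": "delete",           # or retrieve back to the active filestore
}
```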
Our next announcement is our new JFrog signed pipelines, to ensure tamper-proof builds. This is an industry-first innovation where signed pipelines support zero trust, meaning the pipelines will only take action on builds and artifacts that have resulted from signed pipelines. This helps avoid any tampering with build artifacts, whether accidental or malicious, and basically blocks scenarios where, for example, a developer decides to directly update a file in one of the environments without it going through source control or being the output of a pipeline, and thus forgets to store it in the source control system. Each action is stored in a blockchain-type ledger and is signed as well, allowing for complete audit and visibility. Our next announcement is around our ability to distribute software at scale. You might already be familiar with our distribution capabilities via edge servers, which allow for sending artifacts to remote locations with persistency at the edge. It supports remote cache pre-population, it supports security, and you can validate delivery by signing your release bundle and report back auditing and tracking.
But what we have just announced at swampUp is an industry-first private distribution network. PDN combines two industry-known network accelerators, CDN and a peer-to-peer protocol, allowing for high-scale concurrent downloads at a much faster rate. And now you can create your own custom distribution topology and bring the software artifacts as close to the consumption point as physically possible. The PDN consists of distribution groups that contain as many lightweight distribution nodes as you choose to deploy, and they can run on any infrastructure, whether cloud, on-premises, or IoT, you decide. And you can now pre-populate the cache with release bundles very close to the remote clients that consume those artifacts, significantly improving the distribution performance.
Let's see a quick demo of PDN in action. Before we start, I want to just explain the topology of what we are about to see. In the demo we will have a central Artifactory instance, and it's connected to three edge servers, one in the US, one in Europe and one in Asia. Each Artifactory edge server has one or more distribution groups extending our distribution network. And each group contains many more distribution nodes. So let's start the demo. The first thing that we want to do is generate a release bundle. We will do this via API, via the command line. In this case we're going to call our release bundle swampup2021, and it's going to be version 303. We have a query to capture the items as part of the release bundle. You can see that the JSON that's generated has different layers that are part of the release bundle. There's a manifest file here. We have the checksum, and we'll use the command line to submit it into Artifactory.
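The demo drives this step through the JFrog CLI; for reference, a minimal sketch of the same create-and-distribute flow against the Distribution REST API might look like the following. The bundle name and version come from the demo, while the AQL query, site rule, and service URL are placeholders to verify against the Distribution API documentation.

```python
import requests

BASE = "https://myorg.jfrog.io/distribution"  # hypothetical platform URL
HEADERS = {"Authorization": "Bearer <token>"}

# 1. Create and sign the release bundle from an AQL query over Artifactory.
bundle = {
    "name": "swampup2021",
    "version": "303",
    "sign_immediately": True,
    "spec": {"queries": [{"aql": 'items.find({"repo": "docker-local"})'}]},
}
requests.post(f"{BASE}/api/v2/release_bundle",
              json=bundle, headers=HEADERS).raise_for_status()

# 2. Distribute it to the selected distribution targets (here: everywhere).
rules = {"distribution_rules": [{"site_name": "*"}]}
requests.post(f"{BASE}/api/v2/distribution/swampup2021/303",
              json=rules, headers=HEADERS).raise_for_status()
```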
Here you can see the new release bundle version 303. Note that the status is not distributed yet. I'm not going to distribute just yet; I want to show you a client pull request first. So imagine we are in London, as close to one of the distribution groups as possible, and we're going to do a pull request for that new release bundle that we just submitted into Artifactory. We'll run the pull request and you can see the file is not found; I cannot pull the Docker image. We have not distributed the release bundle yet. So let's go back to our Artifactory UI and choose distribute. What you see here is a tree of all the groups and edges that we have in our topology. We're going to choose the distribution groups we want to distribute to. Let's just unmark the California one. Overall in this case, that's seven groups and around 600 nodes. As we said, this can scale to thousands and thousands of nodes. Once we press the distribute button, the release bundle automatically propagates throughout the different distribution groups through the hierarchy, with concurrent downloads all the way down to the different edge groups.
You can see it propagating in our topology tree. And it looks like it finished; the status shows completed. What we will do now is run that pull request from our London client once again. And you can see the download was very, very fast, because we are now located much, much closer to a distribution group that already has the release bundle available next to it. So just again, to summarize and highlight the key benefits of the private distribution network. Speed, for fast deployments across complex topologies. Trust, we ensure security with full role-based access and signing of release bundles. Scale, we support a high volume of concurrent downloads using proprietary peer-to-peer protocols. And simplicity, with a low-footprint node that is part of our DevOps platform. One more important item is security. In the world of enterprise software, one of the main challenges we will always have is security. One of the main concerns of companies these days is supply chain attacks. One of the great examples of this was the SolarWinds supply chain attack, where an attack on a single vendor ended up affecting 18,000 customers.
And Xray is our answer to this challenge. Xray detects vulnerabilities as early as possible, has a very powerful vulnerability database, and is a core part of the DevOps platform. And this quarter, as part of the announcements, we introduced two new features as part of our shift-left philosophy. These features are very much developer centric and allow you to scan both source code dependencies and binaries earlier in the process, to bring security to the hands of the developer. Source code dependency scanning is the first one. This is the first phase of the end goal to have full Git repository integration. This is a new CLI scan command that allows you to scan any source code on your file system and get the full dependency tree with all the Xray findings.
And most importantly, understand whether it complies with Xray security and compliance policies. This will be available via CLI and can be integrated into your pipelines. The second is on-demand binary scan. It's another new CLI scan command that allows you to point to any binary anywhere on your file system or on your CI server, really anywhere you want, and scan it against Xray policies. You can scan binaries before, and as a condition to, uploading them into Artifactory. And this is it for me, and now I will pass it over to my colleague Deep to go over the new integration announcements.
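As a sketch of how these two commands could be wired into a pipeline script, consider the following. The CLI verbs shown ("audit" for source dependencies, "scan" for binaries) match later JFrog CLI naming but may differ by CLI version, so treat the exact commands and flags as assumptions to verify.

```python
import subprocess

# Scan the source-code dependencies of the project in the current directory
# against Xray policies and print the full dependency tree with findings.
subprocess.run(["jf", "audit"], check=True)

# On-demand scan of a binary anywhere on disk, used here as a gate: only
# upload the artifact if the scan reports no policy violations (exit code 0).
scan = subprocess.run(["jf", "scan", "dist/app.tar.gz"])
if scan.returncode == 0:
    subprocess.run(
        ["jf", "rt", "upload", "dist/app.tar.gz", "releases-local/"],
        check=True,
    )
```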
Thanks David. All right, so I'm here to tell you a little bit more about what's new with our partner integrations. My name is Deep Datta and I am a Product Manager on the partner team at JFrog. So this year, and especially the last two quarters, our goal has been to better understand how our JFrog customers want to use our products with other tools, so that they can do things like monitoring, observability, collaboration, and IT Ops or IT service management. We really have been focusing on how we can help our users better utilize metadata around their artifacts, and what's happening to those artifacts, to create a unified end-to-end observability framework. That framework allows both developers and Ops teams to better manage the release of software through various stages, to get better insights into the performance of applications in multi-cloud environments, and to troubleshoot issues effectively. It lets them collaborate in a much more fluid and frictionless way through things like chat applications, and respond to incidents and manage other kinds of performance and operational issues.
So the integrations that I'm going to cover today do fall into these three buckets. The first is observability. We're really doubling our efforts here. What we want to do is provide developers a much smarter view into everything happening within the JFrog platform, so that they can look at metrics and get advanced monitoring capabilities. In the same vein, we want to make incidents and information from the JFrog platform available in collaboration tools like chat applications. We're making sure that not only are developers and Ops teams notified of things like a build status change, new artifact properties being added, or maybe a vulnerability, but we also want to allow you to collaborate right within those chat tools themselves, and even provide bidirectional actions that you can take that can then change or affect something within a repo or within your JFrog platform instance.
The third theme that I'm going to cover today is IT operations, or IT service management: making sure that incidents and actions are routed to the right places; that any incidents that occur that might require approval or input from various stakeholders can be routed, and decisions can be made effectively in an automated way, even to the point of resolving issues automatically and then doing something like closing a Jira ticket. So here are the integrations I'm going to cover today: Splunk, Datadog, PagerDuty, Jira, Slack and Microsoft Teams. We also have a few other integrations that we've been working on, but those aren't going to be covered in this specific webinar. So first what I want to do is really lay the groundwork and tell you why we are focusing on observability.
So a lot of customers have asked us about tools that can help them monitor the performance of applications, and the artifacts and builds that are happening inside their Artifactory repos. So as a first step, we want to make it really easy for third-party tools to get a detailed view of what's happening in the JFrog platform. We're going to do this by integrating our open metrics API, and I'll cover that on the next slide. But the metrics API has a bunch of data that can then be fed into prebuilt dashboards that contain charts on system performance and charts on operational insights, covering what's happening in Artifactory and Xray. And these charts are available directly within the tool itself, so the dashboards can be accessed in Datadog and in Splunk. And this is all information coming from the JFrog platform through our metrics API.
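For a sense of what these integrations consume, here is a sketch of polling the open metrics endpoint directly. The /api/v1/metrics path and the sample metric name are assumptions based on Artifactory's Prometheus-style metrics exposition, and should be checked against the observability documentation.

```python
import requests

# Pull Prometheus-style metrics straight from Artifactory (assumed route).
resp = requests.get(
    "https://myorg.jfrog.io/artifactory/api/v1/metrics",
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()

# Print one illustrative gauge, e.g. free disk space (metric name assumed).
for line in resp.text.splitlines():
    if line.startswith("app_disk_free_bytes"):
        print(line)
```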
Now the other side is observability with build info. This is where we provide all the build info to partner tools to track usage of what's happening inside the JFrog platform. So with the JFrog platform build info APIs, we provide information such as build version, artifacts, and commit level, and all of this can be used to enhance your application dependency maps and help your teams with troubleshooting. Some other examples of how this information is used: a JFrog Pipelines and Kubernetes integration, for example, can allow you to track your deployment events along with the build info, which provides relevant metadata for event analysis and root cause analysis. These integrations include prebuilt vulnerability dashboards to show Xray security and license violations. And again, this data is provided to our partners like Elastic, Datadog and Splunk, which I'll get into in a little bit. I think most importantly, these tools deliver more than just metrics.
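Here is a sketch of fetching that build info programmatically, using the standard Artifactory build-info endpoint; the build name, number, and the exact fields read below are illustrative.

```python
import requests

# Fetch build info for build "my-app", number 42 (placeholders).
resp = requests.get(
    "https://myorg.jfrog.io/artifactory/api/build/my-app/42",
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()

info = resp.json()["buildInfo"]
# Commit-level metadata like this is what enriches deployment events and
# dependency maps in the partner tools (field names may vary by version).
print(info["name"], info["number"], info.get("vcsRevision"))
```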
We provide logs with trace information. These integrations really provide a full end-to-end observability platform and full support for all the technologies you use most. So the first thing I want to talk about is our Splunk integration. What is our Splunk integration? Again, it comes back to the open metrics API. This data is fed into Splunk in prebuilt dashboards that we've worked with our partner to provide. And the information covered here is basically things like: what is the total CPU used by the Artifactory process? What is the disk space? What is the number of binaries removed by a run? All this information is provided in a dashboard view within Splunk itself, and really helps your development and IT Ops teams monitor the performance of what's happening inside your JFrog platform instance.
The next thing to cover is Datadog. Datadog is also using the open metrics API that we provide, and we've also built in some vulnerability dashboards. The vulnerability dashboards are really the newest update; we want to really expand the view into what's happening in Xray. And the Datadog integration includes logs, and it includes all the open metrics API information I described before. That, along with the vulnerability data, provides your developers a complete set of available insights into your vulnerabilities and how those impact the artifacts and components inside your repos. So this is similar to the information that we provide Splunk, but it goes a step further with prebuilt dashboards for your Xray issues. This includes the Xray log information, the Xray violation data and, of course, license compliance information as well.
And what's really powerful about this integration is everything is centralized. It gives you insights into actions you can take that help you resolve issues or optimize your software development life cycle. And we also provide you a true view into the deployment and build information, so you can monitor and deal with any issues that you see there. This is some of the Xray metrics data that we've provided to Datadog. Again, these are seen through the prebuilt dashboards that have all the information that I just listed above. The next thing I want to cover is IT operations, sometimes also expanded to IT service management. The main use case for this quarter was incident management. So suppose there is an issue that happens in Xray, or an issue Xray finds within your repos, like an artifact has a new vulnerability or hasn't passed your license compliance policy.
We make those apparent to tools like PagerDuty and then using PagerDuty you can actually take that information, take all the metadata we provide and you can build out actions or automatic workflows using other tools. So for example, some companies have really regulated software development processes that might require monitoring and quality gates and controls over version control practices. They might have very strict code review. Obviously security scanning is a big topic nowadays. So all this metadata that we provide inside PagerDuty helps you manage your incidents and can help you really streamline your IT operations.
So our PagerDuty integration for Xray gives you all the Xray metadata around violations and license issues, and your CVE details like the CVE ID, CVE summary, and impacted components. And using this metadata you can use PagerDuty for the incident alerts, but you can also integrate tools like PagerDuty with other IT Ops or IT service management tools to automate the resolution of these issues. And so here you're going to see that on the right-hand side I show you a little bit of that metadata view, like the CVE description, the name and the [inaudible 00:27:30] value, and the path of the impacted component. It's all there.
The next thing I want to cover is collaboration. This for us is the ultimate shift left. Software development doesn't just happen where the code sits; it's also a very people-oriented process. So your people, your teams, your developers, they're all talking to each other inside these collaboration tools. In fact, we feel, and data has shown us, that your developer teams spend hours and hours inside these tools, probably as much as they spend inside their IDEs. So what we've started with is connecting your build information and your deployment information from Artifactory into Jira. And what this does is give you a very detailed view of what's happening with each build, what's happening with each deployment, and how that metadata feeds back into operations that you are managing within Jira itself. And this is actually an integration that customers have been asking about for a while, so we're really excited about the Jira integration.
The next one is of course chat tools like Slack. What we are doing is making your Artifactory webhook event information and your Xray webhook event information available in Slack. So you're going to get these cards that have metadata around vulnerabilities coming from Xray policies and violations. You're going to get cards that show you when an artifact has been uploaded or an artifact property has been added. This is going to allow you to really track what's happening inside individual repos and be able to talk about that behavior with the teams inside channels that you set up within Slack. This, we believe, is a very powerful new way to gain insight and visibility into what's happening within your JFrog platform instance. And we're really excited about providing some of these actions that normally you would take inside the platform. You can take actions like adding an ignore rule to a violation, and you can do this right inside Slack itself.
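Under the hood this is webhook-driven. As a rough sketch of the plumbing (the packaged Slack app does all of this for you), a tiny receiver for JFrog webhook events could forward summaries to a Slack incoming webhook like so; the event field names and URLs are illustrative.

```python
import requests
from flask import Flask, request

app = Flask(__name__)
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

@app.route("/jfrog-events", methods=["POST"])
def forward_event():
    # Artifactory/Xray POST a JSON event here when a configured webhook fires.
    event = request.get_json(force=True)
    summary = f"{event.get('event_type', 'event')}: {event.get('data', {})}"
    # Relay a plain-text card to the Slack channel behind the incoming webhook.
    requests.post(SLACK_WEBHOOK, json={"text": summary})
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```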
And along the same lines, we are building a very similar integration into Microsoft Teams. You're going to be able to see your Docker tag information, your builds and your release bundle information. And like I mentioned before, you can send these notification cards into different channels and share them with individual team members. This really gives your developers a true, end-to-end view of what is happening with all your artifacts and builds, from the very beginning, when they're talking about what to build in the planning stages, to the end, with the monitoring and observability that I covered earlier. So what I want to do now is actually just show you one quick demo of what we are building in Microsoft Teams. Just keep in mind this was a prerecorded demo with one of my engineers. Here we go.
And let's create a notification now. So once again, I'm going to create a notification, choose Artifactory, and make sure that I'm creating notifications in this Artifactory. And it also tells us that this notification will be created in this channel. So let me type a notification name; this notification is a demo. I'm going to just type "delete later" so I remember that I can delete this and I don't need it. So if you're familiar with [inaudible 00:31:48], this is very similar to what you would see in our UI. And I'm just going to listen on any changes in any repository. We also fetch [inaudible 00:32:03] list of repositories. So to actually use this UI you need to have enough permissions to create a webhook and also to fetch a list of repositories, set include patterns, exclude patterns and other settings, and create a notification.
As you can see, that notification was created. And obviously, if you want to create more notifications you can click on this one. And if you want to do something else, instead of typing here you can just click help, and then you get more options. Just to confirm that our notification was created, let's click on manage notifications. Or maybe no, let's skip it for now; let's test the notification. So I'm going to trigger this notification inside of my Artifactory, so you won't see that. So I'm clicking test to pass my notification inside of that [inaudible 00:33:11] factory. As you can see, we receive four notifications, including some actions that are related to the notifications. Keep in mind that these are our sample notifications, so if I click more info or download, nothing really happens; the more info link points nowhere, because it's just fake data.
But you get the idea of how those notifications work and how channels receive them. So managing notifications at this point means nothing more than deleting them. Eventually there will be more options; for example, if you choose a notification, you'll not only be able to delete it, but also pause it or maybe even edit it. For now, though, you can only delete them. We can actually delete this notification if you want to see what that interaction looks like. So which one was it? This one, I think, right? No, this one. So the interaction is pretty simple; you just click on it and it will delete it. And you get confirmation that the notification is no longer available.
Okay. Well, now that you've seen the demo of Microsoft Teams and sort of understood what we're doing with new partner integrations, I just want to see if you guys had any questions.
Okay. So we actually have a couple of questions in the inbox; we'll start with that. First question: what is the pricing model for PDN, the private distribution network? Good question. So the private distribution network pricing model is based on consumption. We're still in the beta phase. By the way, if you want to try out our distribution network and participate in the beta, just feel free to reach out to me. Both cold storage and the private distribution network are in beta, so you can just ping me at davidl@jfrog.com and we can start working on that. So the model itself is consumption based: we measure the amount of gigabytes that get transferred in the distribution network, and that's the pricing model. Again, until we're GA I don't have the final numbers for the pricing, but P2P is already published on our pricing page, so you can look at that; that will give you a good idea of the model.
Next question. Let's see. Okay, there's one here about how you can try out the cold storage feature. Oh, I already answered that; that is again in beta, so drop me a note at davidl@jfrog.com for either PDN or cold storage. There's one here about projects: is there any limit to the number of projects that I can use? So projects have limits based on the subscription. You can start interacting with projects in the Enterprise subscription; you have 30 projects that you're allowed to use in Enterprise and Enterprise X. When you want to use higher numbers, you would do that in the E+ subscription, and there's no limit there: you get 300 out of the box and you can purchase bundles of a hundred, which are fairly cheap, to extend it. And we have customers that have already deployed thousands of projects, so the system can really handle a very large number of projects.
We have here a question about the planned Jira integration. So that's still in progress. For the Jira integration that we're working on, if you want to find out more information and maybe be a design partner for it, reach out to Deep, my colleague who demoed and talked about the Jira integration, at deepd@jfrog.com. I'm sure Deep will be happy to work with you further on it. Another question here: any limit on Artifactory instances at the edge? So I'm not sure what that meant; Shahish, maybe you can provide a bit more details on what you meant. But if we're talking about distribution, we do have the edge server, and the edge server is read-only at the edge.
There's no limitation, except that in some cases we even allow writes; it's like a full Artifactory server with full role-based permissions and persistency and everything you're familiar with from the Artifactory server. When you're talking about a distribution node at the edge, that is a Golang instance, very lightweight, and it doesn't have persistency. It really handles the caching for you, with very fast downloads and concurrent downloads, as I mentioned. But all the logic, all the IP of Artifactory, is still in the server itself. So those are different, of course. And I think that's it. If we don't have any more questions, we will wrap it up. Thank you everyone for joining.
Great. Thank you David and Deep for this great presentation. We hope you found it useful. And as we mentioned in the beginning, you will be receiving the recording of this webinar tomorrow. And if you have any additional questions, please feel free to reach out to us at webinars@jfrog.com. I see there are two more questions that came in, and we'll make sure to follow up with you via email. So thank you everyone, and we wish you a great day. Goodbye.
Bye-bye.