The annual, can’t-miss JFrog roadmap keynote from swampUP 2020. We believe the next great DevOps frontier is the distribution of software. Automation throughout your pipeline is certainly making software more liquid, but how are you getting it to the ultimate destination? Our answer may surprise you, as we unveil new solutions and strategy! This keynote illuminates how JFrog is fast-forwarding our product portfolio to deliver software distribution to the edge, driving CI/CD and powering DevSecOps for a broader, more comprehensive DevOps platform. Don’t miss this landmark session!
Thank you, Shlomi. I’d like to take you on a journey through our announcements and show you what we’ve been up to in the year since the last one. But before I do that, I want to quickly revisit what we’ve done in the past year and show you the results.
We released the unified JFrog Platform in late 2019, early 2020. And the results have been really amazing for us, to a degree that even surprised us: 40% of you adopted the platform within the very first three months of the release. It was a very good sign of how much you trust the engineering and the product deliverables coming from JFrog, to jump on a very major release and upgrade so quickly. So this is great news for us.
Now, from the early days of JFrog, we have had a focus on binaries. Sometimes people ridiculed us for it, or looked at us with a funny face, wondering why we are so passionate about binaries. The reason is a notion that has been critical for us from the beginning: when you’re building a software release pipeline, you need to focus on binaries from the very start.
This is what we refer to as the shift left of binaries, and it is a very important notion, because a binary is the only tangible thing, the only format of a release; it is what’s going to end up in your runtime. Taking this notion further, and looking at what practitioners in many other industries are doing, there is sometimes an assumption that you can rebuild everything from source. Rebuilding everything from source is not only inefficient and wasteful, because you’re wasting time rebuilding things over and over again; it’s also dangerous, because when you keep rebuilding, even from the same git tag, you may end up with different results each time. So it removes the whole security assurance from your release lifecycle.
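The point about rebuilds producing different results can be made concrete with a toy sketch. The assumption here is that the build embeds a timestamp, as many real build tools embed dates, hostnames, or paths in their output:

```python
import hashlib
import time

def build_artifact(source: bytes) -> bytes:
    # Toy "build": package the source together with a build timestamp,
    # the way many real build tools embed dates, hostnames, or paths.
    return source + b"\n# built-at: " + str(time.time_ns()).encode()

source = b"print('hello')"  # the same "git tag" both times

first = hashlib.sha256(build_artifact(source)).hexdigest()
second = hashlib.sha256(build_artifact(source)).hexdigest()

# Same sources, two rebuilds, two different binaries:
print(first == second)  # False (the embedded timestamp differs between builds)
```

Promoting the first binary through the pipeline, instead of rebuilding at each stage, is what guarantees that the checksum you tested is the checksum you run.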
So we don’t want that to happen. We want to be in a situation where, every time we build something, we get the same result over and over again. And the only way to achieve that, really, is to take the first binary that you created in your release and travel with it throughout the release flow until it’s ready to end up in your runtime, until it’s ready for production. This is what we refer to as BinOps: CI/CD that is driven by binaries. This is what we have embedded into the JFrog Platform, specifically into JFrog Pipelines, and we are taking this notion of BinOps with Pipelines to the next level. Avi Cavale, our VP of Engineering, will speak about it in his session.

Another aspect which is very important when we speak about releases and binaries is metadata, and metadata leads immediately to the notion of DevSecOps. In order to attach any meaningful security information and gain visibility into the security, quality, and provenance of a release, the only thing that all this metadata can reliably be attached to is a released binary. That’s the only way you can manage this metadata in a reliable way. So we’ve been looking for ways to offer this metadata as part of the JFrog Platform. We have obviously managed it internally since the very first release of Artifactory, but taking this metadata, managing information about software releases and versions and their security aspects, and offering it as a generic public API is something we wanted to do. And we’ve been looking into all kinds of solutions along the way.
We looked at standards that are out there in the market, some almost at the academic level, and since we are great believers in practicality, we decided to take what we’ve already built and are already running in production and create a metadata service that spans the whole platform, across all the services in the JFrog Platform. It’s been running like that internally, in a hidden way, since the release of the unified platform. What we’re announcing today is that we are opening up this metadata for you to drive your automation with, and Dror Bereznitsky, our CPO, will dive into more details about this new service.

Finally, Shlomi mentioned that 50% of new deployments are going to be on edges. It’s not surprising, because the notion of having everything provisioned on public cloud offerings is just not practical in reality. You do need data proximity, you do need computing proximity, including in remote locations, and this is why edges are making a comeback and becoming increasingly important. The other aspect of it is, of course, IoT: the explosion in the number of devices and runtimes, which makes software distribution a very critical element of the software release pipeline.

Now, we’ve been doing software distribution almost from the very first days of JFrog. Eight years ago or so, we released the first version of replication, which takes care of distributing binaries between different locations, because we had a lot of customers that are geo-distributed. Two years ago, we released JFrog Distribution, which is a step up in this global distribution notion, with the idea that you can take your release, sign it, and distribute it securely and very efficiently to remote edges in order to deploy to production. And this year, we’re going to make an announcement about local distribution.
So we’re taking this one step further, to distribution to the runtime: within a data center, from the edge node to the different runtimes, which could be hundreds or thousands of runtimes, in a very scalable way. And we’re going to make some very, very exciting announcements around it. So, three things that kept us busy: first, CI/CD driving BinOps; second, metadata-driven DevSecOps; and finally, distribution, from global all the way to local. It’s time to dive in, get into the details, and really go into the meat of the announcements.
So without further ado, I would like to switch over to Dror to take us through distribution first.
Thank you. Hello. In the next couple of minutes, I’m going to take you on a tour of several new capabilities we are introducing around software distribution. The first one I would like to discuss is peer-to-peer-based distribution.

Let’s start with a bit of background. In today’s modern world, with the growth in cloud native applications, container-based applications, microservices, and continuous delivery and deployment, we are starting to see new patterns of software consumption. One specific phenomenon we are witnessing is something we call download bursts: cases where thousands of clients, or even more, try to download one artifact simultaneously, in a very short period of time. This can happen, for example, when we are provisioning a new version of a microservice, or possibly multiple services, and thousands of Docker clients are trying to pull the Docker image concurrently. This situation puts a high load on the repository manager, because all of the clients are trying to download in a very short period of time, and it can lead to degraded performance and worse download times.

So we thought about how to solve this: how can you increase the number of concurrent downloads and clients while reducing the load on the server, allowing you to grow and keep download times constant? The solution we chose is based on a peer-to-peer architecture, so let’s see what we came up with. I’ll begin by reviewing the different actors in the peer-to-peer story, and I’ll start with the peers. A peer is a lightweight piece of software running on your runtime, and the peer is responsible for downloading the artifacts.
It’s responsible for the actual download of bytes, and it can download bytes either from Artifactory or, more importantly, from other peers; downloading between peers is the peer-to-peer part. Then you have the client. The client is the software you usually use to fetch software packages. This can be, for example, your Docker client or your npm client, but it can also be the JFrog CLI, or just a simple _ command. In a regular situation, where you have a direct download, the client communicates with Artifactory, the repository manager, and downloads the bytes directly; with peer-to-peer, the client uses the peer itself to download the bytes. And finally, the last piece is the tracker. The tracker is responsible for orchestrating this entire process. It knows which peers are currently connected and part of the swarm, which artifacts the peers are trying to download, and which peer is holding which artifact, and it is responsible for letting the peers know where they need to download from.

If we go back to the use case we mentioned before, Docker clients trying to pull an image: in the peer-to-peer case, it starts with a couple of Docker clients communicating with the tracker. The tracker is actually a service within Artifactory. At this point the tracker knows that none of them has downloaded the image yet, so it guides them to download the image directly from Artifactory. The image is chunked into smaller pieces, and the peers start downloading those different pieces. At a later stage, once they hold pieces of the image, the tracker knows which peers are holding those pieces and starts directing the next peers that want to download to their fellow peers.
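The tracker’s bookkeeping role described here can be sketched as a toy model. Peer names and the piece count are made up, and the real service also handles chunking, authentication, and peer lifecycle; this only shows the core idea that the tracker moves knowledge, not bytes:

```python
from collections import defaultdict

class Tracker:
    """Toy model of the tracker role: pure bookkeeping of which peer
    holds which piece of an artifact. It never transfers bytes itself."""

    def __init__(self, num_pieces: int):
        self.num_pieces = num_pieces
        self.holders = defaultdict(set)  # piece index -> peers holding it

    def announce(self, peer: str, piece: int) -> None:
        # A peer reports that it now holds a piece (fetched from the
        # server or from another peer).
        self.holders[piece].add(peer)

    def where_to_get(self, piece: int) -> set:
        # Empty set means: no peer has it yet, fall back to Artifactory.
        return set(self.holders[piece])

tracker = Tracker(num_pieces=4)

# The first peers find nothing in the swarm, so they fetch from the
# server directly and then announce the pieces they hold.
assert tracker.where_to_get(0) == set()
tracker.announce("peer-a", 0)
tracker.announce("peer-b", 1)

# A later peer is pointed at fellow peers instead of the server.
print(tracker.where_to_get(0))  # {'peer-a'}
print(tracker.where_to_get(1))  # {'peer-b'}
```

The load reduction comes from exactly this redirection: after the first round of direct downloads, almost every request is answered from the swarm.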
Then the pieces of the image are exchanged between the peers, reducing the load on the server; basically, all of the downloads are now spread among the different peers. This is based on an HTTP-based protocol. We are not using any proprietary binary protocol, and we are not using BitTorrent, so you don’t need to worry about special network configuration or about opening firewalls and so on. Everything is done over a secure channel, using secure HTTP, and all of the peers are authenticated against Artifactory, leveraging the regular authentication and authorization mechanisms that Artifactory provides. One important benefit of peer-to-peer downloads over direct downloads is that you can increase the number of concurrent downloads essentially without affecting download times or the load on the server, as opposed to direct downloads, where you start hitting network bottlenecks, the limits of the HTTP session pool, and so on. This is a result of what we saw before: the peers exchange the pieces between themselves instead of working directly with the repository manager.

The second topic I would like to discuss is hybrid, multi-cloud distribution. About two years ago, we released the first version of JFrog Distribution, aiming to solve the challenge of software distribution in the modern world and to make sure that you have efficient, secure, robust software distribution capabilities. The first version we released was intended for on-prem usage, for self-hosted environments. Earlier this year, we also released JFrog Distribution as part of the JFrog Platform SaaS solution, in the cloud. Now I’m happy to share that we are completing the offering with hybrid, multi-cloud distribution. And the use case for multi-cloud distribution is actually something we heard from you.
This can be, for example, customers that are running software in the cloud, but also running the same software in places like labs, service centers, and retail stores. Those customers need to distribute the software to all those different runtimes, in all those different locations, and for that they need the ability to place edge nodes in all those locations. This is exactly what we are aiming to solve with hybrid distribution.

So let’s see what we have as part of the hybrid solution. On the left side, you can see the JFrog Platform. In this case, it’s provided as a service, fully maintained and operated by JFrog, and it can run on the cloud provider of your choice. This includes all the different capabilities of the JFrog Platform: from Artifactory for package management, to Xray for security, Distribution, of course, for software distribution, Pipelines for CI/CD, and Mission Control for administration. On the right side, you can see the edges, and the edges are where the hybrid part comes in. Here we have two options: either you get the edge as a service from JFrog, and then you can provision the edge in the region of your choice, or you can self-host the edges yourself, place them exactly where you need them, and simply register them with the cloud platform. Once an edge is registered to the platform, it operates like any other edge. This gives you the ability to distribute to many different locations, with full flexibility over the location of your edges, while enjoying all the benefits of a SaaS-based solution.

The third topic I would like to discuss is CDN-based distribution. For the past couple of years, we have been providing CDN-boosted downloads using JFrog Bintray.
As another step in the evolution of the JFrog solution for software distribution, we are now introducing this capability of CDN-boosted downloads as part of the JFrog Platform, specifically as a new addition to Artifactory. So what is the use case for CDN-based downloads? Think about a situation where you need to distribute publicly available software. This can be, for example, a freemium version of your product; it can be SDKs, plugins, or device drivers. In this case, we are talking about massive distribution, about a lot of users that need to consume this product, and we are talking about anonymous users: you don’t know who they are, and they can be located anywhere across the globe. So what we want to achieve is, first of all, a solution that provides the same experience wherever the user is located, and second, one that can handle these massive downloads. Another flavor of this use case is where you want to restrict in some way the users that can download, for example only users that have paid for the software, or only users that come through your website, and then you want additional authentication capabilities on top of the CDN.

So let’s see how it works with Artifactory. Basically, we are splitting the responsibility for serving package downloads between Artifactory and the CDN. The initial download request is served by Artifactory, and Artifactory is responsible for identifying exactly which artifacts need to be downloaded. In the case of a Docker image, for example, Artifactory will identify the correct tag and determine the layers that are part of that image, and of course also enforce authentication and authorization. Once Artifactory determines the actual artifacts that need to be downloaded, it redirects the request to the CDN, and the CDN serves the actual bytes from the point of presence closest to the user.
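This split of responsibility can be sketched roughly as follows. The hosts, repository names, and the per-repository flag are all hypothetical stand-ins; in the real product this is Artifactory configuration and HTTP behavior, not a function you write:

```python
# Hypothetical per-repository configuration: which repos hand bytes off
# to the CDN, and which serve direct downloads themselves.
CDN_ENABLED = {"public-sdks": True, "internal-builds": False}
CDN_HOST = "https://cdn.example.com"              # hypothetical edge host
ORIGIN_HOST = "https://artifactory.example.com"   # hypothetical origin

def resolve_download(repo: str, path: str, authorized: bool):
    """Return (HTTP status, URL) for a download request."""
    if not authorized:
        return (403, None)  # auth is always enforced at the origin first
    if CDN_ENABLED.get(repo, False):
        # Redirect: the closest CDN point of presence serves the bytes.
        return (302, f"{CDN_HOST}/{repo}/{path}")
    return (200, f"{ORIGIN_HOST}/{repo}/{path}")  # plain direct download

print(resolve_download("public-sdks", "sdk-1.0.tar.gz", authorized=True))
# (302, 'https://cdn.example.com/public-sdks/sdk-1.0.tar.gz')
```

The key design point is that the origin keeps control of identity, authorization, and artifact resolution, while the CDN only ever handles byte delivery.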
Support for CDN-based downloads can be configured per repository, allowing you to decide for which repositories you want to enable this capability and for which you want to allow just simple direct downloads. One last point: CDN-based downloads are currently available as part of the JFrog cloud solution, the SaaS solution, and are enabled by Amazon CloudFront.

The next topic I would like to discuss is DevSecOps, and this is a very hot topic; we’ve all been hearing about it for the past couple of years. Usually, when discussing security and DevSecOps, the focus goes to vulnerabilities: how can I identify and detect vulnerabilities, and how can I mitigate them as early as possible? This is something we have been doing for the past couple of years with JFrog Xray. But today I would like to discuss a different angle on this topic, and this is the ability to trust the software packages you’re using. Let me explain. An important part of being able to trust the packages you’re using is knowing exactly where they came from. In the case of third-party software, this is about knowing which remote repository they came from and who downloaded them. In the case of software that you’re building, it’s important to know which pipeline produced them, which sources they came from, which tests they passed, and whether the bytes that are going to be provisioned in production are the same bytes that came out of the software pipeline. To answer all those important questions, it’s not enough to have just the artifacts. What you actually need is to gather metadata that captures the journey of the artifact across the pipeline, from the moment the developers started coding until the minute the software is provisioned in production, and to manage all this metadata in a central location.
And if you think about it, this is exactly what we are doing with the JFrog Platform: starting from capturing information about your CI/CD pipeline and your builds using the build-info standard, moving on to capturing all the information about security vulnerabilities and license compliance issues using Xray, and on to capturing the information about your release, what you’re about to launch to production, which is gathered with release bundles in JFrog Distribution. Earlier this year, when we released the unified JFrog Platform, we also came out with a new service, which we call the metadata service, and this service aggregates all the different types of metadata that we have inside the JFrog Platform. For us, the next logical step is to expose this metadata so you can query it and have a better ability to trust the software you’re building and using. For that, we are now working on exposing an extensive set of APIs that will allow you to query all the different pieces of metadata we manage across the platform. These are going to be GraphQL-based APIs, and they are going to be available very soon, so stay tuned in the next couple of weeks for more announcements. And with that, I would like to pass the stage, or the virtual stage, to Avi Cavale, VP of Engineering, for the next part. Avi, the stage is yours.
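Since the GraphQL APIs announced here had not yet shipped at the time of this talk, the query below is purely illustrative: every field name is a guess, meant only to show the shape of a GraphQL request against such a metadata service, not JFrog’s actual schema.

```python
import json

# Hypothetical provenance query: none of these field names come from
# JFrog's published schema; they only illustrate the kind of question
# ("which sources, which checksums, which issues?") the talk describes.
query = """
query BuildProvenance($name: String!) {
  build(name: $name) {
    number
    vcsRevision
    artifacts { name sha256 }
    issues { severity }
  }
}
"""

# A GraphQL call is just an HTTP POST of a JSON body carrying the query
# and its variables; this payload is what any GraphQL client would send.
payload = json.dumps({"query": query, "variables": {"name": "my-service"}})
print(json.loads(payload)["variables"])  # {'name': 'my-service'}
```

One advantage of GraphQL for this use case is that a single request can join build, artifact, and vulnerability metadata that the platform stores across different services.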
Hi, everyone, my name is Avi Cavale, I’m here to talk about CI/CD.
It’s a subject that I’ve spent a lot of my time focusing on, and as Shlomi mentioned, this is all about CI/CD for the modern world. What I’m here to talk about is how to evolve and not be left behind, as modern-day CI/CD is moving at a thousand miles an hour. Before I get into it: last year, at this same exact conference, Dror and I talked about JFrog Pipelines for the very first time; we introduced it to all of you. So I want to start by saying happy birthday to JFrog Pipelines. It’s been a fantastic year at JFrog, and we have more to share about what we have been doing in Pipelines with my team.

As everybody is aware, enterprises are going through a massive digital transformation. Shlomi touched on quite a few points about what has happened with the current situation, and this digital transformation has accelerated even faster. What I believe is that automation is super critical in order to actually achieve this digital transformation. Now, the challenge is that a lot of us are saying things that I don’t think we should even be admitting to, things like: we have completely decentralized all our DevOps tooling; we write all our automation code in an imperative fashion; every time our Jenkins server doesn’t scale, we end up creating another Jenkins server and now have two instead of one. These are all crazy things we’re talking about. Now, what exactly is causing this? It’s not a tooling issue; it’s actually a fundamentally different philosophy. And I’m not talking about DevOps philosophy or anything like that. I love the CALMS acronym, and I love the automation part of it; of course, that’s what we want to really focus on. The key is how we actually go about doing the automation, and there are two options.
As Yoav mentioned, there is a source-code-driven way of doing automation, and there is a binary way of doing automation. We truly believe that the binary way is a lot more solid, and Yoav talked about it in terms of immutability and why we really need it. So let’s look at what BinOps is and how you do CI/CD automation with it. Most importantly, JFrog Pipelines is the first binary-driven BinOps platform out there for your CI/CD.

Now, let’s look at BinOps in action. First, there’s a microservice that Team A is building. They make some change and create a pull request, wanting this change to be merged into the mainline. The moment you merge, you create a binary, which could be anything: a tarball, a Docker image. This binary has to go through some sort of unit testing, and eventually the binary goes from a development repository into a test repository; that’s what we call a promotion. Of course, there can be multiple microservices, multiple teams building different pieces of your application; in this case, there are two microservices. Once you have two tested components, or binaries, what you really want to do is create a bundle that combines these two pieces of the application into something you can actually release. But this is not good enough, because at this point I want to test it. So you deploy it into a test environment, and while you do this, you inject some settings into the test environment that allow you to test it. If it works, you sign the bundle to make it immutable and ready to release; this is what we call a signed release bundle. Now the next step, of course, is to distribute it.
You release the bundle and distribute the application so that it ends up in multiple regions. In this case, I’m deploying it to three different regions and pushing the same immutable app into those regions so it can be deployed locally. You inject production settings into the deployment in each of these regions, and you end up with an app running in production. The key to all of this is that the binary was created very, very close to the developer. This is the central thing we talk about when we talk about BinOps: we still want to achieve DevOps, but with a fundamentally different philosophy of how we go about doing it.

Now, what does this buy you? This is the most important thing from your perspective. First, we remove friction from your software delivery; this is true to our mission and our vision, because making software liquid is all about removing friction. It accelerates your DevOps maturity; it helps you get faster to a continuous flow of software updates. And the platform is enterprise-grade: we’ve been running it at massive scale, it’s completely extensible, and we want it to be as future-proof as possible. The world is moving at a very rapid pace, and you want a platform that keeps up with that kind of evolution and innovation. Most importantly, as I mentioned, with BinOps you want to shift left your binary creation, as close to the developer as possible, and then move the binary along in stages. Build once, promote through stages, and run it in production: that’s what BinOps is all about.

With that, I want to introduce you to Pipelines 1.6. We’ve been super busy for the last year, and a lot of features have gone in; I’m just highlighting a few key ones.
One is Kubernetes runners. This is something we were passionate about, and we knew how to do it, but we needed it to be super secure, and that’s where we have invested a lot of time: making sure that your Kubernetes build runners are very secure, even if you run them in a multi-tenant fashion. Second, we have added fan-in and fan-out steps. This has been one of the most requested features: the ability to split your tests into parallel streams and then bring them back together into a common stream. It’s a super cool feature, and the way we built it is very intuitive and simple to use. Third, we have completely unified the platform, so the entire CI/CD workflows are all inside the JFrog UI. Last year, when we showed it, it opened a new tab; now it’s all completely unified, which makes it very easy: a one-stop shop for the entire lifecycle of taking an application from code all the way to prod.

And then there’s my favorite feature, which I kept for the end, and I want to talk about it a little more on my next slide: extension development kits. Let’s look at what this is exactly. I talked about plugins in one of my earlier slides: people love plugins, but they create a lot of chaos, because every time a new version of a plugin comes in, you need a new Jenkins server to run it, since the older teams using the older plugin don’t want to change their entire DevOps workflow just because a new version of a plugin has come in. So we have actually gone and solved that. It’s a DevOps _… sorry, it’s an extension development kit that allows you to create custom DevOps actions that you can share within your organization. It has true versioning capability, and most importantly, if you have multiple versions running, all of them can run on a common platform.
So versions can actually coexist, one next to the other. This is exactly what I’m going to demo in the next couple of minutes, where I’m going to show you how to create an extension with the development kit, create a new version of it, and run both of those side by side in a single pipeline. This is really, really cool. And of course, it allows you to reuse these steps across your organization, and that’s the key part: reuse, so that we can reduce the amount of imperative code you’re writing in your organization.

So let’s look at the demo. Pipelines ships with a bunch of native steps right out of the box. What I want to do as a user is extend this: I want to create my own native step, and that’s exactly what we’re going to do right now. This is the JFrog Platform, unified. As you can see, Pipelines is pretty much empty at this point. I have a few integrations set up that I’m going to use: an Artifactory, GitHub, and a Jenkins server, which I’m also going to demo. I have an empty repository that I created so that I can author my extensions using the SDK, and I also have an empty Pipelines repository, in which I’m going to create the pipelines to test what I’m doing.

So let’s get into the actual code. I’m using _ to do this, and my mapping of extensions and pipelines is here. The first thing I want to do is create the step itself. I’m going to create a folder structure; this is the path convention for creating native steps. I want to call my step DownloadTar, and I want to start with version 0.0.1. Now I need two files here. One is what we call the step model, and this is a .yml file. The other file defines what happens when this particular step executes, and we call it onExecute.sh.
I already have code for this in my scratchpad, so I’m going to quickly add it. I’m going to take the code for the step model, for version 0.0.1, and add it here. Then I’m also going to take the code for the business logic that runs when this step executes; that goes in onExecute.sh. Let me just commit this: git add ., git commit. Then I’m going to push this to my Git repository.

While that’s being pushed, what I’m going to do now is hook this repository up to the Pipelines product. For that, we have added a new thing called extension sources, and adding one is very similar to how we add a pipeline source. I’ve already created a GitHub integration for myself, so I’m going to use that. The extension lives at this path in my GitHub, and I want to use my master branch. When I create the source, it’s going to create this source and immediately start syncing, checking whether any new steps have been added. While that’s happening, if you look at my extensions, you can see the DownloadTar step has been created and the 0.0.1 version is already there.

Let me walk you through what exactly is happening here. In the step model, I have a configuration with two fields that I need: a download URL and a download path. I’m going to download something from the URL and put it into the download path. The actual business logic is right here: I’m using a very simple curl statement to download a TAR file and put it into my download path. Very simple and straightforward; this is just an illustration of how to use it. So let’s try to use this in a pipeline. For that, I’ll go to my pipelines folder and create a new file called pipeline.yml, and in it I’m going to add the test code to run this particular pipeline.
This is now going to be added here, and I’ll walk you through what exactly it is. This is standard pipeline code: I’m adding a pipeline called demo_extn. The name of the step is the down_001 version, and its type is DownloadTar. You can see I’m giving it the two pieces of information required by the step model, and what I’m downloading is just a GitHub tarball from a particular repository of ours called DevOps recipes. So now let’s commit this and push it. [inaudible] And let’s push it. So this got pushed to my git repository of pipelines, and my extension file is already there.

Now I’m going to add a pipeline source so that I can create a pipeline from this. Again, I’m going to use the GitHub repository; this one is in this particular repository, pipeline.yml, all of this is default, and I’m going to create it. What happens now is that Pipelines goes and reads my definition of the pipelines and syncs it, and you can see a demo extension pipeline has already been created. If I jump into it, you’ll see the down_001 step has been created, and I can see the .yml that we put in here; the syntax version is pointing to 0.0.1.

So let’s run this and see what happens. I’m going to fire it off. This immediately fires a build of that particular step; it’s queued, and it’s already starting to run its setup process. It’s pulling the container, the business logic is executing, and if I expand this, you can see it did a curl; I’m just doing _ to verify that this has happened. Now let’s assume this 0.0.1 version is not good enough for me: a bunch of people are using it, but they have to do a bunch of extra steps to extract the tarball. So I want to give them a new version of this.
So what I'm going to do is copy the same exact file and create a version two of it. And this is one of the biggest powers of this: you can run both versions on the same exact server. This is how we avoid the plugin hell that usually happens. So now let me go back to my scratchpad, which has the code for version two. It's going to override my model here; it's the same thing, but I'm adding a new field for whether I should untar this on top of the download or not. Then I'm going to override the execute command: this time around, I'm actually untarring it in case my unzip flag is set. So going into this, I'm changing the business logic so that I now have both versions, 0.0.1 and 0.0.2, in here. Let's commit this and push this guy. So now what happens is this goes and creates a new version. If I go back to my extensions now, you should see the extension source syncing... yep, it synced, and now I've got my second version. So both versions of the extension exist on the product at the same time. And now what I want to do is extend this pipeline to add a little bit more. So I'm going to take the next part of my pipeline and add it to the existing pipeline. Here, I'm just going to add this, so that once the first step runs, it runs this second one, and I want to try running both of them together. So let's just push this out: git commit and push. What you should see now is that the sync process runs almost instantaneously, and then my second version of the step is also there. If I go and look at the YAML, the first one is using syntax version 0.0.1 and the second one is using 0.0.2. So let's run this guy now, individually, so it creates another run, and let's watch what it actually does.
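A hedged sketch of what the 0.0.2 step model described above might look like: the same two fields as 0.0.1 plus the new unzip flag, with the execute logic extended to untar the archive when the flag is set. As before, the field names, flag name, and YAML syntax are illustrative assumptions, not the exact SDK schema.

```yaml
# Hypothetical sketch of the 0.0.2 step model (schema is illustrative).
DownloadTAR:
  configuration:
    downloadUrl:
      type: String
      required: true
    downloadPath:
      type: String
      required: true
    unzip:                # new in 0.0.2: whether to extract after downloading
      type: Boolean
      required: false
  execution:
    onExecute:
      - curl -L "${downloadUrl}" -o "${downloadPath}/archive.tar.gz"
      # Only extract when the new flag is set, so 0.0.1 behavior is preserved.
      - |
        if [ "${unzip}" = "true" ]; then
          tar -xzf "${downloadPath}/archive.tar.gz" -C "${downloadPath}"
        fi
```

Because each step in a pipeline pins its own syntax version, a step using 0.0.1 and a step using 0.0.2 can run side by side on the same server, which is the point the demo is making.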
So this thing executed, and it was super fast. But if you actually expand it, in this case it's also expanding and untarring that particular file. So the beauty of this is that you can create pipelines where you run both versions in parallel. The advantage is that you are no longer bottlenecked by a single team saying, hey, you can't upgrade a particular plugin I'm using, or in this case, a particular native extension step I'm using. Pretty much everybody can be on a single platform and use both versions. That's about the extension kit. One last thing I want to show you: we have also added a bunch of lifecycle support. When you edit a version, you can change its lifecycle stage to development or production, and pretty much even retire it at a future date if you want to. So there's a lot of support built in to make this super easy for you to use. What you just saw is extensibility in action, and it makes your code super maintainable. And most importantly, we've also done a bunch of UI work that makes it observable. This is the power of BinOps on JFrog Pipelines, and I want you to know it's available today. I want you to download it, try it out, give us feedback, and hopefully we can improve it and evolve together. And now I want to leave you with one single thing to remember from this talk: you don't need to compromise just because it's unified. That's the spirit of what we are doing at JFrog Pipelines. There's this whole notion that if you centralize, you lose the autonomy and speed of your teams; if you don't, you lose the benefits of centralization. This is an axis that a lot of people struggle with, which way they want to go. The beauty of JFrog Pipelines is that we give you the best of both worlds. And that's the power of the JFrog Pipelines platform.
Thank you very much. So with that, I would like to conclude the session with a short summary of everything we have seen across the different announcements we made. I'll start with software distribution. We spoke about the new peer-to-peer capability that gives you more efficient, faster downloads without putting load on Artifactory. Then we spoke about the use case for hybrid software distribution: how you can now combine the power of the cloud with the flexibility of having edges located either in the cloud or self-hosted. And finally we spoke about CDN-boosted downloads and how we can serve public downloads of software all across the globe in a fast and efficient manner. Then we spoke about metadata-driven security: why vulnerabilities are not the only thing that matters in security, and how you can leverage the metadata in the JFrog Platform in order to trust your software. And finally we heard Avi speaking about BinOps, why it matters and how we can achieve it with JFrog Pipelines and the JFrog Platform, and also about all the cool new features being released with the latest version of JFrog Pipelines, including the new extensibility SDK. And with that, I would like to thank you all for listening, and please enjoy the rest of the conference. Thank you.