What’s New in the JFrog Platform?

2020 is definitely a year to remember for all of us. Like you, we’ve been working from home and dealing with all kinds of new challenges and processes. But while we’re working through it side by side with you, we’ve continued to drive exceptional value with enhancements to the JFrog Platform. This includes changes in all products that allow better CI/CD orchestration, better binary management, innovative distribution, enhanced security, creative hybrid architectures and a better end-to-end platform experience. Here are all the new features we will cover in this webinar:

End to End platform

  • Multi-factor authentication
  • GraphQL for Metadata
  • Observability: Logs analytics and Open Metrics
  • Webhooks


Distribution

  • Peer-to-peer downloads
  • Offline distribution
  • Distribution via CDN

Pipelines

  • Dynamic node pools with K8S
  • Matrix Builds
  • Extensibility Framework
  • Deeper Jenkins Integration

 

Webinar Transcript

Hi everyone. My name is Or Gat, I’m part of the product team at JFrog and I mostly manage platform and infrastructure related topics. Today I’m going to use this webinar to describe what we have done over the last quarter. I’ll go through three major features that we released around the infrastructure. Most of them are security related, but I’ll also cover how you can integrate with the JFrog platform and extract information from it. I hope it’ll be interesting.

So on our agenda today for the JFrog platform: the multi-factor authentication feature that we enabled during the last quarter, GraphQL for the Metadata service, and Webhooks. And then [Ifgani 00:00:50] is going to take over for the Observability item: log analytics and Open Metrics.

So I’m going to dive right into the multi-factor authentication one. This is not unique to JFrog, so what is multi-factor authentication? Basically, multi-factor authentication means that you increase security by preventing users with just one authentication factor from logging in or accessing data on your behalf in an application. In our example, my username and password are the sole means of authentication to the JFrog platform, and if someone else gets access to my password, they can authenticate as me and get hold of my data.

As most modern applications now do, the JFrog platform lets an administrator enable multi-factor authentication. It allows administrators to require time-based one-time temporary passwords, which is what the TOTP abbreviation stands for, as an additional authentication method. How exactly does it work? Once the feature is enabled, the first time a user logs in they are expected to register an authentication method, a TOTP application. It can be Google Authenticator, but it can also be other TOTP applications; most one-time password apps follow that protocol.
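To make the mechanism a bit more concrete, here is a minimal sketch of what a TOTP exchange looks like, using the pyotp library. This is purely illustrative; the JFrog platform and your authenticator app handle all of this for you once MFA is enabled.

```python
# Minimal TOTP sketch with pyotp (illustrative only; not JFrog code).
import pyotp

# The shared secret is what the QR code encodes when you register an
# authenticator app against your user.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app derives a short-lived code from the secret and the current time;
# the server does the same derivation and compares the two.
code = totp.now()
print("Current one-time password:", code)
print("Server-side verification:", totp.verify(code))
```

The point is that the code proves possession of the registered device (something you have) on top of the password (something you know).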

And then later on, every time that you log in, you’ll be asked for a one-time password in addition to your application password, your JFrog platform password. The idea behind it, again, is to increase security by not only having something that I know, which is a password, but also something that I have, which is a device that is synced to my user on the JFrog platform and lets me generate short-lived one-time passwords to prove that I do have the device as well. So let’s have a quick demo of enabling multi-factor authentication in JFrog. One second. This is a recorded demo, but I’m going to speak over it, so if you have any questions, please fire away in the chat. So I’m logged in as an administrator in this demo. First of all, I need to enable multi-factor authentication. I did that under Security, Settings, where we enable Google Authenticator multi-factor authentication. As I mentioned, it’s not uniquely for Google Authenticator; it can be any other authenticator app.

That’s it. Now that it’s enabled, when a user logs in they are going to be asked to register and then to use the one-time password. So I’m creating a user just to prove a point, and it doesn’t matter if the user is an administrator or not, or which permissions they have on the application. I’m going to set a quick password, save, log out as the administrator and log in as the new user. Oh, the demo got cut in the middle. One sec, let me stop sharing and I’ll find the… One second. Sorry about that, this is the problem with demos. As you know, God is in the details.

Okay. I’m going to re-share from the same point in time. I’m waiting for it to load, so one second. You’re going to see my screen in a moment. So this is after I already created the user; I’m going to click play and continue the demo. I log out as the administrator and log back in as the new user I just created and provide the password. Now I’m going to be asked to scan a barcode. Basically, if I don’t have an authenticator installed, I can download an authenticator app onto my mobile device and then scan the barcode. In order to show you what it looks like, I have an emulator on my machine here casting my phone. I’m starting it up right now, so you’ll be able to see my phone, but the scanning of the barcode itself you’re going to have to trust me on.

So this is my device. And these are my kids, by the way. Now I go and open the Google Authenticator app. If it’s the first time I’m scanning, when it asks to scan a barcode I allow the camera to scan. I’m scanning; you lose the image here on the phone cast, because it’s a security feature of Google. But once I scan, I get a one-time password on the device itself. I’m going back to the site and clicking next. And now I can use the verification code from the device in the application, in the JFrog platform, and log in.

And that’s basically it for multi-factor authentication. If you have any questions, please send them in and we can discuss them later. Okay. So next we’re going to talk about GraphQL for the JFrog metadata server. In order to do that, we’re going to start off by talking about what the metadata server is, then I’m going to talk about what GraphQL is, and then I’m going to show a demo as well. Can you guys see my screen? The presentation? Sorry. Yeah?

Yep. Yep we can-

So what is the metadata server? It’s a service that provides unified metadata across all JFrog services within the JFrog cluster, with data enrichment across multiple products. It’s the backend for the package-native UI screen. So this is your landing screen in version 7 of the JFrog platform. As a non-admin user, the first screen that you land on is the package search screen, and the metadata server is basically the backend for that UI. It has a generic model for all the package types that the JFrog platform supports, and it has improved data modeling for better scanning, searches, filtering and so on. So again, this is the screen that you land on. This is actually when you dive into a specific package version, but you can see all the different packages in the JFrog platform. You can dive into a specific package version and then start reading information about it. All of this information is basically metadata on the package that is accumulated in the metadata server.

Some of the data is provided with the package when it’s uploaded to JFrog. Some of the data is calculated by the JFrog platform, for example download counts. And some of the data you can only get when you have multiple services within your JFrog platform and you get enriched data from additional services: XRay data, distribution data, where the package is deployed to, and so on. As an example, here I can see a package with a version, and I can see that its license is MIT. But if XRay scans the NPM package and finds a dependency that uses a different license, one that affects the actual license that should be presented for that specific package, then the package license shown here will be changed.

So you get enriched metadata from multiple JFrog services within a single UI. But how do you query this information in automation flows, not through the UI? For that we have the metadata GraphQL API. GraphQL is a query language that enables you to query your APIs and fetch the specific data that you need, and only the specific data that you need. Using one endpoint, you can query just the data that you want to retrieve, in a JSON format that’s very easy to consume. The metadata GraphQL API provides a simple way to fetch package data stored in the metadata service. And since GraphQL enables you to query objects in a hierarchical structure, it allows you to choose which fields to include in a response. We are going to see an example in a second.
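Before that, as a rough sketch of what such a query could look like over plain HTTP: the endpoint path and the field names below are assumptions for illustration, so check the Metadata GraphQL documentation for the exact schema.

```python
# Hedged sketch: querying the Metadata GraphQL endpoint over plain HTTP.
# Endpoint path and field names are illustrative assumptions only.
import requests

JFROG_URL = "https://myinstance.jfrog.io"   # hypothetical instance
TOKEN = "<access-token>"                    # never hard-code real tokens

query = """
{
  packages(filter: {name: "my-package*"}, first: 5) {
    edges {
      node {
        name
        packageType
        versions(first: 3) {
          name
          licenses { name }
        }
      }
    }
  }
}
"""

resp = requests.post(
    f"{JFROG_URL}/metadata/api/v1/query",   # illustrative path
    json={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
# Only the fields requested above come back in the JSON response.
print(resp.json())
```

Because the response mirrors the shape of the query, automation code never has to strip out fields it did not ask for.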

Moreover, there are a lot of additional filters you can apply. There is very good tooling around GraphQL, so you can use your own tool of choice for testing your queries. We also provide one with the platform, which I will show you in a minute, called GraphiQL. It’s a tool that we provide with the JFrog platform to test your queries, to learn how to build a query, and to see that you get the relevant result set, and so on.

So this is basically the tool; it’s called the GraphiQL playground. It’s provided with the JFrog platform; the exact endpoint isn’t critical and you can find everything in the documentation under the metadata API. You can build your own query there. It has auto-completion for field names and filters, and it has complete documentation of all the objects you can query: what type they are, how they’re related and interconnected with other entities and objects, and so on. And eventually, of course, you can test your query and see that you get the result. One very cool thing to see here, and you can see it in the demo right now. I’ll show my screen again.

So this is my GraphiQL connected to an instance that I work with. I can write my own query and I can get descriptions of the entities, as I described. There’s a delay between the time I click and the time you see it, so I’m going to try to speak a bit slower. I can see an entire list of the objects that I can query, and I can dive into each object and see which fields it has; I can query on them as well. And I can see the relationship, for example, between the package and the versions, which are here. I can see the filters that I can apply on a version. And based on the documentation here, using the query builder, I can start building my own query, click play, and get the results that I want.

I’m going to stop the demo here, and I’m not going to run the query because the token is visible and I don’t want to expose real-life tokens in the production environment I’m using. But basically this lets me add fields as I go along. As soon as I start typing, it gives me auto-completion for the objects that I want to query, and the cool thing about GraphQL is that I only get results for the fields that I queried. So if I don’t query for the created date, I won’t get the created field in the result set, and so on. So this is GraphiQL. I’m going to stop sharing my screen again.

One more thing that I would like to share about GraphQL: it’s very, very important, it has a lot of capabilities, and the query building is awesome. It lets you use your own set of tools to build your own queries and test your code before you use it, and it provides a very nice programmatic interface to the metadata server. And you can query it, of course, over cURL. You can use the query that you built over cURL, or basic HTTP methods, in order to query the GraphQL metadata endpoint.

The final thing I’m going to dive into today is Webhooks. Basically, Webhooks are a simple way to set up integrations between your JFrog platform and external applications. That’s the idea behind Webhooks. Examples could be Jira, Slack, mail servers, even CI/CD solutions, and not the CI/CD plugins we develop externally that you can use within the platform itself; it is a way to trigger an event via a callback URL in an external application, based on events that happened in the JFrog platform. So any event that you select, for example artifact deploy/delete, build promotion, Docker pushes. There are several domains of events that you can use already, and we keep extending them as we go along. Whenever a selected event occurs in the platform, the platform will send the event details to the provided URL address. It’s as simple as it sounds.
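On the receiving side, the integration is just an HTTP endpoint that accepts the JSON payload. Here is a minimal sketch of such a listener; the payload field names it prints are assumptions, so inspect a real test event for the exact schema.

```python
# Minimal webhook receiver sketch (field names are assumptions; check a real
# test event or the documentation for the exact payload schema).
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # React to the event, e.g. notify a chat channel or open a ticket.
        print("domain:", event.get("domain"), "event:", event.get("event_type"))
        print("payload:", event.get("data"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```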

So in the example here, the first thing I do is give it a name. I give it a cool description, but most importantly I provide the URL: where would I like that trigger event to be sent? Then I decide which domain I want to listen to for events. I can create multiple webhooks, but currently I can only select one domain per webhook, with multiple actions in each domain. So for example, if I want an event for every artifact that is uploaded or deleted, then in the artifact domain I will select the deployed and deleted actions.

Then I need to select which repositories I want the webhook to listen to. I can select them using patterns, any local, any remote, the same way you do when you set up a permission target, to pick the couple of repositories I want to listen to. And then I can add additional information which might be relevant for the integration, for example a security token, additional custom headers that I would like to send, and so on. And that’s it; once I save, this takes effect. So again, I’m going to pause and show you a demo. Okay, you see my screen, right? Yep. Cool. The first thing I do is create a listener on the other side. I’m just using a webhook testing website, which generates a unique URL for my tests. The events can be sent to that URL, and then I’ll be able to see requests as they come in, parse them, and take a look at what data was sent.

So I’m going to copy the URL from here. I’m going to define a new webhook under the administration console, General, Webhooks. I already have a couple, so we’re going to dive into editing them to see them. It’s enabled by default, the name has to be unique, and a description is always nice; this one is for testing webhooks on the webhook site. Specifically, the event that I register for is when a property is added to an artifact. Now I choose which domain I want to use: it’s artifact properties, and only when an artifact property is added do I want to get the event. As I said, you cannot mix between domains, but within a single domain you can select as many actions as you want as the webhook trigger.

Then I have to select a repository. It’s just a simple generic repository, for the sake of the demo. But as I said, you can use patterns and so on. And then if there’s authentication or additional header values that are required, you can add those as well.

I’m going to skip through it, and you’ll see in the demo that I can also add additional custom headers; we’ll see them later on the webhook site. That’s the setup. One very important thing at the end, sorry, I’m going to play it back a little bit, is the test button sitting here under the play button. With it you can test the connection, see events as they fly to the server, and see what the event looks like. So if I click test on a webhook trigger, you can see the payload; it’s a sample event for the event type I registered to. I cleared those events because now I’m going to trigger real events in the platform. So now I’m going to go to the repository that I’ve selected and deploy an artifact, because I have a deploy-artifact webhook.

I’m going to go to the web server itself and see the event as it came in, and see the header values here: header1, v1. If authentication needs to happen through headers, that can also be a solution. And then there’s the actual raw data as it came in as a POST event: an artifact was deployed, what the artifact name is, where it was deployed to, even the checksum, so that I can program how the receiving server will behave based on the event that just happened in our platform. So webhooks are basically, as I said, a simple way to create integrations with external applications by sending payloads for specific events that you define in your JFrog platform. It takes a couple of seconds to stop sharing. And that is basically it. I’m going to give presenting rights to [Ifgani 00:22:01]. [Ifgani 00:22:01], take it away.

Thank you all. And a short comment: we will have a short Q&A session at the end of the presentations, so stay with us. I already see that there are a couple of questions in the chat, so we will answer those after completing the content itself.

Okay. So let me walk you through OpenMetrics quickly, as part of our observability agenda. My name is [Ifgani Karasic 00:11:08], I’m the product manager at JFrog for Artifactory and Distribution. We are going to talk about a few topics, but let’s start with OpenMetrics. OpenMetrics is a de facto standard for a cloud-native, highly scalable metrics collection protocol. It’s a wire format, and therefore it can be adopted very easily when you’re talking about integration with observability tools, be they open source or vendor-provided. The fact that this standard is picking up very significantly makes us think that this is something we can also provide as added value for our customers.

To that end, we basically added collection of the system metrics in the OpenMetrics format. In order to benefit from that functionality, you can enable it via the configuration file. Once the configuration is enabled, you will have an endpoint according to the OpenMetrics standard, which is the API metrics endpoint. The metrics are exposed there in the text-based format. For instance, if you would like to configure Prometheus or any other tool that supports OpenMetrics, it’s a very easy task to do. You have here a code snippet that basically shows how to configure that, and once that configuration is done, you can use additional visualization tools, for example Kibana, in order to get a pretty good visualization of those metrics for your Artifactory instance.
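As a hedged illustration of consuming that endpoint yourself, the sketch below pulls the text-format metrics and parses them; the path and authentication header are assumptions, so confirm them against the documentation for your version.

```python
# Hedged sketch: pull Artifactory metrics in the OpenMetrics/Prometheus text
# format and parse them. Endpoint path and auth are illustrative assumptions.
import requests
from prometheus_client.parser import text_string_to_metric_families

JFROG_URL = "https://myinstance.jfrog.io/artifactory"   # hypothetical instance
TOKEN = "<access-token>"

resp = requests.get(
    f"{JFROG_URL}/api/v1/metrics",                       # illustrative path
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# The body is plain text, one sample per line; parse it into metric families.
for family in text_string_to_metric_families(resp.text):
    for sample in family.samples:
        print(sample.name, sample.labels, sample.value)
```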

For the full set of metrics, I suggest reviewing our Wiki page. We’re constantly adding more and more metrics there. If you believe there are some important metrics that you would also like us to expose in the open format, just let us know. This is extensibility that we’re going to keep maintaining on top of our solution.

So this is OpenMetrics. Now I would like to switch to JFrog Distribution, which is part of the JFrog platform, and we have a lot of exciting content currently being delivered in that domain. Today we’re going to talk briefly about peer-to-peer download capabilities, offline distribution, and distribution via CDN. This is something that is already released, and therefore you can try it out immediately after you’re exposed to the functionality.

So I’ll start with the peer-to-peer downloads. When we look at the use cases that our customers have for distribution, the demand that we got was around optimizing the last mile of the distribution. Let’s say you have some runtime deployment that needs to be upgraded to the latest version of a Docker image that was recently released; basically you get a burst of concurrent downloads, and this is the challenge that we need to deal with. So we were scratching our heads over how we can help our customers deal with that challenge, and eventually we took the concept of peer-to-peer download and adapted it into an enterprise-grade solution like the JFrog platform.

So what is the peer-to-peer artifact download capability? First of all, it’s a secure, proprietary peer-to-peer protocol that we developed, fully in line with the JFrog platform. And secondly, we are using Artifactory as the content source and discovery service for the artifacts, which means that if the client is looking for specific content, this content can be found on JFrog Artifactory, but the download process itself will be a little bit different, and I will elaborate on that immediately.

We’re going to introduce a new component, which is a peer. The peer is a standalone component that can be deployed in any environment: it can be virtualized, it can be standalone, it can also be bootstrapped into your cluster, and so on and so forth. All those peers work as an internal network of peers, where the communication with Artifactory and between the peers is handled by the proprietary peer-to-peer protocol that I’ve mentioned. So what are we gaining out of that approach? First of all, it’s going to be robust under concurrent downloads. It’s going to be resilient to a growing number of clients. And since a peer is a standalone component, we’re basically talking about an almost management-free solution. You don’t need to maintain the lifecycle of the peer, because the peer serves one purpose: to cache the downloaded file and share that file with other peers.

So let’s talk about the typical topology of the peer-to-peer distribution, and then switch to the demo. On the left-hand side, you have your own main data center where the JFrog platform is deployed, including Artifactory, XRay, Distribution and others. On the right-hand side, you can see the production environment; in this case it’s going to run in a Kubernetes cluster. In the Kubernetes cluster, we have the tracker component deployed as part of the JFrog platform deployment, and the standard distribution process of the release bundle is used to push the content into that production environment. Now, the question that we’re trying to answer is: what will happen if there are many clients, let’s take the example of Docker clients, and all those clients need to be updated, or all those clients are going to request the same Docker image, the latest Docker image that we pushed through distribution?

And this is exactly where the Peer-To-Peer network is going to help us because we are going to bootstrap the peer component as a part of the nodes in our cluster. And basically we are going to use those peers, not just for downloading the content, but also sharing that content across different locations. So now let’s switch to the demo.

So in our demo, we’re going to see the Kubernetes cluster with an Artifactory Edge node deployed, as I mentioned, and we have pods with the Docker clients. Each client is configured to work against a specific peer, so it’s a one-to-one kind of correlation, but in a real scenario you can use different approaches: you can serve several clients with the same peer, and so on. So how you deploy the peer network is really up to you. And the use case is that we’re going to request the download of some Docker image simultaneously.

Okay, so sharing my screen. Okay, hope you can see it. So this is our Kubernetes cluster. We are starting with an existing environment in which our Edge node is already up and running. The Docker image was pushed to it using the JFrog Distribution process, and for the download we’re going to use the peer-to-peer method. So first of all, we’re going to automatically bootstrap the peers. The peers are configured as a standalone application, so we’re going to install them and pre-configure them using some bootstrap configuration. Right now you can see that we have peers from zero to five, which means a total of six peers are up and running. After that, we are going to initiate our Docker clients. The Docker clients are the clients that are actually going to request the image that was distributed, and they will be handled in the next phase of our deployment scenario.

The Docker clients are also automatically pre-configured to use the peer as a pass-through proxy for their requests. Therefore, when you run the docker pull command, the request is rerouted to our pre-configured peer. The peer is going to request that artifact from Artifactory, from the content lookup perspective, and once this artifact is found, the download operation is started. The optimization that we’re going to see basically happens when the second client requests the same file, because in this case the file, or part of it, will already be cached on the first peer that downloaded it previously. And this is something that we can see in the demo: any subsequent downloads that happen here are going to happen not only from Artifactory, but also from the other peers.

So what we’re doing here, and I’ll stop the demo for a second: we are chunking the downloaded file into two-megabyte pieces. So for example, if our file is 70 megabytes, we have 35 pieces there. The first download will happen from Artifactory, as you can see. But the second download request will take 18 pieces of that particular file from Artifactory, while 17 pieces come from the first peer that already has those pieces available. Then we’re going to assemble that file at the peer level and serve it to the client. At the same time, we are going to keep that file cached, using a local LRU cache, for serving it to the next downloads by other peers. So this is the same concept as how a peer-to-peer network works, but what we did is adapt it to the reality, to the use case, of our customers that are looking at simultaneous downloads by multiple clients.
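To make the chunking idea concrete, here is a simplified sketch of splitting a file into fixed-size pieces and reassembling them. This is only an illustration of the concept, not JFrog’s peer implementation.

```python
# Simplified chunk-and-assemble sketch (concept only, not JFrog's code).
CHUNK_SIZE = 2 * 1024 * 1024  # 2 MB pieces, as described above

def split_into_chunks(path):
    """Yield (index, bytes) pieces of the file at `path`."""
    with open(path, "rb") as f:
        index = 0
        while True:
            piece = f.read(CHUNK_SIZE)
            if not piece:
                break
            yield index, piece
            index += 1

def assemble(pieces, out_path):
    """Write pieces back in order; in a real peer network they may arrive
    interleaved from Artifactory and from other peers."""
    with open(out_path, "wb") as f:
        for _, data in sorted(pieces, key=lambda p: p[0]):
            f.write(data)
```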

The advantage of this method, and again we’re talking about a highly scalable system, is that we are pretty resilient to the number of concurrent downloads, because more downloads become an advantage rather than a disadvantage: everything is cached locally in your network. So it significantly reduces the amount of outgoing and incoming traffic to your internal network, because files are cached locally.

With that, I will switch to offline distribution, yet another use case that our customers care about, because they are talking about networks that are physically isolated. This isolation can be due to connectivity or due to the kind of security protocols that are in place, but the requirement is the same: there is a need to perform software distribution in those isolated environments, taking all those boundaries into account. It’s very common in military environments, oil rigs, financial institutions or healthcare. So there are a lot of air-gapped networks in the world, and therefore the question is how to support distribution there. The support for this is quite simple using JFrog Distribution: you just need to export the release bundle version from your main distribution location, download the exported file, and then, using your practices for accessing the air-gapped environment, you can put it on some media and import it on the respective Artifactory node or Artifactory Edge in the air-gapped environment.

A short demo for this one. Okay. So we are now in Distribution, right? We’re going to create the release bundle, sign this release bundle, and basically start distributing it. You can see that our bundle contains some Docker application. We can see the bill of materials for that application in Distribution itself, including the path mapping and all the properties for the files, and so on. So nothing changes here from a distribution standpoint. And now we’re going to export that version. The export operation is going to create, in an asynchronous manner, the full export of the release bundle, including all the metadata, including the bill of materials for the files, all the information that is needed in order to be able to import it. You can see the exported release bundle is created in the special release-bundles repository. And after that, we can run the download operation.

Again, because the export processing is asynchronous, it can take some time until the release bundle export is completed. You don’t need to wait for that; you’ll get a visual indication, or, using a webhook as presented previously, you can get an indication that the export of the release bundle is completed. And then you can basically download it and start the import operation.

So what is important for the import operation? Now I’m switching to the Artifactory Edge node, and let’s assume that we managed to get into the air-gapped environment and we have all the rights to deploy it here. The import operation is going to go through exactly the same flow as a regular distribution. So it’s not just dropping the file into some specific location; it’s the complete distribution process, which also includes verification of the content, validation of the signing of the release bundle, approving the release bundle bill of materials, and so on. It’s a very thorough operation that basically ensures the content currently being imported is not just a bunch of files, but matches the metadata definition of the release bundle that was exported. So this is offline distribution; I think it’s pretty clear what use case we’re trying to handle here.

Some points to keep in mind while talking about offline distribution. Like any other operation, it is supported using the REST API or the UI; I just used the UI for the sake of the demonstration. It is enforced by permissions: it will ask for the specific permissions for creating the export of the release bundle and for importing the release bundle. As I already mentioned, it is an asynchronous operation, so you have the ability to poll the status of the export; you don’t need to wait for it to be completed. The storage that you’re using for the exported archive is the Artifactory storage, which means that you can apply any storage practices that you have to the exported content as well: you can clean it up, apply retention policies, move or copy it, and whatever. And I already mentioned that the regular distribution flow is used for importing the content.
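For teams that want to script this, the flow could look roughly like the sketch below. The endpoint paths are placeholders, not the documented API, so look up the real routes in the JFrog Distribution REST API reference before using anything like this.

```python
# Rough sketch of automating the offline-distribution flow over REST.
# IMPORTANT: the endpoint paths below are placeholders for illustration only.
import time
import requests

DIST_URL = "https://myinstance.jfrog.io/distribution"   # hypothetical
HEADERS = {"Authorization": "Bearer <access-token>"}
BUNDLE, VERSION = "my-app", "1.0.0"

# 1. Trigger the export (runs asynchronously on the server side).
requests.post(f"{DIST_URL}/api/v1/export/{BUNDLE}/{VERSION}",          # placeholder
              headers=HEADERS).raise_for_status()

# 2. Poll the export status instead of blocking (or react to a webhook).
while True:
    status = requests.get(f"{DIST_URL}/api/v1/export/{BUNDLE}/{VERSION}/status",
                          headers=HEADERS).json()
    if status.get("status") == "COMPLETED":                            # assumed field
        break
    time.sleep(10)

# 3. Download the exported archive; it is then carried across the air gap on
#    approved media and imported on the Artifactory Edge in the isolated network.
archive = requests.get(f"{DIST_URL}/api/v1/export/{BUNDLE}/{VERSION}/download",
                       headers=HEADERS)
with open("release-bundle-export.zip", "wb") as f:
    f.write(archive.content)
```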

So this is offline distribution. Now briefly about CDN distribution, yet another capability that we are providing to our customers, and here we’re going to talk about our software-as-a-service customers. The use case that we’re trying to address with this capability, the requirement that our customers are raising, is related to what we call external distribution, meaning that you need to deliver content to an external audience. It can be free software that you would like to distribute, like free packages, or it can be some partner- or customer-specific content that you need to propagate across the board. Most customers are not willing to maintain download centers by themselves; they’re looking for a simple SaaS-based solution. But on the other hand, it still needs to follow all the enterprise-readiness capabilities, like the ability to manage permissions, the ability to provide authenticated and non-authenticated external downloads, and so on.

So we tried to understand what would be the best approach for that, and basically what we managed to achieve is to connect to an existing CDN framework. So for example, today we’re using the Amazon platform for our software-as-a-service customers that are running [inaudible 00:41:31]. An advantage of this method is that you can enable the functionality, the CDN content distribution, per repository; it’s going to fully follow the role-based access of the JFrog platform; you can create geographical restrictions on the CDN end; and we also support authenticated and non-authenticated downloads using signed URLs.

There’s no demo for this one because it’s a really simple and short one: you just go to your Artifactory repository, which should come pre-configured as part of the software-as-a-service solution, and enable the ‘Enable CDN Download’ checkbox. Once CDN download is enabled, all the download requests to that particular repository, and you can see it on the right-hand side over there, will be automatically redirected with a 302 status code, and the actual download will be from the CloudFront CDN, because this is the foundation that we’re using for CDN at the moment.
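The redirect behaviour is easy to see from a client. In this small sketch the repository path and instance name are made-up examples; the point is only that the first response is a 302 pointing at a (possibly signed) CDN URL.

```python
# Observe the 302 redirect that CDN-enabled downloads return (example path
# and instance name are hypothetical).
import requests

url = "https://myinstance.jfrog.io/artifactory/public-downloads/my-tool-1.0.0.tgz"

resp = requests.get(url, allow_redirects=False)   # don't follow, just inspect
print(resp.status_code)                           # expected: 302 when CDN is enabled
print(resp.headers.get("Location"))               # the CDN URL being redirected to

# A normal client simply follows the redirect and downloads from the CDN.
content = requests.get(url).content
```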

But going forward, yes, we are looking at extending the offering to other CDNs, like for example Google Cloud CDN [inaudible 00:42:55] and so on. Technically speaking, there’s no limitation for us to do it; it’s just enablement and additional development that we need to do around the functionality. Today the most demanded one was related to [inaudible 00:43:14], but tomorrow you may be looking for more, so please don’t hesitate to use our public [inaudible 00:43:21] to ask for more CDNs to be supported.

That will be all on my side. Thank you for listening. We will take questions shortly; I see that questions are coming in on our chat, so we will answer them at the end. With that, I would like to pass it to Manisha, who is the Director of Product Management at JFrog, and she’s going to walk us through what’s new in JFrog Pipelines.

Hello my name is Manisha Sahasrabudhe and I’m the Director of Product for JFrog pipelines. In today’s session I’m going to walk you through some of the new product features we’ve added over the last few months. So I’m going to share my screen in a minute and I just have a few slides and after that we’ll walk through a quick demo.

Okay. So before we start, I wanted to remind you where JFrog Pipelines sits in the overall platform. Pipelines is the part of the platform that provides you with end-to-end CI/CD automation and helps you continuously deliver your software updates. It integrates seamlessly with the rest of the JFrog platform, so you get the advantage of a consistent experience across all your DevOps workflows. Pipelines’ unique advantage is our approach to CI/CD: declarative, reusable steps that we call native steps, with which your CI/CD pipelines can just be assembled. Now, what does assembled really mean? If you look at how CI/CD automation is done today, in most cases it’s highly custom-scripted. So you have all these fragile scripts that can break very easily, it’s very difficult to scale, and so on.

So we want to move away from this highly scripted approach and take a very declarative approach, where you have a bunch of prepackaged building blocks and you can just piece them together to form your end-to-end workflows. No scripting should ever be required to create your pipeline. As a simple example of how scripted versus declarative looks: if you look at the left-hand image, that’s the scripted approach; you have to actually install your CLIs and call CLI commands in order to do a simple [document 00:45:50]. If you look at the right-hand side, that’s the declarative approach: all the intelligence, all the code is already written, and it can be reused by just setting some configuration values in your YAML, and that’s how you define the step. So it’s very human-readable, it’s reusable, and it’s very, very easy to understand.

So, as I said earlier, the JFrog platform already ships with a bunch of native steps, which are these prepackaged building blocks. But obviously we know that most organizations are going to have their own custom workflows that they want the independence to set as they want. So we’ve also introduced pipeline extensions, which let you define your own steps and resources and then reuse them across your pipelines. We have an extensions gallery in the UI so that you can look at all the extensions in your system and read the documentation for them and understand how to use them and so on.

Also, multiple versions of the same extension can be used side by side in the same pipeline. And this addresses a big pain point with some tools, which is called plugin hell: with a plugin-based architecture, different versions of the same plugin cannot coexist in the same instance, you get all kinds of conflicts, and at that point you run into this other problem called sprawl, because you bring up different instances just to use different versions of plugins.

So we are completely avoiding that by giving you the ability to run these extensions, these steps, with different versions side by side in the same pipeline. The next thing we’ve introduced is templates, and what this helps you do is create reusable pipeline definitions so that you can get up and running with a pipeline in just a few minutes. With templates, you define these pipelines with Helm-style templating and, similar to Helm, you just provide the values to customize that template and create your pipeline. We do the merge of the two and create your pipeline automatically. So with extensions and templates, you see that you can achieve this utopia of completely unscripted pipelines, because you have these building blocks and then you even have these templates that put your building blocks together into the typical scenarios that exist in your organization.

The other cool thing we introduced recently is matrix builds, which allow you to split large test suites across multiple nodes. So if you have a long-running test suite that takes 45 minutes, you can potentially split it across three build nodes and get results in 15 minutes by dividing it by three. That is a big time saver. You can also use the matrix build feature to test against multiple language versions, so different versions of Node.js, for example, or even test against multiple platforms: you could test against Linux, Ubuntu, [Centroid 00:48:53] at the same time and in the same pipeline. So it’s a very, very powerful feature.
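The splitting itself is conceptually simple; a generic sketch of partitioning a test suite across nodes is shown below. This only illustrates the idea, it is not how Pipelines implements matrix builds internally.

```python
# Generic sketch of splitting a test suite across N build nodes (concept only).
def partition(tests, node_index, node_count):
    """Return the slice of `tests` that node `node_index` (0-based) should run."""
    return [t for i, t in enumerate(sorted(tests)) if i % node_count == node_index]

all_tests = [f"test_module_{i:03d}" for i in range(90)]   # e.g. a 45-minute suite
for node in range(3):
    shard = partition(all_tests, node, 3)
    print(f"node {node} runs {len(shard)} tests")          # roughly a third each
```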

And last but not least, I want to talk about the pipeline-of-pipelines view. One of the biggest strengths of JFrog Pipelines is how easily you can create what we call a pipeline of pipelines. You can have these individual pipelines per team. Most people have a CI pipeline, which is for developers; then you have test pipelines, which are for the test team; and then you have release pipelines. You could also have a DevSecOps team create a base image for you, which is the DevSecOps pipeline. All these pipelines are actually dependent on each other, and you can string them together with whatever logic you want, very easily, through the use of resources, which are used to pass information between your dependent pipelines.

So this workflow, it was always very easy to create with JFrog pipelines. But what we’ve added recently is the ability to actually visualize this complete dependency of pipelines. And this is what is going to give you a really end-to-end picture of your software delivery workflow. You can look at it, you can easily see what’s failing, you know where your bottlenecks are and so on and so forth.

So this is again another very cool thing that we’ve added recently. Before we end, I am going to show you a quick demo of extensions. What I have here is a Git repository with two extensions that I’ve defined, which I’m going to add to the platform and then create a pipeline from. You can see the folder structure under this repository. At the top level, I have the step and resources folders, which is basically the extension type. So I have one extension which is a step and one extension which is a resource. Then under these top-level folders, you have the namespace folders. So you can organize your different extension steps and resources into namespaces, which makes it easier to find them and group them together based on what they do.

So in the resources I have a source control namespace, and in the steps I have an ansible namespace. And my extension step name is ansible playbook, because what it does is run an Ansible playbook. Under this, you can see I have three files. I have an onexecute.sh, which is the script that’s going to run when the step is used in a pipeline. So this is what actually executes when the step is used.

I clicked out of it, so I have to go back in. Okay. So I have a readme.md that gives you the documentation for this extension, which is also going to show up in the UI so that people know how to use the extension. And then I have a stepModel.yml that defines what the inputs for this extension should be. And if you look at the GitRepo, my other extension is a resource of type GitRepo, and what’s cool about this is there’s actually a native resource that JFrog ships called GitRepo. So you can override it as long as it’s in a different namespace. If you don’t like what comes out of the box, you can just customize it and define your own GitRepo.

Again, here, very similar to the steps, you have three files. The readme.md is the documentation for the extension; oninput.sh is optional, and what it gives you is the ability to run some script when this resource is used as an input for a step in your pipeline; and then you have the resourceModel.yml that tells you what the settings of this resource should be. So that’s how my extensions look, and the way you add them to JFrog Pipelines is by adding your Git repository as an extension source. I’ve added this same pipeline-extensions repository as an extension source, which means we parse the contents of that repository and then actually create your extensions. So in my case, my instance has some other extensions as well, but because of that repository, these two were created. You can have versioned extensions, so these are the latest versions; any time I make changes to that repository, they come up as latest.

And then I can actually do a release from GitHub in order to create an actual released version. So I also have released versions of the step ansible playbook and the resource GitRepo. Now, how do you use it in a pipeline? I have a pipeline that actually uses both versions of the ansible playbook step. The first step uses the 1.0 version, and the second step in the pipeline uses the latest version. The only difference between these two versions is one extra echo that I’ve added to the latest version. So if you look at the logs, you see that the step execution just runs an Ansible playbook, and for this demo it just says hello world. It doesn’t do very much; it just runs the playbook, exits, and says yes, I successfully ran the playbook.

So this is the 1.0 version of that step. Now, if I look at the second step in this pipeline, this is the latest version, and if I look at its step execution you’ll find an extra line, which is the echo statement that I added. So as you can see, you can use different versions of the same extension side by side within your same pipeline, and this is extremely powerful for anyone who’s ever dealt with plugin conflicts or those kinds of issues. So with extensions and templates, what we are really trying to do is give you these prepackaged building blocks that help you assemble your pipelines. In future quarters, we are going to continue building more and more extensions and more and more native steps, and enrich this library, so that eventually you can achieve this utopia of completely assembled pipelines. That’s all I had for today. Thank you.

Thank you, Manisha. So we have a short slot for questions. A question from [Miankit 00:55:32] about support for CDN distribution beyond CloudFront: I can say we do plan to do it, but we don’t have immediate plans finalized right now; this is something that we are considering. A question about-

A question from, sorry, I’m going to take it again. A question from Peter about MFA: is it possible to enable MFA just for specific groups? The answer currently is no. MFA as a feature is enabled either for the entire platform or for none of the users at all.

There’s another question from Peter regarding the JFrog version where MFA is supported. I believe it’s 7.6; it might be a little bit later, and you can check in the release notes. I think it’s 7.6, but one important thing to note is that it’s currently available for cloud only, and it will be available for everyone within one of the next versions of the JFrog platform. He also asks about support for [inaudible 00:56:42], basically YubiKey and others. The answer is currently no; the only additional authentication method supported is TOTP. We are considering emails, and we are considering hardware keys as well. It’s probably going to be based on requirements, so I would urge you to open a JIRA or a Salesforce ticket with a requirement for the specific key that you need, and we’ll consider it as we go along.

Okay. I think we’re done for today.

 
