DEMO – Multi Cloud Security Software Supply Chain

Victor Szalvay (Outbound Product Manager, Google Cloud), Mitali Bisht (Software Engineer, JFrog), Shankar Hariharan (Senior Product Manager, JFrog)

Google and JFrog will demonstrate how to securely accelerate deployment of Docker container images in a hybrid configuration spanning on-premises and public cloud.

In this demo you will learn how to initiate builds with Google Cloud Build, manage binaries with JFrog Artifactory, and scan for security vulnerabilities and license compliance with JFrog Xray.

The resulting containers are then deployed through test, staging and production using Google Cloud Deploy.


Video Transcript

Good afternoon, everyone. My name is Shankar Hariharan and I am a senior product manager at JFrog. I lead the product partnerships group here. I have both Mitali from JFrog and Victor from Google with me today. Mitali and Victor, would you like to introduce yourselves?

Hi, everyone. This is Mitali Bisht. I’m a software engineer at JFrog, working on the partner engineering team and on different community-related projects. Thank you.

Hi everybody. My name is Victor Szalvay. I’m an outbound product manager. I work at Google Cloud, specifically in our DevOps product suite.

Thank you, Victor and Mitali. Firstly, a very warm welcome to everyone attending this session at DevOps Cloud Days. We are really excited to have you here. Today we are going to start by talking about software security for DevOps with JFrog and Google Cloud Platform.

We will also do a demo about Artifactory and Xray, which are our flagship products in the JFrog Platform. To those of you who are new and do not know about these products, Artifactory is our binary repository management solution and Xray is our universal software composition analysis solution. We will also review them in a bit more detail.

A little background about JFrog: we were founded in 2008, and today we have over 1,000 employees globally across 11 offices. We are truly a hybrid company, with both SaaS and self-hosted offerings across multiple clouds.

We are trusted by more than 6,000 customers across different verticals. As you can see, JFrog powers the majority of the Fortune 100, including software updates at companies such as Netflix, HBO, Google, Twitter, and VMware, making it easy for them to manage and deliver their software at a speed that supports world-class services.

Now, I’m sure all of you have seen this infinity loop of DevOps before. At JFrog we call ourselves a liquid software company. We are an end-to-end DevSecOps platform that powers all of these software updates. We have really built a platform that lets you go from code to cloud.

Now, let us take a moment and look at how the JFrog Platform enables freedom of choice for DevOps. If you are a software company and you have a number of teams within your organization, your developers are building applications, working on different technology stacks, let’s say in Java, in Node.js, Docker, et cetera.

Now you can clearly see that there is a need to optimize and standardize all of your requirements across the DevSecOps lifecycle. What are these requirements? They are really around how you store your artifacts and dependencies, how you secure your binaries, how you distribute your binaries, and how you orchestrate your pipelines, automation, and more.

When you talk about all of these different things, you really need consistency across your DevSecOps lifecycle. In addition to consistency, you also need a unified platform that gives you a single pane of glass across the DevSecOps lifecycle. This is where the JFrog Platform really shines. As you can see here, the JFrog Platform is an end-to-end DevSecOps platform built to scale, radically universal, and providing continuous security with a rich, integrated partner ecosystem.

Now, before we jump into the reference architecture and the demo, for all of you who are new here, let me introduce JFrog Artifactory and JFrog Xray. JFrog Artifactory is considered the single source of truth for all of your binaries and dependencies. It is the core of the JFrog Platform, and it takes an open approach, giving you the freedom to integrate popular CI/CD and monitoring tools within your ecosystem.

In addition, it is also used to proxy remote repositories. It works with virtually all of the CI tools on the market, and we have integrations with almost all of them. You use Artifactory to manage the built artifacts for your software releases across your company, your business unit, or your global organization. Artifactory is a leader when it comes to binary repository management solutions.

Artifactory is universal. It integrates with 27 different package managers. It provides a full system of record, which means it gives you metadata for every supported package format. It also provides checksum-based storage, which essentially means that if the same binary appears in different repositories, it is stored only once, so your storage is optimized. It provides a rich set of automation tools, primarily through REST APIs, the CLI, and plugins. As I said earlier, it has a rich set of integrations with CI/CD automation tools.

Once you have these artifacts in Artifactory, you also want to understand the security and compliance risks associated with them and how you can mitigate them. As you know, 80 to 90% of the software written today is really open-source software, and that exposes you to security vulnerabilities, license compliance issues, and performance issues.

This slide is really eye-opening. As you can see, in the last year there has been an increase in the number of supply chain attacks, up by 650%. Before we look at how this is impacting security at organizations, let’s define what a software supply chain attack is. It’s basically a technique in which a hacker slips a malicious code or component into a trusted piece of software or hardware.

A timeline of events is listed below, but there are many more events that have contributed to software supply chain attacks in the recent past. Some of the recent events include the Equifax breach in 2017, where a vulnerable Java dependency, Apache Struts, caused a data leak affecting about 140 million people.

Similarly, in 2008 Cisco was sued by the Free Software Foundation because they used GPL-licensed code in some of their products, and they suffered a lot of damage because of that. The key questions this leads us to are: which software is affected by vulnerabilities, am I using it, and how do I fix it without impacting developer speed? This is precisely what Xray helps you answer. It is an automated software security solution for your entire software lifecycle, from code creation to release, distribution, and production.

It has deep native integration with Artifactory and provides a single pane of glass view. It also helps you drill down into all the dependencies of each build and package. If you’re using Docker, it recursively and deeply scans the entire Docker image as well as its layers. It also provides comprehensive vulnerability intelligence, drawing on multiple vulnerability databases, including VulnDB, which is used in our Xray product. Having given this introduction to Artifactory and Xray, I think it’s a good time to switch over to Victor, who is going to talk about the JFrog and Google Cloud DevOps reference architecture, followed by a demo by Mitali. Victor, over to you.

Hey, thanks, Shankar. As Shankar pointed out, one of the really exciting things is the optionality you have when it comes to working with the components you like within your DevOps toolchain, both within the JFrog landscape and within Google Cloud. Today we’d like to walk you through a demonstration of a hybrid use case where we are using Google Cloud components alongside JFrog Artifactory and Xray in one workflow. Starting in the bottom left, you’ll notice that we kick things off with a developer doing a PR or a commit. This Git-based action triggers a build in a product called Cloud Build. Cloud Build is a flexible DevOps automation platform that lets you run steps in discrete containers. The first set of steps will basically generate an npm build. That build will be uploaded to JFrog Artifactory.

From there it will also be scanned with Xray. We’ll have some vulnerability results that we’ll look at right there within the build process. Next, Cloud Build will continue and containerize the application we’re building. Mitali will walk through the application in more detail in a minute and show you exactly what we’re building; it’s a sample application for the purposes of this demo. Once it’s containerized, we’re also going to move it up to Artifactory to store in a Docker repository. At that point, we’re going to do a secondary scan to make sure that the underlying OS and the container itself do not have vulnerabilities that have been introduced, so we’ll see two different scans. The next thing we’ll do is trigger a release. The release, once triggered, will flow into Cloud Deploy. Google Cloud Deploy is a new product.

You might have heard of Cloud Build; Cloud Deploy is a new product. Basically, it is a fully managed continuous delivery platform for Google Kubernetes Engine, or GKE. What this product lets you do is define a release candidate and progress that release candidate through any number of environments on its way to production. You can also insert approval steps along the way. In this case, we’ll have three different environments we’re going to progress through, which you see at the top there: test, staging, and prod. We’ll be moving through those environments on the way to production. In that last step, as we approach production, we’ll have the opportunity to see how it works with an approval gate, so that particular step requires an approval. We’ll see that inside as well. I think now I’m going to hand it over to Mitali, who’s going to walk us through the actual demo, and we’ll start with some information first. Thanks, Mitali.

Thank you, Victor. In this demo, I’m going to show a Cloud Build file, cloudbuild.yaml, which is used to build the app, containerize it, and push it to Artifactory. In that cloudbuild.yaml, we are using JFrog CLI. JFrog CLI is a very powerful tool for working with Artifactory and Xray because it has all of these built-in commands. The commands we’ll use are: first, configure the Artifactory server; then jfrog rt npm install to resolve the dependencies of the app; then npm publish to pack and deploy the app; and at the end, build publish, where we push the build-info to Artifactory, which can also be used by various CI servers such as Jenkins. Finally, we do a Docker build of the app to containerize it and push that Docker image to Artifactory. For that we need two sets of repositories in Artifactory: Docker repositories for the image, and npm repositories for the dependencies and the app itself.
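
To make those CLI steps concrete, here is a minimal sketch of what the npm portion of such a cloudbuild.yaml could look like. The server ID (clouddays), repository key (clouddays-npm), build name, Artifactory URL placeholder, and the custom JFrog CLI image are illustrative assumptions rather than the demo's actual values, and the commands follow the older "jfrog rt" CLI syntax.

```yaml
# Hypothetical npm portion of cloudbuild.yaml (names, URLs and repo keys are placeholders)
steps:
  # Resolve dependencies through Artifactory, publish the package, and push build-info.
  # ARTIFACTORY_TOKEN comes from Secret Manager via the availableSecrets block shown later.
  - id: npm-build-and-publish
    name: gcr.io/$PROJECT_ID/jfrog-cli-npm   # assumed custom image bundling node and JFrog CLI
    dir: app                                 # run inside the app's source directory
    entrypoint: bash
    secretEnv: ['ARTIFACTORY_TOKEN']
    args:
      - -c
      - |
        jfrog rt config clouddays --url=https://<your-instance>.jfrog.io/artifactory \
          --access-token=$$ARTIFACTORY_TOKEN --interactive=false
        jfrog rt npm-install clouddays-npm --build-name=clouddays-build --build-number=$BUILD_ID
        jfrog rt npm-publish clouddays-npm --build-name=clouddays-build --build-number=$BUILD_ID
        jfrog rt build-publish clouddays-build $BUILD_ID
```

The virtual repository (clouddays-npm in this sketch) is used for both resolution and deployment, so third-party dependencies and the published package flow through a single URL.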

Also, for Xray we need watches and policies to be configured. Let me go back to our JFrog Platform. This is a SaaS platform; this is called a JPD, a JFrog Platform Deployment. Here you can see all the JFrog products in one place: Artifactory, Distribution, Pipelines, and Xray. For the demo, I have already created repositories for Docker and npm. For npm we have three repositories: local, remote, and virtual.

If I search for Cloud Days, I have a Cloud Days local repository here. A local repository is simply one that is local to your instance; the app itself is deployed there. The Cloud Days Docker local repository will hold the Docker image of your app. Remote repositories are very helpful when you want to proxy a remote URL; I have set up a Cloud Days remote repo. Finally, we have a virtual repo. You can think of a virtual repo as a group repo, which is used to group multiple local and remote repositories. If I open my virtual repo, I can see that it contains Cloud Days local and Cloud Days remote. Now that I have my repositories set up, Xray needs to know that these repositories should be watched for vulnerabilities.
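
Before moving on to Xray, here is a rough picture of that repository layout; the repository keys are assumed for illustration and will differ from the exact names used in the demo.

```yaml
# Illustrative repository layout for this demo (keys are assumed)
npm:
  clouddays-npm-local:  { type: local }     # the workshop app package is deployed here
  clouddays-npm-remote: { type: remote }    # proxies an upstream registry for third-party dependencies
  clouddays-npm:                            # virtual repo grouping the two above behind one URL
    type: virtual
    repositories: [clouddays-npm-local, clouddays-npm-remote]
docker:
  clouddays-docker: { type: local }         # the app's Docker image is pushed here
```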

For that, we can index them for Xray. They are already being indexed: if you look, Cloud Days Docker, Cloud Days local, and [inaudible 00:15:56] are indexed so Xray can watch them for vulnerabilities. To define which vulnerabilities you want to see, whether high, medium, or low, and what action you want to take, we have to set up policies. Policies are basically a set of rules you define that describe what kind of vulnerabilities or severity levels you are looking for. Here I have set up a Cloud Days high policy. Policies do nothing on their own until they are applied to something; watches are where we make use of those policies. In the Cloud Days watch I have set up, I’m using the Cloud Days high policy.

Also, if you look at the resources here, I’m using my build, which is the Cloud Build build, and scanning that build. As Victor mentioned, we have two scans here: our build gets scanned, and our repository, the Docker repo, gets scanned as well. If you want to take an action, say fail the build when a violation of a given severity is found, or block downloads of the affected artifact, you can define all of that here through your policies and watches.
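
Summarizing that setup as a sketch: the outline below is an illustrative YAML view of the policy and watch described above, not an actual Xray API payload, and the names and field spellings are assumptions.

```yaml
# Illustrative outline of the Xray policy and watch used in the demo
policy:
  name: clouddays-high
  type: security
  rules:
    - name: high-severity-and-above
      criteria:
        min_severity: High        # flag High and Critical findings
      actions:
        fail_build: false         # set true to break the Cloud Build run on a violation
        block_download: false     # set true to block downloads of violating artifacts

watch:
  name: clouddays-watch
  resources:
    - type: repository
      name: clouddays-docker      # the Docker repo holding the app image (indexed for Xray)
    - type: build
      name: clouddays-build       # the build produced by Cloud Build
  assigned_policies:
    - clouddays-high
```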

Now, going back to the app, we have a very simple app: a one-page npm app that shows the Google Cloud and JFrog logos. In it I have a cloudbuild.yaml and a clouddeploy.yaml. In cloudbuild.yaml, as you can see, we have different steps to configure Artifactory. Here I’m using an image from GCR, Google Container Registry; we have pushed a JFrog CLI npm Docker image there, and each step spins up its own Docker container to run. So it’s important to define the directory that we want to work in. Here we’re using the JFrog CLI Docker image to configure Artifactory and run all of the steps I have defined. One more thing to note: for security purposes, all of these secrets are kept in Secret Manager in Google Cloud. To access Artifactory we need an access token, and that access token is defined in Secret Manager and referenced here. If I make any changes in this file and do a git push, a trigger fires in Cloud Build.
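
Continuing the same hypothetical cloudbuild.yaml, the containerization step and the Secret Manager wiring might look roughly like this; the secret name, registry host, and image path are assumptions, and depending on your registry setup an explicit docker login may also be required.

```yaml
  # Containerize the app, push the image to the Artifactory Docker repo,
  # and record it in the same build-info. Runs after the npm step above.
  - id: docker-build-and-push
    name: gcr.io/$PROJECT_ID/jfrog-cli-npm      # assumed image with both the docker client and JFrog CLI
    dir: app
    entrypoint: bash
    secretEnv: ['ARTIFACTORY_TOKEN']
    args:
      - -c
      - |
        # each Cloud Build step runs in its own container, so configure the CLI again
        jfrog rt config clouddays --url=https://<your-instance>.jfrog.io/artifactory \
          --access-token=$$ARTIFACTORY_TOKEN --interactive=false
        docker build -t <your-instance>.jfrog.io/clouddays-docker/npm-app:$SHORT_SHA .
        jfrog rt docker-push <your-instance>.jfrog.io/clouddays-docker/npm-app:$SHORT_SHA \
          clouddays-docker --build-name=clouddays-build --build-number=$BUILD_ID
        jfrog rt build-publish clouddays-build $BUILD_ID

# Pull the Artifactory access token from Secret Manager rather than hard-coding it
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/artifactory-access-token/versions/latest
      env: ARTIFACTORY_TOKEN
```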

That trigger has run in Cloud Build and started spinning up all of those steps. If I go to my dashboard, I have one recent run, and the last run shows all of the steps it executed in Cloud Build. Once this Cloud Build run is done, it will have created an npm build in our Artifactory and also a Docker image of this app. Let’s go back to our JFrog Platform to see the results of the build. First we go to Packages and look for the workshop app, the npm package we built here. Here is the workshop app npm package. We can see the first version, which has been deployed. If I click on this version, I can see the README file, which was pulled from GitHub, and the builds associated with it; here it is build number two.

We can see the repositories associated with it: we created Cloud Days local, which is associated with this build, and we can see the Xray data pertaining to it. We have one license violation here, so Xray shows both vulnerabilities and license violations. We can see the details here and what can be done to remediate it. If you go to the descendants view, it shows you a nice view of your dependencies. Now let’s go and look at the Docker image. The Docker image that we pushed is called npm app.

To do that, we go back to Packages and open the npm app. This is the latest image, pushed today by the Cloud Build run. If you look at the associated repository, it’s the Cloud Days Docker repository that we created. Drilling further down into that version of the Docker image, we can again see the repositories, and going to its Xray data we can see all the vulnerabilities associated with this particular image. We find 13 vulnerabilities here, and if I choose any of them, I get a very nice view of the data. It shows you the severity and the CVSS score, and, most importantly, it shows you the fixed version. Most vulnerabilities are fixed by updating the component; here it’s saying that 2.2.0.5-r4 is the safe version.

Right now it’s using a version lower than that. Further down in the summary, you can see what this vulnerability is about and which versions are vulnerable. Shankar mentioned that when Xray scans your image or your software, it does deep recursive scanning. What that means is it scans each layer of your Docker image, and each and every JAR or other component of your software. The impact path shows you exactly where this vulnerability resides, in which particular dependency. It also provides references to learn more about the vulnerability. I want to highlight one more thing here, marked with a [green flower 00:22:55] icon: any CVE marked like this has further, richer information about the severity.

At JFrog, we have our own research team working day to day on analyzing each and every severity, providing a good remediation path, and assessing whether issues really are critical, high, or medium. Here the severity provided by the outside database is critical, but when the JFrog team analyzed it manually, they found that it is medium, so we can prioritize when we want to fix it. You might also want to look at the JFrog research severity reasons, which explain the reasoning behind the severity and its impact path.

The source advisory again gives you versions, descriptions, and impact paths. Now that we have seen these vulnerabilities, if you fix any of the versions and push a new Docker image, you can scan for violations again, and you can even assign custom issues from there. You can also export all of this vulnerability data to an Excel file from here, which gives you quite a handy set of tools to work with. Last but not least, we want to look at the npm build that we created. If I search for the npm build, I can see that with this recent Cloud Build run I created build number two. I click on build number two, and I can see the workshop npm app residing here. It shows me the number of artifacts it has and the number of dependencies it has. Alongside, it shows the Xray data for this build too, because the build was also scanned for vulnerabilities, and it shows all security and license violations.

It shows whatever licenses it has discovered, including unknown ones. At the end it gives you the build-info JSON, which we can use for continuous integration and delivery. That’s it for all the components that Artifactory and Xray show. Once this build is done, it creates a release in Cloud Deploy, and for Cloud Deploy I’m handing over to Victor to explain more about it.
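
One way that hand-off can be implemented, sketched here as a final Cloud Build step, is to have Cloud Build call gcloud to create the Cloud Deploy release; the pipeline name, region, and image path below are assumptions for illustration.

```yaml
  # Hypothetical last step of cloudbuild.yaml: register the freshly built image
  # with Cloud Deploy by creating a release on the delivery pipeline.
  - id: create-release
    name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - deploy
      - releases
      - create
      - 'rel-$SHORT_SHA'
      - '--delivery-pipeline=clouddays-pipeline'
      - '--region=us-central1'
      - '--images=npm-app=<your-instance>.jfrog.io/clouddays-docker/npm-app:$SHORT_SHA'
```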

All right. Thanks, Mitali. Cloud Deploy, as I mentioned, is a new product that focuses on continuous delivery for GKE. The first thing we’ll do is take a look at how a Cloud Deploy pipeline and release are configured. This is the pipeline configuration. At the top, you can see that this is a KRM-style format, so this is a declarative way to approach continuous delivery. Around lines 7 through 10 you can see the various stages we’ve defined. Those correspond to the test, staging, and prod environments we talked about earlier. The targets are further defined underneath. For example, starting at line 13 you can see the test target being defined. You can give it a description, but more critically, on lines 18 and 19 you define the underlying Google Kubernetes Engine cluster to which that target corresponds.

You can see there that the project, the region or location where that cluster is running, and the name of the cluster are all defined. We’ll see these clusters in GKE in a moment, but this just provides the mapping. Further down, you’ll see the staging target defining the staging cluster, and the last lines at the bottom define the production cluster. Now let me call your attention to line 36, where we can define an approval requirement in a declarative fashion. In this case, by setting required approval to true, we’re saying that for that particular target there must be a manual approval gate prior to actuating out to that environment.
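
For reference, a minimal version of such a clouddeploy.yaml might look like the following; the project, region, and cluster names are placeholders, and the demo's actual file will differ in its details.

```yaml
# Minimal Cloud Deploy pipeline and targets (project/region/cluster names are placeholders)
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: clouddays-pipeline
description: test -> staging -> prod delivery pipeline
serialPipeline:
  stages:
    - targetId: test
    - targetId: staging
    - targetId: prod
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: test
description: test environment
gke:
  cluster: projects/my-project/locations/us-central1/clusters/test-cluster
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: staging
description: staging environment
gke:
  cluster: projects/my-project/locations/us-central1/clusters/staging-cluster
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: prod
description: production environment
requireApproval: true   # manual approval gate before actuating to this target
gke:
  cluster: projects/my-project/locations/us-central1/clusters/prod-cluster
```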

All right, let’s go take a look at Cloud Deploy and how this works. Within Cloud Deploy you’ll see those three environments, test, staging, and prod, laid out in a pipeline view. You’ll also notice the zero pending indicator right between staging and prod. If we had pending approvals for the production release, we’d see them queued up there, but right now we don’t have any. If we want to look at the release details, we can click on this. A release in Cloud Deploy is basically the summation of the hydrated configuration and references to the artifacts, the images, that have been defined for that particular release. In this particular case, you can see that we have some annotations, for example, the commit message and the commit SHA associated with this particular release. We can click into the artifacts tab; this is where we see the internals of what goes on within a release. You can see the build artifact there; that is the image Mitali pushed to the Docker repo.

You can also see the render source section; this contains the pre-rendered Kubernetes configuration, the pre-rendered manifests. Down below, in the very bottom section, the target artifacts contain the various rendered manifests along with other information like pipeline details, so that we can recreate those releases as needed. You’ll notice that test, staging, and prod are all represented there, and that’s because upon release creation we snapshot and render all of the environment manifests at the same time; we render everything, snapshot it, and store it.
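
That rendering is driven by Skaffold, which Cloud Deploy invokes at release creation; a minimal skaffold.yaml for a setup like this one might be no more than the following, with the manifest path being an assumed placeholder.

```yaml
# Minimal skaffold.yaml consumed by Cloud Deploy for rendering (manifest path assumed)
apiVersion: skaffold/v2beta16
kind: Config
deploy:
  kubectl:
    manifests:
      - kubernetes/*.yaml   # raw manifests rendered per target when the release is created
```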

Okay, so let’s go back to the pipeline and take a look. Now, of course we have the CLI and other API-style approaches to this, but within the UI you can promote. We did that release 27 minutes ago, and we’re now going to promote it. Of course, we want to know what we’re promoting: what was there and what is going to replace it. I can go into the manifest diff, take a look, and see that the main thing that’s changed here is my image, and if I’m satisfied, I can click promote below. So we’ll go ahead and do that.

You can see that the rollout has started. Once it’s fully actuated, we’ll see an update here indicating whether it succeeded or failed. Now, if we click view latest rollout, we’ll be able to see that there are logs as well. The rendering already took place as part of the release creation, but we can also look at the actual rollout logs. Now we’re going to go take a look at that and see that it has been done successfully. You can see the actuation was successful, and we’ll go back and confirm that within the main UI. Okay, so the actuation has taken place. The next step would be to promote it into production. We’re going to go through the same type of process: we’ll do a promotion, we can compare what was there and what’s coming, and we can also diff the manifests again to take a look. If we’re satisfied with everything and it all looks good, we can go ahead and click promote.

In this case, the interesting thing is that we have an approval gate set up, so this isn’t going to actuate automatically; we first need to manually approve it. In this particular instance I’m a super user, but you can imagine that there’s an operations team, and that team has special permission to do these approvals. Going into the approval queue, I can see that I’ve got an approval queued up. You can again see similar information: what’s there now and what I’m about to approve. I can also diff the manifests again. At the bottom, it gives me the opportunity to either approve or reject the rollout. I’m going to go ahead and approve it. Now, again, it’s letting me know that I’ve got nothing queued up in my approval queue, and I’m back at the main pipeline view, watching the actuation take place. That’s Cloud Deploy. Hopefully it gives you a sense of how we would do continuous delivery out to GKE in a safe and controlled fashion. Back to you, Shankar.

Thanks, Victor. Hopefully this was exciting for you. We saw how Artifactory can be used as the single source of truth for all of your binaries and dependencies, and how Xray lets you quickly see your software security issues, prioritize them, and fix them efficiently. This gives security teams peace of mind and developers the ability to ship software faster. If you’re excited and want to experience the power of JFrog and Google Cloud, you can now get a 30-day free trial of JFrog Cloud Pro Team on Google Cloud. Follow the instructions: go to the Google Cloud Marketplace at the given link, select JFrog Cloud Pro Team, and hit Subscribe. You should be up and running with the JFrog Platform. We hope all of you get to experience this. Thanks for attending this session. We will now open it up for Q&A. Thank you.

 
