How Google Cloud and JFrog create a secure software supply chain

Richard Seroter
Director of Outbound Product Management, Google Cloud
Google Cloud

Everyone’s talking about secure software supply chains.

How do you verify and manage dependencies? Build applications safely with known base images? Ensure that only trusted images get deployed to production?

Google Cloud is at the forefront of this movement, and is offering the tools, services, and integrations that make it easier to do the right thing.

 


 

Video Transcript

Hi, everybody. My name’s Richard Seroter, and I like to run as root and I take most of my Dockerfiles out of other people’s GitHub repos. It felt good to get that off my chest. You might say that seems like a terrible person to do a keynote about security, and Google Cloud, and JFrog. I don’t know. Maybe you’re right, but here we are.

Honestly, I’ve learned a lot over the years, even in the year and a half since I joined Google, about what a good secure software supply chain looks like. Where are the places where it’s not terrible for developers like me, you, and others? How can we actually do security where it’s not so hard? Maybe we make it easier to do the right thing. We’re going to talk today about why we need a secure software supply chain. What is a secure software supply chain? What are Google and JFrog doing as part of that path to production, to make it safe to go fast and be secure at the same time?

Stay tuned. Here we go.

All right. As I said, my name’s Richard Seroter. I’m a director of outbound product management here at Google Cloud. I cover things like containers, serverless, DevTools, and now, as part of this, even our secure software supply chain work. I’m excited to be here with you today, telling you about how Google and JFrog work together to help make it easy to do good stuff.

Let’s get started. Now, of course, most things in life are some sort of composition, right? Whether I’m building a car, a house, whatever it’s going to be, it’s a combination of factors. Things come together and those pieces make something better. Software’s like that: I’ve got my code, however I’m writing it, whatever it looks like, some configuration, deployment specifications, scripts. A bunch of things make up that artifact, which then becomes part of our production deployment. Pretty standard stuff; we’ve all been doing this for a while. Maybe 10, 15 years ago you were just shipping zip files around. Now you’re doing containers. Whatever it is, how we’re packaging and how we’re dealing with artifacts is really important in this secure world.

Now what’s tricky here, and where we’re going to spend a lot of our time today, is that it really just takes one insecure link in that whole software supply chain to cause chaos. As we think about it, software isn’t just “I write code and it’s in production”; you and I know that. For some of you, it may be a whole set of heroics to get it to prod. Others of you maybe can do it hourly. But you still recognize there’s a set of steps, going through build steps, going through packaging steps, going through others, to make sure that I can actually ship that code. At any one of those places, if something goes wrong, I have trouble. I want to click on that a little bit. I want to look at where the attack vectors are. When you and I are trying to get our code to production, where are the places where something can go wrong? Let’s look at these spots.

First off, I could have just bad code submitted. Look at a recent example where researchers were intentionally trying, for research purposes, to introduce vulnerabilities into the Linux kernel. They got stopped later on, but the fact that they were able to get pretty far is interesting. There are a lot of ways that bad code can get checked into source control in the first place. That’s a first risk, and I think we’re all familiar with it.

Then you can get the actual source control system compromised. We’ve seen this recently, where attackers compromised Git servers for different package managers and the like, and added commits. All right. So all of a sudden I think I’m safe, but actually the place I’m checking in my source code is now adding stuff to my code behind the scenes. That’s no good.

There are other places; even the build pipeline could get altered. Right? There could be outside-in changes to that build pipeline. Again, I don’t really see it, I’m just checking in my code, other things happen, and now something bad has happened there. Hey, look, I could also have the build system itself be compromised. SolarWinds is a great example, where a bad actor gets some malicious code added to every build, and it then gets shipped out to everybody as a result. That seems like the worst-case scenario: somebody with a kind of static build environment, an attacker able to compromise it, and nothing really sweeping for or detecting that. That’s a big one.

Dependencies, and we’ll talk about this more in a second, this is an underrated one. Look, if you and I are building anything in a modern language today, it’s a lot of dependencies. Right? If I’m a .NET developer, I’m all in on NuGet; I’m using npm as a JavaScript developer; I’m using Maven for Java. I’m doing all these different things. 80% of my app may not be code I’ve even written. It’s packages that do database interactions. I have things that call web services. I have things that do calculations. I have things that do all sorts of things. So what happens if I have a bad dependency in the first place? Or even a good dependency that I add to my application, but then an actor makes a change to it and introduces an intentional vulnerability later, and I’m just updating my packages? I might not have added it in the first place if I knew that was there, but now I’m not looking as closely. So that’s a big one. Dependencies are a tricky place to check.

Then of course, I might actually inject bad artifacts outside of the CI/CD process. We’ve seen that happen as well. So you have to be careful with that one.

You can also see the package manager itself get compromised, where there’s a replica, a mirror of the package manager, and instead I’m using that mirror, which has some bad packages included in it. That’s not uncommon; it can happen to you as well.

I can also have users tricked into using the wrong package. Oh, I thought I was injecting this HTTP module, but instead my dependency is an HTTP1 module, and that actually has a bunch of bad stuff in there. So just by look-alike names, look-alike provider names, I can add the wrong package. That’s a tricky one.
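To make that look-alike problem concrete, here is a minimal sketch of one way a pipeline could flag near-miss package names before they get added. The allow-list, threshold, and package names below are hypothetical, not from any real tool; real typosquat detection (as in some registry scanners) is considerably more sophisticated.

```python
# Minimal sketch of catching look-alike ("typosquat") dependency names by
# comparing each new dependency against an internal allow-list. The list,
# threshold, and names here are hypothetical placeholders.
from difflib import SequenceMatcher

TRUSTED = {"requests", "urllib3", "express"}  # hypothetical allow-list

def looks_like_typosquat(name, trusted=TRUSTED, threshold=0.85):
    """Flag names that aren't trusted but are nearly identical to a trusted one."""
    if name in trusted:
        return False  # exact match: fine
    return any(SequenceMatcher(None, name, t).ratio() >= threshold
               for t in trusted)
```

A gate like this only catches near-identical names; it is one cheap check among many, not a substitute for vetting the package itself.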

And then finally, just getting to production. I could have unprotected systems, I could have bad credentials, I could have a lot of things where even elevated credentials get me in trouble in prod.

So, all of these places. There are even more, there are other spots here, but these are some core ones we’ve identified, even as part of our Google process, to make sure we’re hardening and securing these environments. You’re going to face all of these too, so thinking about each one of them is important. And if we do talk about dependencies, right, that’s where trust starts to come into play. Look at the top 1,000 projects on GitHub. I think 70-some thousand is the median contributor count in some of these. So you might have 100,000 people touching your code in some way when you ship it, because you’re using all these open source dependencies with lots of contributors, which is awesome, but that’s a lot of hands on this code. And how can I really tell that that third- or fourth-order dependency is secure? All of a sudden, I can’t put my head in the sand on this one. I have to have more transparency and intelligence about what’s going on here. What are my dependencies’ dependencies? Are those safe? I can’t leave this to the InfoSec team. I can’t leave this to my production platform or SRE team, whatever. This is all of our responsibility. Right? Secure code is not something we outsource. It’s something we insource. It’s something we care about here.

So, a couple of things we have to think about. First, how do I rethink or reconsider how I trust the external supply chain? Numbers show up to 84% of commercial code bases have at least one active open source vulnerability. That’s not good. Right? This is how hacks happen, through these vulnerability channels, in many cases. The second thing we have to rethink and reconsider is, how do we do security that keeps up with velocity? Pretty uniformly, when we look at our data, and I’m sure in your own environment, we’re shipping more stuff to more places more often. Holy cow, that could be a nightmare if I’m injecting vulnerabilities. If I have more components in my code, it’s more distributed, more services, more things like that, I’m shipping it more often, and I’m dropping it to public clouds, private clouds, edge locations, different dev/test environments. With one vulnerability hidden in one place, my blast radius is enormous now.

So, how am I thinking about all this and not getting so terrified that I give up coding and go become a librarian? There’s days we all feel like that. Nope. How instead, do we say, I want to keep shipping amazing software, but I want to do it safely, I want to do it in a way that is not making it easy to inject vulnerabilities, and a way that even if something bad happens, I’ve got such a great process in place that I’m catching that, and closing it, and fixing it. It’s absolutely achievable.

We’re seeing this move beyond the realm of just tech people talking about stuff. We’re seeing executive orders here in the US government saying there’s an expectation now on the secure software supply chain: we expect this from our own suppliers of software, and we expect this from the industry. Now everyone’s on notice. Think about what it’s going to mean when every company, from banks to retailers to others, is putting attention on this. Our businesses run on software, and all of a sudden one vulnerability means a huge data leak. It could mean something where safety is actually at risk. This is serious stuff. So again, we don’t want to throttle innovation, but how can we put things in place that make it easy to do the right thing?

Things we’ve learned at Google: some ways to build really secure software. Can we establish trust? Can we verify trust? Always verify. A zero-trust model doesn’t just assume that because something came from this input, it must be cool. Nope. I’m going to establish it, I’m going to verify it, and I’ve got to maintain it. I’m not done once I ship it to prod and say, all right, everything’s good. We’re updating software constantly, and it’s running in a very dynamic environment that’s fluid. I have to maintain that chain of trust. Can we do all this without it being super painful? I really think we can. So let’s talk about that.

How does the tech start to come together to make this more possible? As you look at your process here, it doesn’t feel overly complicated, and it shouldn’t. We think about where my supply of code comes from. I’m coding it, I’m importing it, I’m doing a build. That’s where I start establishing trust. How do I trust what I brought in? How do I trust what I’m doing? How do I trust the build? Cool. Now it’s time to deploy it. How do I verify? How do I verify what happened earlier, and not just blindly take that content and deploy it to production? Nope. I’m going to do an extra verification. And once I’m in prod, how do I continually maintain that trust? Simple, but really important.

How do we use that as a framework to talk about the rest of this? Well, let’s think about that first part, establishing trust. Where did this stuff come from? What’s in my package? What’s going on there? So we built something pretty cool: Open Source Insights. It’s free; go to deps.dev right now. Well, not right now, enjoy this talk, but go check it out after. We built this service to help developers understand their open source dependencies. Here, you get a full graph of all the dependencies of the software, even the maintainers, and what depends on that software, which is pretty cool. We’re scanning millions of packages and we’re updating that info sometimes even hourly. So it’s a really good, up-to-date look at all sorts of packages. Today we support Cargo for Rust, Go modules, Maven for Java devs, npm for Node.js developers, and the Python Package Index for Python.

So you can go in and say, hey, I’m about to add this package, let me find out a little about it. We’ve got to flip this: instead of just bringing in whatever, ask what earns the right to be in your code. You should have some high expectations here. What gets the right to sit in my application code? Am I going to pull in this random library? Let me check it out first and see if it’s cool. Let me check what else it depends on. So it’s a pretty cool tool that increases visibility. We’ll continue to do more at Google to make this even easier to consume within your systems. So check this out. It’s a great way to start to establish some trust in your code.
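Open Source Insights also exposes its data over HTTP, so you can pull dependency info into your own tooling. The sketch below assumes a v3 endpoint shape of `api.deps.dev/v3/systems/{system}/packages/{name}`; treat that shape as an assumption and check the deps.dev documentation before relying on it.

```python
# Sketch of querying Open Source Insights (deps.dev) over HTTP. The v3
# endpoint shape below is an assumption; verify it against the deps.dev docs.
import json
import urllib.request

def package_url(system, name):
    """Build the deps.dev package URL for an ecosystem ("npm", "pypi", ...)."""
    return f"https://api.deps.dev/v3/systems/{system}/packages/{name}"

def fetch_package(system, name):
    """Fetch package metadata (known versions, etc.). Requires network access."""
    with urllib.request.urlopen(package_url(system, name)) as resp:
        return json.load(resp)

# Example (needs network): fetch_package("npm", "express")
```

A CI job could call something like this before merging a dependency bump, rather than only checking at review time.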

How do you check more stuff on these projects? It’s one thing to ask what the dependency graph is. That’s important, but now let me find out a little more. How much do I trust this package? So we’ve built a pretty cool project here, the Scorecards project. What this does is tell you whether an open source project’s dependencies are safe. These scorecards are automated, they check a number of things, and you can download and run this yourself against different packages; that Open Source Insights site actually uses it. And we call out all of the tests. There are no secrets here. We’ll show you; it’s a half dozen to a dozen different tests we run.

They check all kinds of things. Does this have checked-in binaries? That’s a little bit of a red flag; you shouldn’t be checking in binaries. Those should come out of a build process, not sit in the source code. Does this project have CI tests? Does it require code reviews on pull requests? Are the contributors from at least two different orgs, so you don’t accidentally get a myopic viewpoint of security and instead get some diverse viewpoints? Does it use fuzzing tools, injecting garbage just to see what breaks and whether somebody can hack into something? Does it use static code analysis tools?

So it’s a great way to at least get, again, a spot check on some of your systems. You might run this in your CI pipeline, just to make sure; maybe I don’t even let this thing build if it has a score I don’t like. It starts to give you more of this power of establishing trust. You’re not just a victim or a hostage of whatever packages you pull in. You’re making sure these packages have earned the right to be in your code.
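That “don’t even let it build below a certain score” idea can be sketched as a small CI gate over the Scorecard JSON report. The field names below (`score`, `checks`, `name`) reflect my understanding of the output of something like `scorecard --repo=github.com/org/repo --format json`; verify them against the Scorecard docs for your version, and treat the threshold and required check as placeholders.

```python
# Sketch of a CI gate over OpenSSF Scorecard JSON output. Field names are
# my understanding of the tool's output; threshold and checks are placeholders.

def passes_gate(report, min_aggregate=7.0, required_checks=("Code-Review",)):
    """Return False if the aggregate score or any required check is too low."""
    if report.get("score", 0) < min_aggregate:
        return False
    by_name = {c.get("name"): c.get("score", 0) for c in report.get("checks", [])}
    return all(by_name.get(name, 0) > 0 for name in required_checks)

# In a pipeline you'd load the JSON report and fail the build on False, e.g.:
# import json, sys
# sys.exit(0 if passes_gate(json.load(open("scorecard.json"))) else 1)
```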

So, let’s look at that process. As we get started, we’re establishing trust. It starts on my desktop. You and I probably spend 80% of our time building an app and writing code on our desktop, or a cloud IDE, or whatever, testing it, doing whatever. And then we start to think about how we add it to a chain of trust. Cloud Build, and we’ll talk about it in a second, does some cool stuff here. You might be using Artifactory and its tools as well. You might be using JFrog Pipelines. You might be using Xray. We’ll talk about all those things. But this is where you start to establish trust.

One of the things I think is kind of important, and maybe a little underrated, especially now in this container world: where’s my container source? Where’s my base image coming from? Like I mentioned earlier, I’m terrible at Dockerfiles; it’s not a good idea for me to write them. Where are you pulling your base image from? Is it from a vendor? Is it from Docker Hub? Is it from wherever? How about you don’t worry about it at all?

So, you may have heard of Cloud Native Buildpacks. This is a CNCF project. Buildpacks came historically from Heroku and Cloud Foundry, some places I’ve been part of. What this does, basically, in a nutshell, is take source code, determine the right stack for that programming language and framework, and then build up the stack, including a secure base image. And so what comes out of it is a container image that’s based on a hardened operating system and the right kind of locked-down stack, provided by the buildpack.

So the Google Cloud buildpacks are using our slimmed-down OS. They’re using the right sort of hardened configuration, building up a nice tight stack into a small container image, and I can kick it off from Cloud Build with just a single command if I want to. I can do this from my desktop, gcloud builds submit, and just get a container image that I can run anywhere. I can take that image and deploy it to Cloud Run, GKE, GKE anywhere with Anthos, or run it on Compute Engine. What a cool way to say, again, this isn’t hard; frankly, it’s easier than doing it myself. How about I just take my source code, run it through Cloud Native Buildpacks, get a container image, and never deal with containerization, base images, or the ordering and sequencing of a Dockerfile. I don’t want any of that, but I do want a container, so now I can deploy it anywhere. It’s a great way to establish trust early on, because you’re using a trusted toolchain and a trusted OS stack from a company like Google.

And you’re using this anyway. If you’re using things like Google App Engine or Google Cloud Functions, we’re using buildpacks underneath there to generate the image that actually gets deployed. So it’s already there, which is pretty cool. And again, if you’re using Cloud Build, you’ve got that. Even if you’re using the JFrog suite, you might use this toolchain to generate your container image and then drop it in Artifactory; rock on, however you want to do it. But look at Cloud Native Buildpacks: really cheap, free, easy to use.

So, here we are. We’ve got some code. We’ve done some of our build. We’ve started to generate some attestations, which I’ll talk about in a minute; we do that by default with Cloud Build, which adds the attestation so you can prove that’s what built the container image. And you start to hit what we’re calling SLSA Level 1. What the heck is SLSA, besides a delicious condiment for your chips? Let’s talk about SLSA. It stands for Supply-chain Levels for Software Artifacts. These are practices we’ve learned at Google for how you secure the software supply chain. It’s a really good framework. It’s actually now getting referred to in some of the follow-ups to that executive order, which is great. I’d love to see this become a standard. It’s based on some best practices, and it’s great for framing this out. Like, what do we need to do to have a secure software supply chain? Again, it’s not InfoSec’s responsibility, it’s not some production ops team’s; it’s all of us. All of us can take on some ownership here.

So, look at level one, basic protection in SLSA. Here, if you look at the table: scripted builds, and I have some provenance, so I can prove what built the thing. All right, it’s not bad; that gets me somewhere. Level two, medium protection. Sounds good. This gives me some further checks. Am I using a build service? Do I have service-generated provenance, nothing manual? Do I have a great version control story? Sounds good. Then I get to advanced protection at level three. Now I’m adding new things, like retained source control with a verified history, and additional isolation in my build. Again, say I want to avoid a compromised build server, where someone can smuggle in malicious code because that build server never seems to get restarted and everything is very stateful. That can’t happen if I’m using an ephemeral containerized environment, where every build happens in a container that then gets blown away. Now there’s nothing left behind for the next build. It’s always fresh. It’s always new. So are we starting to do more of those things?

And then you hit level four, maximum protection. Here’s where you have the full suite of things you’re able to do. And maybe everything doesn’t deserve that, maybe everything doesn’t need it. I don’t know. Maybe we’ll come up with level five for extreme protection, but for now level four is the furthest one. And these are a lot of activities you can go through and say: as a dev team, we’ll take on some of this; our production ops team, our security team, our platform teams take on the rest. Awesome. This gives you a common language for really thinking about how you secure the supply chain. I think that’s pretty cool.

If you look at Cloud Build, I mentioned Cloud Build is a great toolchain. It’s our serverless build service that just runs in Google Cloud. There’s no infrastructure under it that you deal with. It’s all ephemeral, everything builds in containers, and you kind of pay by the minute. We’ve continued to make some cool updates here. There are Private Pools now, which give you more isolation. This actually runs in a more sandboxed setting, away from everybody else, and you can turn that on today, which is pretty cool. We’ve done some more stuff there with attestations, as I mentioned. So everything built in Cloud Build immediately gets, by default, an attestation attached in Artifact Registry. And then you’ve got that extra metadata.

In this whole process you might also be using JFrog to store your images, in Artifactory. Terrific. Awesome. But the key is: are you working through these steps? Is your build service establishing trust? And then are you verifying trust throughout the process?

Now you’re also going to be doing some vulnerability scanning here. This is where Xray’s awesome, and Artifact Registry does some stuff too, but clearly Xray’s doing some terrific work here, making sure I’m able to do some scanning and checking on that code. Because then I’m in my artifact repository, but I’m not done; I want to ship it. The whole value is to get this thing to production. And so in this case, how do I take that artifact and make sure I’m verifying it on the way to prod? This is where we have some pretty cool stuff called binary authorization.

Binary authorization is actually something we use internally at Google that’s turned into a service we now make available to everybody. This is a way to actually check those attestations and say, is this container allowed to get to this production service? It’s a really simple experience for the user. Behind the scenes, it’s a pretty powerful thing. So let’s talk about that.

The key reason you use binary authorization is to make sure that only the things you trust get to production. That’s it, in a nutshell. It’s doing some policy enforcement before anything gets deployed, whether that’s to GKE, Cloud Run, or Anthos environments as well, off-prem. So it’s doing signature verification, and it’s also checking what’s allow-listed: hey, where can this come from? Maybe we don’t allow images from Docker Hub. Maybe we don’t allow images that didn’t come through this build service. Terrific. You’re adding policies, and now nothing unapproved is allowed to get shipped. I’m doing a really great verification step right there before things hit production. So I’m making sure I can check: was it built by a pipeline I trust? Does it pass those checks? Did it go through vulnerability scanning? You’re able to add a really good check here, right at the end, making sure that before it gets to prod you’re doing another key verification step.
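To give a feel for how small that policy surface is, here is a sketch of a Binary Authorization policy file. The rule and mode names follow the documented policy format as I understand it; the project and attestor names are placeholders. You’d apply something like this with `gcloud container binauthz policy import policy.yaml`.

```yaml
# Hypothetical policy: block anything that lacks an attestation from a
# "built-by-cloud-build" attestor. Project and attestor names are placeholders.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/my-project/attestors/built-by-cloud-build
```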

And we’ve continued to improve this service. What’s really important is not just verifying the trust, but maintaining the trust. It’s one thing to just check it, but then what happens if I make it to prod and then pull in a dependency update, or something that has something weird in it? Nope. With this, we’re constantly verifying, making sure that the container in production is the one that’s supposed to be there. So even after you’ve gotten it there, there’s no way to sidecar something in and maybe slip another change through. We’re constantly checking, keeping you safe, making sure that thing still meets the policies you defined earlier. Pretty powerful stuff. Make sure that even in that deploy environment, you’re enforcing compliance.

Likewise, as I mentioned, we have a great Artifact Registry at Google Cloud. It’s a terrific, globally available service. We’ve added vulnerability scanning there as well, plus some fuzzing support if you’re a Go developer. Again, for the combined story, Artifactory is terrific; you should be using it wherever you can. It’s awesome. So pick a good registry. It doesn’t matter to me, arguably; just use what makes the most sense, but put this into your pipeline. Make sure you’ve got a good story where you’re doing vulnerability scanning with things like Xray, and you’re doing things with attestations.

Because what’s important here is, again, once I get to that runtime environment, what does that look like? How am I doing from an identity management perspective? What am I doing with my infrastructure story? It’s not just the deployment pipeline. Once I’m there, what’s around it? Am I in a really vulnerable place where someone can easily hack their way in? Are there other ways to inject vulnerabilities from a security perspective at the operating system level, things like that?

As we think more about that, what’s the platform I’m going to? You may be spending time here already, but part of this supply chain is: what are the services doing things like storing my secrets? As I’m making my way through the pipeline, or as I’m in production, how am I making it really hard to access things that people shouldn’t? How are you storing secrets? If I go back 10, let’s call it 15, years, when I was building .NET applications, I was just encrypting values and sticking them in the web.config file. That was okay at the time. But nowadays I shouldn’t be sticking secrets in my config files, and of course not in code. I can use secrets managers. Maybe it’s HashiCorp Vault; fine, use that. That’s terrific. Maybe it’s something else native in one of the other clouds. Go for it.

If you’re in Google Cloud, Secret Manager’s pretty cool. It’s a global service. I can do regional data storage if I want, everything is replicated automatically, and I have a global namespace. Everything’s versioned by default, it follows principles of least privilege, and it integrates with our cloud audit logging. And what’s neat is it can work with GitHub Actions. If you’re a Spring Boot developer, it just pulls those secrets in automatically. If you’re using Terraform, I can manage all my secrets in there. And if I’m developing my app, it integrates with all the SDKs, whether it’s .NET, Go, Java, Python, PHP, or Ruby. All of our SDKs work with Secret Manager to pull secrets into your code. So it’s a really, really nice way, if I want a secure software supply chain, to also be ruthless about what’s actually in the code, doing some late binding of things like secrets. Secret Manager’s pretty cool.
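That SDK integration looks roughly like the sketch below, using the Python client library (`pip install google-cloud-secret-manager`). The resource-name format is the documented one; `my-project` and `db-password` are placeholders, and running the lookup for real needs credentials.

```python
# Sketch of pulling a secret at startup with the Secret Manager client
# library. "my-project" and "db-password" below are placeholder names.

def secret_version_name(project, secret_id, version="latest"):
    """Full resource name of one version of a secret."""
    return f"projects/{project}/secrets/{secret_id}/versions/{version}"

def access_secret(project, secret_id):
    # Imported here so the helper above works without the library installed.
    from google.cloud import secretmanager
    client = secretmanager.SecretManagerServiceClient()
    response = client.access_secret_version(
        request={"name": secret_version_name(project, secret_id)})
    return response.payload.data.decode("utf-8")

# Example (needs credentials): access_secret("my-project", "db-password")
```

The point of the pattern is the late binding the talk describes: the secret value never lives in your repo or your config files, only a reference to it does.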

Likewise, serverless is a pretty cool story when you think about secure software supply chains and what security really looks like. Why is that? Well, look at a function-as-a-service platform like Cloud Functions. I don’t even see an operating system. I’m not responsible for patching it, securing it, hardening it. That’s all platform-provided. In no way can I accidentally do the wrong thing there. I’m just handing you code. Now, as we talked about earlier, I could still write a 10-line function that imports, let’s say, five packages to do its thing. That might all of a sudden be a 10,000-line function because I’ve pulled in dependencies. So serverless still can’t fix that automatically. You still have to have all that diligence to check your packages, look at their history, look at their vulnerabilities. But I have shrunk the attack surface with serverless. I don’t have as much infrastructure. I don’t have as much network stuff I’m messing with. There are fewer places where I could have bad vulnerabilities. So serverless is a really good security story, because I’m shrinking how many things I have to be responsible for as a developer, and more of that’s going into a managed platform.

Cloud Run is a tremendous service. This is a scale-to-zero container environment. But it’s more than just what you think of as traditional serverless, because I can run 16-gig container images. I can do things like persistent storage. I can do 80 concurrent requests to a single service and then scale to 1,000 instances in a couple of seconds. So it’s really robust. I can run a lot of things here, even setting minimum instances, so I don’t have to scale to zero; I might scale to one for certain workloads. Both of these are really powerful environments for a growing number of application types, and they have a good security story in your supply chain. And what’s new from a security perspective here is that, as a Cloud Run developer, I can have customer-managed encryption keys. Bring your own encryption key and encrypt your things at rest. I can’t see it as Google. Nobody else can. It’s your stuff. It’s awesome.

Likewise, there’s native integration with Secret Manager, and I like the integration because, as a developer, I can say I do want to use secrets and I have two choices: I can inject them into environment variables, or I can mount them as a volume into the container and then just access them in my code, which is pretty nice. So I have a few different ways to pull things into my application at runtime, with late binding, and keep it safer along the way.
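From the application’s point of view, those two choices can be handled with a few lines. In this sketch, the environment variable name and mount path are whatever you configure on the Cloud Run service; the ones below are placeholders.

```python
# Sketch of the two ways a Cloud Run service can read a Secret Manager
# secret: an environment variable or a file mounted into the container.
# The variable name and mount path below are placeholders.
import os
from pathlib import Path
from typing import Optional

def read_secret(env_var="API_KEY", mount_path="/secrets/api-key") -> Optional[str]:
    """Prefer the env var; fall back to a volume-mounted secret file."""
    value = os.environ.get(env_var)
    if value is not None:
        return value
    path = Path(mount_path)
    if path.exists():
        return path.read_text().strip()
    return None  # not configured either way
```

One nuance worth knowing when choosing between the two: an env var is fixed at instance startup, while a mounted file is read whenever your code opens it.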

Finally, Cloud Run recently GA’ed binary authorization support. You like that check? That attestation check that says only images from the right place get deployed to Cloud Run? Awesome. It’s super easy to use now. And again, you don’t have to do much work. These are checkbox features that actually do a ton under the covers. So you’re really well set with Cloud Run.

Likewise, Cloud Functions. Cloud Functions is, again, a pretty cool, underrated function-as-a-service platform. We also just added integration with Secret Manager so you can pull secrets into your functions. And because functions are often playing a glue role, maybe a function responds to something, calls out to Twilio, calls out to the Maps API, you’re probably storing some API keys. Cool. Stick those into Secret Manager. Do that safely. So there are a lot of nice ways, if I’m building modern software and trying to do it quickly, and I’m shipping and velocity is awesome, to not compromise security and actually do some really cool native things.

The final big service I’d talk about here is GKE. GKE is probably the best way to run Kubernetes in the public cloud. It’s the most configurable, secure, feature-rich Kubernetes available from any cloud provider. It’s pretty great. We work really hard at making it awesome. What we did earlier in 2021 was launch GKE Autopilot. If you haven’t come across it, it’s pretty cool. It’s GKE, and GKE is just Kubernetes, just operationally amazing, but we took GKE and flipped on all the right default options from a security perspective, all the scaling, all those fun things. And then we took responsibility for it. So you, as a user, say, I’d like a GKE Autopilot cluster. We spin up a control plane and, behind the scenes, we stand up whatever nodes are needed; you just pay per pod. We scale it. Our SREs get paged if something goes wrong. We autoscale it, we provision it, we update it, we maintain it, which is awesome. So you get the Kubernetes API surface and none of the work.

Now, why is that a great software supply chain story? You can imagine: we’re turning on all the right security features, and we’re running and operationalizing this at Google scale for you. We’re shrinking your attack surface again and giving you a super hardened, protected environment. So Autopilot’s a perfect part of a great secure software supply chain. There’s nothing like it from any other provider. We work really hard to make it awesome. So look at that. You stretch from your runtime all the way back to the desktop, through your build tool, through your vulnerability scanning; there’s an amazing set of technology now that integrates pretty nicely to give you a secure software supply chain.

And as we look at how JFrog and Google Cloud work together, the first call to action: go check this out in the Cloud Marketplace. Get Artifactory installed, get these other JFrog experiences installed into your Google Cloud account, and have a great hardened artifact registry that works great with our other services in Google Cloud. Get on that. Same with Xray: do the vulnerability work, because that’s how I actually make sure I’m keeping my packages safe. You can deploy this in a Google Cloud hosted model, or you can kind of do a DIY model, but get going on that as well. It’s awesome. And then try some of these Google Cloud features. Use buildpacks to generate a container image. See what you think. Try deploying to Cloud Run; we give you a couple of million free requests every month to try it out at no cost. Try GKE Autopilot, and see that maybe you can build a secure software supply chain without actually compromising velocity. Instead, I can actually go fast and stay safe.

I hope this stuff’s exciting to you as you think about the right way to do this sort of work. The secure software supply chain has never mattered more. Spending time on it is great for you professionally, because this is going to be a super in-demand skill set. Look at frameworks like SLSA, apply them to some of your work, and help make software a competitive advantage and not a risk at your company.

All right. I appreciate your time. You can find me on Twitter at @rseroter; yell at me there. Tell me what you think. Hope you enjoy the rest of the conference, and hopefully I’ll see you soon.
