Using Containers Responsibly

Tools to package your applications and services into container images abound, and they're easier than ever to use and integrate into your CI/CD pipelines. We can appreciate these advancements in the form of time savings and decreased complexity when deploying to a cloud native environment, but we cannot completely ignore the details involved in these technologies. It's tempting to take simplicity for granted, but sometimes we do this at the expense of keeping our software safe and secure!

In this webinar, I’ll discuss the different tools available to us today to package our software into container images and where we want to shore up our processes with regard to both efficiency and security. To address security concerns in other areas of your pipeline, we will also explore the benefits of using JFrog Artifactory as your official container image registry and how to incorporate JFrog Xray for scanning and maintaining your confidence in the security of the content of your images.


Video Transcript

Hi, everybody. Welcome to this webinar, Using Containers Responsibly. I hope you’re all having a great day. Today, in this webinar, we’re just going to be discussing a little bit about how you might be using containers, some things that maybe you haven’t thought about, some details, some high level information. But before we get started, let’s go ahead and talk about some housekeeping items. We’re always asked if there’s going to be a recording of this webinar. Yes, there will be a recording, and we will be sending it out after the webcast.
Also, you are muted and cameras are off, so don't be shy. Go ahead and use the platform as you like, and move those windows around however you need. Make sure to join us for Q&A at the end as well. And during this webinar, go ahead and ask your questions as we go. We have folks online who will be able to answer those during the webinar so that they can be answered in the context that you ask them.
All right, introductions. I'm Melissa McKay. First and foremost, I am a developer, and have been for many, many years, all the way from an intern fresh out of school to a principal engineer, so lots of time working on various kinds of projects. Later in my career, I was primarily focused on Java server-side applications, with some Node and some Python thrown in there. Rarely do I meet a Java developer these days who isn't doing something else as well.
I did become a speaker. This was something that I was really interested in doing and it just made sense to make the jump to become a developer advocate. I’ve now been with JFrog for a while in this position. I’m enjoying it. Even through the pandemic, I’ve really enjoyed being able to engage folks online. Now we’re starting to travel again, so this is a really good opportunity to meet developers, find them where they are, be able to have these conversations, especially with new projects and everything coming out, finding out what their woes are and hopefully being able to help and make lives easier.
I am a Java Champion and a Docker Captain, so I try to keep on top of the latest and greatest in those two technologies. Here on this slide is my Twitter handle and my LinkedIn. Feel free to reach out, ask questions, anything like that. I am available, and I will certainly pass on any questions that I can't answer to those who can. Today, on the agenda, we'll be talking about how containers are used today and maybe how that has changed over time. We'll then be talking about building them responsibly. I won't go into too much detail here, but I will pick on some of the most common things that I see in Dockerfiles, along with a couple of other suggestions and things to think about when you're building your own containers. We'll talk about where we should be concerned with containers in our software pipeline and how we should manage them. We'll just touch on that, and then we'll talk a little bit about securing our containers: what our options are and what we have available to us.
I remember that there was a time when using Docker containers in production was considered particularly risky, and it's not something that I did early on in my career. Even though the concept of containers has been around for a long time, watching them become so widely used over the past decade has been an incredible experience. This diagram actually comes from a page on the Cloud Native Computing Foundation's website. It doesn't have anything specifically to do with containers, but I do like how it describes the different stages of projects and the types of users that adopt a project over time.
And I think it’s a good one to apply to container usage or to even Docker usage over time. Like I said, containers are nothing new. They’ve been around a long time, but it took a while for the use of those to catch on in production environments. If you were to ask me today where we are right now, I would guess we are somewhere near the peak of this diagram, maybe a little bit to the right, starting to look at the conservative adopters. There’s an argument that we’re not quite there yet, but I think we’re pretty close to that.
There are some reasons we can point out, events that have happened in the past, for why we've seen this explosion of container usage. One of them: in 2013, of course, Docker became open source. That was a pretty big development. In 2015, though, even more happened. In fact, on June 22, 2015, the establishment of the Open Container Initiative was announced. This is an organization under the Linux Foundation. It had the goal, and still has the goal, of creating open standards for container runtimes and image specifications. Docker is a heavy contributor and has donated some of its implementations and specs. But in the announcement of this new organization, it was said that over 20 organizations were involved.
So it was true that containerization had evolved to such an extent that a number of organizations wanted to work towards some common ground for the benefit of all. One month after the OCI was established, the Cloud Native Computing Foundation, or the CNCF, was established. Part of that announcement was the official release of Kubernetes 1.0, which was donated by Google to the CNCF. So along with containers themselves becoming more widely used, we now have advancement in the orchestration of these containers as well. And it seems that 2018 can be viewed as the year when containers crossed over into mainstream popularity.
It's been very interesting to see this explosion of widespread use of containers, and also the beginning of research being done by different companies on their use in production environments. Here's one example of that. This information comes from reports done by Sysdig, a company that provides a really powerful monitoring and troubleshooting tool for Linux. You're probably aware of it if you've been working in production environments quite a bit.
One thing to note is that I went back in time and tried to find the earliest report that made sense to include. In 2017, they had a report where they analyzed 45,000 containers. Now, these are all containers that they had access to, obviously, containers that were using Sysdig. They didn't have a diagram listing the runtimes in use, because 99% of those were Docker at the time, so it didn't make sense to break them out. The next year, in 2018, they repeated this process and did the same type of reporting, this time reporting on the different runtimes in use, and they observed 90,000 containers. Here we start seeing other container runtimes besides Docker coming on the scene. So that's pretty interesting to look at.
In 2019, though, the report jumped up to 2 million containers. Today, that's not a large number, but back then it was a pretty big jump from 90,000 to 2 million. They say it includes both SaaS and on-prem users. The links to these reports are on the slides, and they're definitely worth taking a look at; there's some interesting information in there. This particular one shows the growth of containerd, and I want to note that although Docker as a runtime is being used less and less these days, Docker actually uses containerd as its runtime now. So that explains why containerd is becoming more and more popular while you see runtime usage of Docker decreasing. It doesn't mean that Docker has gone away or is less popular; it just means that the runtime involved is more aligned with the orchestration that's available today.
Another Sysdig report, for 2020 and '21, was still looking at 2 million containers. They do specify in this report that this is only a subset of customer containers, so there are more than 2 million now. And the last report I'll show has an interesting diagram: this is 3 million containers for 2021 and '22. Pretty interesting to see the division of the runtimes here. I found more evidence supporting that turning point in 2018, provided by Datadog, another organization that provides monitoring solutions for applications. I took this particular graph from a report posted in 2018 called Eight Surprising Facts about Real Docker Adoption. The graph covers data collected from 2014 to 2018, and you can see the progression of adoption increasing, with 25% of 10,000 companies adopting Docker. Really interesting. Also, in the methodology for this report, they said that data was being taken from 700 million containers. That's pretty wild. Again, there's a link there to that report if you're interested in taking a look.
In 2018, Datadog also started focusing more on orchestration, observing runtime usage much like those Sysdig diagrams I displayed earlier. This quote was taken from the Datadog research report called Eight Emerging Trends in Container Orchestration, posted in December 2018. Again, the link is noted here, so if you get a chance, check it out, because there are a lot of other interesting observations made there and in later reports. But the quote I pulled from the top of this report was that containerization is now officially mainstream: 25% of Datadog's total customer base has adopted Docker and other container technologies, and half the companies with more than 1,000 hosts have done so. Pretty incredible. Back when I might ask an audience at a conference whether anyone was using containers, maybe a scattering of hands would go up. Now it's a lot. For anyone dealing with cloud native infrastructure, or with applications composed of microservices, it's a pretty popular thing now.
Now, just because something is popular does not mean that it's secure, especially in cloud native environments. You can't take that part for granted, and you also can't take performance or efficiency for granted. How you package your application or service into containers will make a huge difference on both fronts. So don't think that just because the technologies are more advanced today, there's nothing left on your plate but to use them. There are still ways to use them incorrectly and basically cause yourself some issues.
Before we get into the details of that, let's just talk about what happens in a typical software pipeline, even before we started adding containers to the mix. What all is involved in our development and delivery process? This is a typical pipeline displayed here. It has a number of different steps. It's huge. It's complicated. I don't expect you to be able to see everything that's here, all the little tiny logos on this screen and all the text. But just note that it goes all the way from initial development, through continuous integration with build servers, build tools, and dependency managers, through testing processes, and then ultimately to deployment into a production environment. Now, what's missing in this particular diagram are the steps involved in monitoring and other operational tasks that should be happening after deployment into production.
You generally see those aspects in the infinity software development life cycle diagrams, but today we're going to focus on the steps that happen up to deployment. Containerization. It can be argued, and I've heard this before, that maybe this shouldn't be a concern of a developer or anyone even close to that side of the pipeline. But the problem is, for now at least, containerization is part of the build process. And knowing how something is being built, and later understanding how it's going to be deployed, clearly affects decisions that are made by developers, clear back at the design stage. We've seen this with the advent of containerizing microservices, for example. So developers aren't going to be able to just let this go. If you are writing applications that are intended to be developed in and/or deployed to a cloud-native infrastructure, you're going to need to learn how to work with containers.
This is a more simplified version of the pipeline shown on the previous slide. It starts at development, goes to continuous integration, goes through QA testing, then maybe a release process, and ultimately to deployment. Where in this process should we be concerned about containers? We already go through this process with just our Node application, our Python application, our Java application. Where do we have to be concerned with containers now? Where does that fit in?
And it turns out, like I said, containerization is part of building and part of deploying, and devs need to be able to do those things. In fact, we do those things repeatedly every day when we're working on our projects. So devs design, they code, they build, they test, they troubleshoot, they repeat all of that. Devs need to be able to reproduce problems, especially if they're working on bug fixes. Reproducing a problem might require running a specific version of an app, and that's going to be in a container. You want to be consistent with where the problem was discovered.
Being able to dev test or sanity test a bug fix, or even a new feature, might involve deploying to a development environment, or even running a container on your local development machine. It makes sense to be able to deploy in pretty much the same way that the application or service would be deployed in a production environment, which would be in a container. So it follows that devs need to understand how to build and run containers.
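To make that local loop concrete, here's a minimal sketch of the build-and-run cycle with the Docker CLI; the image name, tag, and port are placeholders, not anything from the webinar:

```sh
# Build the image from the Dockerfile in the current directory
# (myapp and the tag are placeholder names)
docker build -t myapp:1.0.0-dev .

# Run it locally the same way it would run in production,
# mapping the app's port (8080 here is an assumption) to the host
docker run --rm -p 8080:8080 myapp:1.0.0-dev
```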
Okay, the continuous integration process; I usually think of build servers in general. Updates are merged in source control. This is where new artifacts are built and where automated unit testing happens. Artifacts are stored on success of the build and the tests. Alerts are sent and builds fail if the unit tests don't pass, things like that. And then that process is repeated over and over again. The artifacts being referred to here are not just the libs and libraries used in the source code of the application; the container image itself is included in this list. The container image is considered an artifact, so we definitely need to be concerned about it here too.
Same for QA testing. This artifact, and all the other artifacts involved in our application, are all going to need to be retrieved. We need to provide feature verification. This is where you might run further integration testing, which could be manual or automated. And when all the tests pass, this is where you might go through a round of promotion of all of these artifacts, which means staging them, getting them ready for the next step in the pipeline. And then again, repeat.
Releasing. This might involve another artifact promotion. You may be creating release bundles at this point. These artifacts, again, are going to be container images along with other artifacts; a release bundle will likely include the container image. And then finally, deployment. Obviously, to deploy something, you need the artifact to deploy, and that is the container image.
So clearly we have plenty of places to be concerned about, but in my opinion, most of our security and efficiency concerns with regard to containers can really be addressed near the beginning of the pipeline, in the development and continuous integration stages. These are the stages that result in artifacts and container images that will potentially move all the way to production, and this is where the container images that will be used to launch our production containers are produced. So it makes sense to focus on these areas.
There are quite a few methods used to build container images, so let's move into building them responsibly. That's primarily a task that developers and build servers are going to be doing, so it makes sense to spend some time on this portion. How and when you build your container images will make a big difference in security as well as in efficiency and performance. Under the how category, you can choose solutions with or without Docker. I advise most people to just start with Docker Desktop to get your feet wet, especially if you're new to containers. The documentation is excellent. They do a really good job of walking you through the entire process and explaining exactly what a container image is, what it means to run a container, and what's happening under the covers. It has all of the features you need to build and run containers, to store them, and to push them to a registry, whether public or private, and it takes care of the caching mechanism as well, along with launching containers on your local machine. So it's pretty advantageous.
If, for whatever reason, you do not want to use Docker, another option for you might be Buildah. If you're a Linux shop, this is probably something you've already looked into and considered. It's just another alternative for building images. The other thing to consider is whether or not you need to write a Dockerfile. I talked before about how I've heard that containerization maybe shouldn't be in the developer's lap, and one of the arguments I hear the most is: why do we need to learn to write yet another thing? Now we have to learn how to write a Dockerfile. And it can get pretty overwhelming sometimes with all of the things that developers are asked to learn and do these days, but I've found that writing the Dockerfile gives me a little more control over how these images are built and produced. So I personally prefer using a Dockerfile. I think it's a pretty standard way to communicate how the layers are built and what exactly is included in your containers.
If you do not like writing Dockerfiles, or are looking for ways to get out of that, there are options. Buildpacks is one of them. Another is using build plugins. If you're already using Maven or Gradle, you can simply add one of these plugins to your pom file or your Gradle build file and use it that way. Jib is another option that's also used as a plugin.
When do you build these containers? Obviously during active development, developers are going to be building them all the time. One thing that drives me nuts, and it's happened to me over and over in the past, is when a change is made and checked in, but the developer forgot or just didn't try to build and run the container on their machine. Maybe all the unit tests pass and everything, but the moment you try to launch the container, something is wrong. Maybe some configuration isn't quite right, and the container just doesn't run. It dies immediately.
Not helpful when that gets pushed into source control and the next developer who pulls it has to figure all that out. So developers need to be able to run these on their machines. Troubleshooting, like we talked about earlier, is another reason.
During continuous integration, obviously, builds are going to be happening all of the time, so that's another time when you would be building container images. There are other times that I've seen container images built, but I don't believe that is best practice, so I'll address that in a later slide.
Since using a Dockerfile is pretty common, let's start there. But first I want to talk about dependencies. That's going to be the biggest part of this. I know the iceberg is an overused graphic, but the point that software is potentially made up of a ton of components that a developer doesn't necessarily have firsthand knowledge of cannot be overstated. Applications and services built today are more complex than ever. Developers generally don't want to reinvent the wheel if it isn't necessary.
And this means pulling in a lot of library code that you didn't necessarily write yourself. These could be open source components, or they could be libraries written by other teams internally; it doesn't necessarily need to be open source. It could just be that another team has responsibility over that part of the software. So clearly we need to pay attention to everything coming into the build, because you could potentially be bringing in things that are vulnerable or that could make you susceptible to attack.
Let's talk about some of those things. This is a very contrived Dockerfile, written to illustrate some points to consider when building container images. But don't get me wrong: if you go looking for examples online of how to write Dockerfiles, you will more than likely find Dockerfiles that suffer from some of these same issues. The examples you find online are meant to be just that, examples: simple, for demonstration purposes, and not necessarily production worthy. Obviously that doesn't just apply to container image building examples. For other code you find online too, you really need to understand it. Take time and read the documentation. Don't just copy-paste stuff.
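The slide itself isn't captured in this transcript, so here is a hypothetical reconstruction of the contrived Dockerfile, laid out so its line numbers match the walkthrough that follows. Every image, package, and script name here is made up for illustration:

```dockerfile
FROM untrusted-parent-image:latest

RUN apt-get update && \
    apt-get install -y some-package \
    old-vulnerable-package=1.0.0

COPY . .
RUN curl -sSL https://some-external-site.example/install.sh | sh

ENTRYPOINT ["./start.sh"]
```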
So let's walk through this one and pick out the issues. I won't pick out everything; I'll pick out a few that are the most obvious and that I see pretty commonly. Number one: that FROM line. This is a parent image. Dockerfiles can be written in a way where there's a hierarchy. You can start from a base image, or a parent image, and then the rest of the Dockerfile adds to that. So this is what we have here on line one: FROM untrusted-parent-image. Obviously you're not going to see an image that is so obviously a problem named as such, but I see this a lot. People will pick a particular base image and just use it because they've seen it used elsewhere, without doing the due diligence to figure out if it is actually safe to use. In fact, let's take a moment and talk about official base images. You can find them on Docker Hub, so let's just go to Docker Hub and take a look.
Let's do a search for the alpine image, and you'll see the alpine image is a Docker Official Image. Docker has a team dedicated to keeping track of these images: making sure that they're open, that it's obvious what's in them, managing updates, and paying attention to news of new vulnerabilities coming out so that everything stays up to date. So if you're going to use an image from Docker Hub, a public registry where images are available to anyone and can be posted by anyone, you'd best use an official image. If it's not an official image, you need some other reason to consider it trusted.
One way to be able to trust an image is to have its original Dockerfile, along with the original artifacts and files used to build that image to begin with. And one way to look at an official image is to just pull up a search engine. If you search for Docker official images, first you'll get the documentation link, but go down to the next link, a GitHub link, and this is where it gets interesting. This is the GitHub repository where official images actually live. There is a library directory, and in that library directory you'll see all of the official images.
We pulled up alpine earlier. If you drill down into that, you'll see a line in here, and all of these are consistently done this way: a git-repo line. Let's take a look at it. This is where the alpine official image is managed, and in this repo we should be able to find the original Dockerfile. Now, some of these may have branches for different versions or different directories for different types. Let's just go to the latest version of alpine, and now we can see directories representing the different types you can build. If we go into this top one here and drill down, we should be able to find a Dockerfile, like this. And here you go: here is the original Dockerfile for the alpine image.
Now there's some question, obviously: are you going to be able to open this up and look at it? That will take a little bit more work on your part. But notice this first line. It says FROM scratch. To me, this says this is a base image, meaning you can't go back any further. Some of these official images will have another parent image listed here, not scratch, but another official image, and you'll need to repeat this process in order to go all the way back to the point where you've reached scratch. So if you're ever curious how these official images were built, this is how you can find their original Dockerfiles.
Okay, moving on. All right, lines two through four. The problem with those lines: there's no version specified. In this example, the parent image didn't have all of the packages necessary for whatever it is we're trying to run here, so some packages were installed. On line three, we have some package with no version. On line four, we have an old, vulnerable package. It does have a version specified, so there's a little bit more control there. But it's vulnerable, it hasn't been updated, and we even know it's vulnerable. So that's pretty shameful. I see this all the time. It's easy to forget that OS packages need to be managed the same way as our libraries, our source code, and the packages built from those.
So make sure that you're always specifying your versions. The reason is that the next time this image needs to be built, you're not going to get the same image. You'll never get the same image again; you'll likely get a newer package, since you didn't have the version specified. And that can cause you quite a bit of troubleshooting trauma, especially in your continuous integration process. That's generally where I see this happen, because in continuous integration you like to build something fresh, with no cache of old packages and resources involved. That is where you want to be able to build without suffering from those moving parts that can cause things to break, where it then takes a while to figure out what happened.
Okay, line six, this COPY statement. This could be an efficiency and performance problem. If you have not set up a .dockerignore file, you could be copying things you shouldn't. Basically, this is saying: copy everything from my working directory into the image. You could be copying secrets, you could be copying local configuration that really shouldn't be in a production environment, you could be copying test files, artifacts, or logs that you really shouldn't be putting in a production image. All that's going to do is make it bigger and bulkier.
This will also increase build time. The reason is that when you're doing a build, all of those files need to be sent to the Docker daemon as the build context, and then all of those are parsed through and made available to COPY; in this case, we're copying everything into the image. Just the process of moving all those files around to serve as the Docker context can cost you a lot of time in your builds, especially in continuous integration, where you're likely going to be building repeatedly throughout the day.
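A minimal .dockerignore along these lines keeps that build context small; the entries are illustrative, not from the webinar:

```
# .dockerignore -- keep secrets, local config, and build noise
# out of the build context (entries are illustrative)
.git
.env
*.log
node_modules
target
test-output/
```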
All right, line seven. I see this a lot too. It really bothers me to see curl statements, or wget statements, things like that. To me, those indicate an external resource that you don't necessarily have control over. Now, it's one thing if this is being pulled in from a private repository that you manage, but I've also seen the case where this might be an installation script from another organization, maybe for a product or something that you're including in your image, where you need to use their script to install it.
A better way would be to bring that script internal and manage it yourself. That way you're not on someone else's timeline of updates, because that script could be updated out from under you, it could be moved out from under you, it could be deleted, and then all of a sudden all of your stuff is failing. So try to avoid lines like number seven. Also, line seven requires curl, so if you don't already have curl installed, you're going to have to install it in order to even run that line.
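In practice, that might mean vetting the vendor's script once, committing it to your own repo (or a private repository you manage), and copying it in rather than curling it at build time. A sketch, with the script path as a placeholder:

```dockerfile
# The script was vetted once and committed alongside the Dockerfile,
# so updates happen on our schedule (scripts/install-tool.sh is hypothetical)
COPY scripts/install-tool.sh /tmp/install-tool.sh
RUN sh /tmp/install-tool.sh && rm /tmp/install-tool.sh
```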
Lastly, line nine. Line nine includes an entrypoint that runs a start script, and that script is actually running as root by default. You really should obey the principle of least privilege: let that script have only the permissions it requires. Create a group, create a user, and let the script run as that user and group. If it's running as root, it had better have a really good reason to be doing that. These are just a few problems that I come across frequently in Dockerfiles. It's definitely not an exhaustive list, but it's a good place to start.
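Here's a sketch of that group/user approach, assuming a Debian-based image; the names and flags would differ on Alpine (addgroup/adduser there), and "app" is an arbitrary name:

```dockerfile
# Create a dedicated system group and user, then drop privileges
# before the entrypoint runs (Debian-based image assumed)
RUN groupadd --system app && \
    useradd --system --gid app --no-create-home app
USER app
ENTRYPOINT ["./start.sh"]
```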
So, best practices. Again, use trusted or official parent and base images. Don't use bulky parent images; utilize multi-stage builds. I often see Dockerfiles that bring in Maven or npm, something like that. Multi-stage builds are a way to do the build in an initial stage, and then pull only what you need into a final stage, keeping that image really small. So take some time to look at the documentation for multi-stage builds if you are actually building your software with a Dockerfile. Specify versions of all packages. Use a .dockerignore file; it is like a .gitignore file, but it's not the same thing. Make your external resources internal. And do not run your processes as root.
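To make the multi-stage idea concrete, here's a hedged sketch for a Java project: the Maven toolchain exists only in the build stage, and the final stage starts from a slim, version-pinned JRE image with a non-root user. The image tags and the jar name are assumptions, so adjust them to your project:

```dockerfile
# Build stage: Maven and the JDK live only here
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Final stage: only the built artifact, on a slim, pinned JRE
FROM eclipse-temurin:17-jre-alpine
RUN addgroup -S app && adduser -S app -G app
USER app
# app.jar is an assumed artifact name from the Maven build
COPY --from=build /build/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

Nothing from the build stage, not Maven, not the source, not the test output, ends up in the final image; only the jar is copied across.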
All right, options for you. If you want to use Dockerfiles and you're happy with that, you can obviously use Docker Desktop. It is not free for organizations over a certain size; I believe it's based on revenue and number of employees. But consider the advantage of having that support; when you're using an open source tool, you're going to need to support it yourself. So look at that cost and see if it makes sense for you. I would do it if it makes sense. It's consistent, and it's easy to install across the board on Mac, Linux, and Windows. It's just a really good developer tool to have in your toolbox.
Buildah also uses Dockerfiles. I'll put the link here if you want to check out Buildah. It is intended for Linux, so you'll probably be happier with it if you're working in a Linux shop. Docker is not required to run Buildah, so there's your alternative. You will need Podman to start and manage containers; Buildah doesn't do everything for you that Docker Desktop does, for example, but in combination with other tools like Podman, you'll get what you need done.
Here are some options that do not require a Dockerfile. Buildpacks is one, and I'll put a link here for that. These are strongly opinionated builds. They detect the type of app you're working with. So, for example, a Python Buildpack might look for a particular Python-specific file, like setup.py or requirements.txt, and a Node Buildpack might look for a package-lock.json file. Then there's a build phase as part of that process, maybe running a pip install or an npm install.
So, the pros there: they're pretty simple, and these Buildpacks are maintained by projects that are part of the CNCF; the Cloud Native Buildpacks project specifically is part of the CNCF. You will need to install a tool called the pack CLI in order to work with them, and they have some good tutorials for you to play with if you're just learning. Build plugins: the other option we talked about there was Jib. That is a plugin you can add to your Maven or Gradle build file. It does a really good job of taking Java projects and, instead of producing a fat jar, splitting things up in such a way that makes the image a little bit smaller. It's pretty easy to add that plugin.
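For example, with the pack CLI a Buildpacks build is a single command; the app name, tag, and builder choice here are just illustrative:

```sh
# Detects the app type (Maven, npm, pip, ...) and builds an image
# without a Dockerfile; the builder shown is one of the Paketo builders
pack build myapp:1.0.0 --builder paketobuildpacks/builder-jammy-base
```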
One thing about it is that it builds and pushes the image to a registry. I have mixed feelings about that pushing of the image to a registry. It seems like a developer wouldn't want to be doing that, or you might only want to push to a development-specific registry, something like that. It certainly wouldn't be the same one that your continuous integration would be pushing to, for example. The Spring Boot Docker plugin for Maven and Gradle is also easy: you just add the plugin (and read the documentation, obviously). It actually uses Paketo Buildpacks. I didn't know that at first, because I didn't read the documentation; I just wanted to run it and see how it worked, and I realized that it's actually pulling these external Buildpack images. That's okay, but you might want to consider relocating those images to your private registry, under your control and management, so that you can handle updates appropriately. And you may want to do a custom Buildpack, for example.
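The Spring Boot invocation itself is similarly short; this is a sketch assuming the plugin is already configured in your build file:

```sh
# Maven: builds an OCI image using Paketo Buildpacks under the covers
./mvnw spring-boot:build-image

# Gradle equivalent
./gradlew bootBuildImage
```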
All right, managing these containers. I just talked about registries. Whenever we are building our container images, or pulling them for the purposes of launching containers in a deployment environment, we need to get these base images, or our final production images, from somewhere. Where are we storing these images? Something I've seen missed a lot: by default, if a registry isn't specified, or an image isn't tagged with a registry, Docker Hub is going to be assumed. So for line one from our previous Dockerfile, we probably want to change that: add our private registry there, tag that image clearly, and make it a trusted image, not an untrusted image. Move it to our registry and then refer to it as such.
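A sketch of what that looks like with the Docker CLI; the registry host and repository path are placeholders:

```sh
# Pull the vetted public image, retag it against our private registry,
# and push it there; from then on, Dockerfiles reference only our copy
docker pull alpine:3.19
docker tag alpine:3.19 registry.mycompany.example/base-images/alpine:3.19
docker push registry.mycompany.example/base-images/alpine:3.19
```

The FROM line then points at registry.mycompany.example/base-images/alpine:3.19 instead of the bare public name.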
One other thing to mention about managing containers, once an image has been built during continuous integration, it should not be rebuilt anywhere along the pipeline. Instead, as that version of container image passes tests and other verification processes, it should be promoted. That means moved or copied to a QA, a staging, and then finally a production registry or repository. This way you can be assured that exactly what was tested is what is getting deployed.
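With JFrog CLI, promoting a Docker image between Artifactory repositories looks roughly like this; the repo and image names are placeholders, and you should check the CLI docs for the exact flags your version supports:

```sh
# Promote (copy) the tested image from the dev repo to the prod repo
# rather than rebuilding it; all names here are hypothetical
jf rt docker-promote myapp docker-dev-local docker-prod-local \
    --source-tag 1.0.0 --target-tag 1.0.0 --copy
```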
Now let's talk about securing these things. Ultimately, we want to be able to launch containers from a specified container image and be reasonably confident that the container won't be immediately vulnerable to attack. Obviously there are additional infrastructure and design concerns here, but one of the easiest and best things we can do is regularly scan our container images, both for known vulnerabilities and for new vulnerabilities discovered over time. It used to be that security like this was something tacked on at the end, but now there are ways to detect issues earlier in the development process. That includes scanning before checking in, scanning during or after your CI builds, scanning during and after testing, scanning your release bundles, and scanning periodically or even on-demand to get new information.
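As one example of shifting that left, recent versions of JFrog CLI can scan a local image against Xray before it ever leaves the developer's machine; this is a sketch assuming the CLI is already configured against a JFrog platform instance, and the image name is a placeholder:

```sh
# Scan a locally built image with Xray from the command line
jf docker scan myapp:1.0.0-dev
```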
There are a number of different ways that you can utilize JFrog Xray, and I've listed several of them here. We have IDE plugins. There's Frogbot for GitHub repos. There's the JFrog Docker Desktop extension. The JFrog CLI also has an API for Xray. And then, of course, there's the JFrog platform. I would like to show you what a Docker Desktop scan looks like with the JFrog extension. Here's the extension here. You can add it through this process, and you can set it up to connect to an existing environment that you have. It'll also give you an option to create a new environment if you don't have one already.
This accesses everything I have available to me on my local machine. I’m just going to take one of these here, scan it and see what comes up. And we get a pretty exhaustive list of vulnerabilities that we need to take a look at. We can drill down into each one of these and get a summary of them, more information. We can also find out exactly what layer is concerning so we know exactly what we need to be updating. So pretty nifty.
Take some time and check out some of the others. And I think I can show you one more, actually, if I can remember my login information. I want to show you what it looks like in the JFrog platform. If you already have a SaaS version available to you, you can go to the packages view and look for any Docker packages you're interested in. I'm going to choose this one and drill down into this version of this Hello package. I have some Xray data here, and again, it tells me more information about all the CVEs. I can drill down into each one of these and find out more. So there's lots I can do here. A lot of this is also going to be available to you through the JFrog CLI, so you can make decisions along your pipeline about what to do without having to use a GUI like this.
All right, that is that. I'd like to open it up for any questions, and it looks like we had a couple come in. One was: do you have any workshops? Yes, we do have workshops. I didn't get into a lot of implementation here; this was more about getting your mind working, making sure you're thinking along the right lines about the right things when you're building your pipelines and building containers. If you were to Google JFrog upcoming workshops, you will get to a page that includes a list of workshops that are coming up. They are fairly frequent and at various time zones, so take advantage of those.
Let’s see. Another one here. Oh, is using the JFrog Container Registry free? Can I try it out? You absolutely can. Let me show you, I have a free instance here. This is a free tier that I’ve signed up for. If you go to JFrog.com, there’s a button that says try for free or start for free. Go ahead and click on that and you can sign up for an account and you can play with the JFrog Container Registry.
Once you have your environment set up and you log in, there is a quick setup section here. This is where you can set up your Docker repositories if you like. But what I want to point out here is the learning center. There's actually a video here on exactly how to do that with Docker. So I would start there, see how far you get, and then refer to the documentation after that.
All right. Next, where can I learn more about JFrog Xray? Where can I try it? Same place. You can also play with JFrog Xray in there. You can learn how to set up watches and policies, that kind of stuff. If you go to JFrog.com and navigate to the resources section, there’s quite a lot available about Xray there. It’s another option for you. All right, what IDEs are supported by the JFrog IDE plugin? That is a very good question. And I’m sorry I keep jumping back and forth to a browser, but I just want to bring up this link here.
This is our IDE integration documentation, and it gives you all of the IDEs that we currently support. Of course, my favorite is IntelliJ. I use that one all the time, and I do use this plugin. It's very nice. I can make changes to my pom file and tell immediately, without even checking my code in, whether there's a problem with a package that I've added. So, pretty cool.
I would also recommend Frogbot as well. That would be a good one to integrate into your source control, if you’re using GitHub repositories. Here’s the link here. Check that one out if you get a chance. It’s pretty cool.
All right, I think that’s it. That’s all we have time for today. Like I said, this webinar recording will be sent to those of you who couldn’t make it live today. For the rest of you, thank you for coming. If you have any more questions, we will try and collect those and get back to you. You’ll probably get some follow up information as well.
So once again, thank you all for coming and good luck with working with containers in your pipelines.
