Using Containers Responsibly

Tools to package your applications and services into container images abound. They're easier to use and to integrate into your CI/CD pipelines now more than ever. We can appreciate these advancements in the form of time saved and reduced complexity when deploying to a cloud native environment, but we cannot completely ignore the details involved in these technologies. It's tempting to take simplicity for granted, but sometimes we do this at the expense of keeping our software safe and secure!

In this webinar, I'll discuss the different tools available today for packaging software into container images, and where we want to shore up our processes with regard to both efficiency and security. To address security concerns in other areas of your pipeline, we will also explore the benefits of using JFrog Artifactory as your official container image registry, and how to incorporate JFrog Xray to scan your images and maintain confidence in the security of their contents.


Video Transcript

Hi everybody, welcome to this webinar, "Using Containers Responsibly."

Welcome, everybody, to this webinar, "Using Containers Responsibly." I hope you're all having a great day. Today we're going to discuss a little bit about how you might be using containers — some things that maybe you haven't thought about, some details, some high-level information. But before we get started, let's go over some housekeeping items. We're always asked if there's going to be a recording of this webinar: yes, there will be a recording, and we will send it out after the webcast. Also, you are on mute and cameras are not shown, so don't be shy — use the platform however you like and move those windows around as you need. Make sure to join us for Q&A at the end, and ask your questions as we go; we have folks online who can help answer them during the webinar, so they can be answered in the context in which you ask them.

All right, introductions. I'm Melissa McKay. First and foremost, I am a developer, and have been for many years — all the way from an intern fresh out of school to a principal engineer — so I've spent a lot of time working on various kinds of projects. Later in my career I was primarily focused on Java server-side applications, with some Node and some Python thrown in; rarely do I meet a Java developer these days who isn't doing something else as well. I became a speaker, which was something I was really interested in doing, and it just made sense to make the jump to developer advocate. I've now been with JFrog in this position for a while and I'm enjoying it. Even through the pandemic I've really enjoyed engaging folks online, and now that we're starting to travel again, it's a really good opportunity to meet developers where they are and have these conversations — especially with new projects coming out, finding out what their woes are and hopefully helping make lives easier. I am a Java Champion and a Docker Captain, so I try to keep on top of the latest and greatest in those two technologies. Here on this slide are my Twitter handle and my LinkedIn; feel free to reach out and ask questions. I am available, and I'll certainly pass on any questions I can't answer to those who can.

Today on the agenda, I'll be talking about how containers are used today and how that has changed over time. We'll then talk about building them responsibly — I won't go into too much detail, but I will pick on some of the most common things I see in Dockerfiles, plus a couple of other suggestions and things to think about when you're building your own containers. We'll touch on where we should be concerned with containers in our software pipeline and how we should manage them, and then we'll talk a little about securing our containers — what our options are, what we have available to us.

I remember a time when using Docker containers in production was considered particularly risky, and certainly not something I did early in my career. Even though the concept of containers has been around for a long time, watching them become so widely used over the past decade has been an incredible experience. This diagram actually comes from a page on the Cloud Native Computing Foundation's website. It doesn't have anything specifically to do with containers, but I do like how it describes the different stages of projects and the types of users that adopt a project over time, and I think it's a good one to apply to container usage — or even to Docker usage — over time. Like I said, containers are nothing new; they've been around a long time, but it took a while for their use to catch on in production environments. If you were to ask me where we are right now, I would guess we are somewhere near the peak of this diagram, maybe a little to the right, starting to reach the conservative adopters. There's an argument that we're not quite there yet, but I think we're pretty close.

There are events we can point to that explain this explosion of container usage. One of them: in 2013, of course, Docker became open source — that was a pretty big development. Even more happened in 2015, though. In fact, on June 22nd, 2015, the establishment of the Open Container Initiative was announced, an organization under the Linux Foundation. It had — and still has — the goal of creating open standards for container runtimes and image specification. Docker is a heavy contributor and has donated some of its implementations and specs, but in the announcement of this new organization it was said that over 20 organizations were involved. So containerization had evolved to such an extent that a number of organizations wanted to work toward common ground for the benefit of all. One month after the OCI was established, the Cloud Native Computing Foundation, or CNCF, was established, and part of that announcement was the official release of Kubernetes 1.0, which was donated by Google to the CNCF. So along with containers themselves becoming more widely used, we now had advancement in the orchestration of these containers as well.

It seems that 2018, or around that year, can be viewed as the year when containers crossed over into popularity. It's been very interesting to see this explosion of widespread container use, and also the beginning of research by different companies on their use in production environments. Here's one example: reports done by Sysdig, a company that provides a really powerful monitoring and troubleshooting tool for Linux — you're probably aware of it if you've been working in production environments quite a bit. One thing to note: I went back in time and tried to find the earliest report that made sense to cite. In 2017, they had a report in which they analyzed 45,000 containers — all containers they had access to, obviously, meaning containers from Sysdig users. They didn't have a diagram listing the runtimes in use, because 99 percent of them were Docker at the time, so it didn't make sense to break them out. The next year, in 2018, they repeated the process, this time reporting on the different runtimes in use, and they observed 90,000 containers; here we start seeing other container runtimes besides Docker coming on the scene, which is pretty interesting to look at. In 2019, though, the report jumped up to 2 million containers. Today that's not a large number, but back then it seemed like a pretty big jump from 90,000 to 2 million, and they say it includes both SaaS and on-prem users. The links to these reports are on the slides; they're definitely worth taking a look at — there's some interesting information in there.
This particular one shows the growth of containerd, and I want to note that although Docker as a runtime is being used less and less these days, Docker actually uses containerd as its runtime now. That explains why containerd is becoming more and more popular while you see runtime usage of Docker decreasing. It doesn't mean Docker has gone away or is less popular; it just means the runtime involved is more aligned with the orchestration available today. In another Sysdig report, for 2020 and 2021, we're still looking at 2 million containers, though they do specify in this report that this is only a subset of customer containers, so there are more than 2 million now. The last report I'll show has an interesting diagram: 3 million containers for 2021 and 2022. It's pretty interesting to see the division of the runtimes there.

I found more evidence supporting that turning point in 2018 from Datadog, another organization that provides monitoring solutions for applications. I took this particular graph from a report posted in 2018 called "8 Surprising Facts About Real Docker Adoption." The graph takes data collected from 2014 to 2018, and you can see the progression of adoption increasing, with 25 percent of 10,000 companies adopting Docker. Really interesting: in the methodology for this report, they said the data was taken from 700 million containers. That's pretty wild. Again, there's a link to that report if you're interested in taking a look.

In 2018, Datadog also started focusing more on orchestration and observing runtime usage, much like those Sysdig diagrams I displayed earlier. This quote was taken from the Datadog research report called "8 Emerging Trends in Container Orchestration," posted at the end of 2018, in December — and again, the link is noted here, so check it out if you get a chance, because there are a lot of other interesting observations made there and in later reports. The quote I pulled from the top of the report: containerization is now officially mainstream — 25 percent of Datadog's total customer base has adopted Docker and other container technologies, and half of the companies with more than 1,000 hosts have done so. Pretty incredible. Back when I might ask an audience at a conference whether anyone was using containers, maybe a scattering of hands would go up; now it's a lot — anyone dealing with cloud native infrastructure, or with applications composed of microservices. It's a pretty popular thing now.

But just because something is popular does not mean it's secure, especially in cloud native environments — you can't take that part for granted, and you also can't take performance or efficiency for granted. How you package your application or service into containers will make a huge difference on both fronts. So don't think that just because the technologies are more advanced today, there's nothing left on your plate but to use them; there are ways you can cause yourself issues by using them incorrectly. Before we get into the details of that, let's talk about what happens in a typical software pipeline — even before we started adding containers to the mix, what all is involved in our development and delivery process?
This is a typical pipeline displayed here, with a number of different steps. It's huge and complicated — I don't expect you to be able to see everything that's here, all the little tiny logos and text on this screen — but note that it goes all the way from initial development, through continuous integration with build servers, build tools, and dependency managers, through testing processes, and ultimately to deployment into a production environment. What's missing in this particular diagram are the steps involved in monitoring and other operational tasks that should happen after deployment to production; you generally see those aspects in the "infinity" software development life cycle diagrams. Today, though, we're going to focus on the steps that happen up to deployment.

It can be argued — and I've heard this before — that containerization shouldn't be a concern of a developer, or anyone even close to that side of the pipeline. The problem is that, for now at least, containerization is part of the build process, and knowing how something is being built, and later understanding how it's going to be deployed, clearly affects decisions made by developers all the way back at the design stage. We've seen this with the advent of containerizing microservices, for example. So developers aren't going to be able to just let this go: if you're writing applications intended to be developed and/or deployed in a cloud-native infrastructure, in an environment like that, you're going to need to learn how to work with containers.

This is a more simplified version of the pipeline shown in the previous slide: it starts at development, goes to continuous integration, through QA testing, then maybe a release process, and ultimately to deployment. Where in this process should we be concerned about containers? We already go through this process with just our Node application, or Python application, or Java application — where do we have to be concerned with containers now, and where do they fit in? It turns out, like I said, that containerization is part of building and part of deploying, and devs need to be able to do those things. In fact, we do those things repeatedly every day when we're working on our projects. Devs design, they code, they build, they test, they troubleshoot, and they repeat all of that. Devs need to be able to reproduce problems, especially if they're working on bug fixing: reproducing a problem might require running a specific version of an app, and that's going to be in a container, because you want to be consistent with where the problem was discovered. Dev testing or sanity testing a bug fix, or even a new feature, might involve deploying to a development environment, or even running a container on your local development machine. It makes sense to be able to deploy in pretty much the same way the application or service would be deployed in a production environment — which is in a container. So it follows that devs need to understand how to build and run containers.
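As a minimal sketch of that local loop — the image name, tag, and port here are hypothetical — building and running the container on your own machine before checking in looks something like this:

```sh
# Build the image from the Dockerfile in the current directory
docker build -t myapp:dev .

# Run it locally, the same way it would run in production;
# --rm cleans up the container when it exits
docker run --rm -p 8080:8080 myapp:dev
```

A quick run like this catches the situation I'll mention again later, where all the unit tests pass but the container itself dies immediately on launch.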
OK, the continuous integration process — I usually think of build servers in general. Updates are merged in source control; this is where new artifacts are built and where automated unit testing happens. Artifacts are stored on success of the build and the tests; alerts are sent and builds fail if the unit tests don't pass, things like that. Then that process is repeated over and over again. The artifacts being referred to here are not just the libraries used in the source code of the application — the container image itself is included in this list. The container image is considered an artifact, so we definitely need to be concerned about it here too.

Same for QA testing: this artifact, and all the other artifacts our application is concerned with, will all need to be retrieved. This is where we provide feature verification, where you might run further integration testing, manual or automated. When all the tests pass, this is where you might go through a round of promotion of all these artifacts, which means staging them — getting them ready for the next step in the pipeline — and then again, repeat. Releasing might involve another artifact promotion; you may be creating release bundles at this point. These artifacts, again, are going to be container images along with other artifacts — a release bundle will likely include the container image. And then finally, deployment: obviously, to deploy something you need the artifact to deploy, and that is the container image.

Clearly we have plenty of places to be concerned about, but in my opinion, most of our security and efficiency concerns with regard to containers can really be addressed near the beginning of the pipeline, in the development and continuous integration stages. These are the stages that result in artifacts and container images that will potentially move all the way to production — this is where the container image that will be used to launch our production containers is produced — so it makes sense to focus on these areas.

There are quite a few methods used to build containers, so let's move into building container images responsibly. That's primarily a task that developers and build servers are going to be doing, so it makes sense to spend some time on this portion. How and when you build your container images will make a big difference in both security and in efficiency and performance. Under the "how" category, you can choose solutions with or without Docker. I mostly advise people to start with Docker Desktop to get your feet wet, especially if you're new to containers. The documentation is excellent: they do a really good job of walking you through the entire process and explaining exactly what a container image is, what it means to run a container, and the things that are happening under the covers. It has all the features you need to build, run, and store containers, and to push them into a registry, whether public or private; it takes care of the caching mechanism as well, and also launches containers on your local machine. So it's pretty advantageous. If for whatever reason you do not want to use Docker, another option for you might be buildah. If you're a Linux shop, this is probably something you've already looked into and considered; it's just another alternative for building images.

The other thing to consider is whether or not you need to write a Dockerfile.
When I mentioned earlier that I've heard containerization maybe shouldn't be in a developer's lap, one of the arguments I hear most often is: why do we need to learn to write yet another thing? Now we have to learn how to write a Dockerfile, and it can get pretty overwhelming sometimes with all the things developers are asked to learn and do these days. But I've found that writing the Dockerfile gives me a little more control over how these images are built and produced, so I personally prefer using a Dockerfile. I think it's a pretty standard way to communicate how the layers are built and exactly what is included in your containers. If you do not like writing Dockerfiles, or you're looking for ways to get out of it, there are options for you. Buildpacks are one of them. Another is build plugins: if you're already using Maven or Gradle, you can simply add one of these plugins to your pom file or your Gradle build file and use it that way. Jib is another option that's also used as a plugin.

When do you build these containers? Obviously during active development — developers are going to be building these all the time. One thing that drives me nuts, and it's happened to me over and over in the past, is when a change is made and checked in, but the developer forgot, or just didn't try, to build and run the container on their machine. Maybe all the unit tests pass, but the moment you try to launch the container, something is wrong — maybe some configuration isn't quite right — and the container just doesn't run; it dies immediately. Not helpful when that gets pushed into source control and the next developer who pulls it has to figure it all out. So developers need to be able to run these on their machines — also, as we talked about earlier, for troubleshooting purposes. Another time is during continuous integration: obviously, builds are happening there all the time, so that's another point where you'd be building container images. There are other times I've seen container images built; I don't believe that's best practice, so I'll address it in a later slide.

Since using a Dockerfile is pretty common, let's start there — but first I want to talk about dependencies, because that's going to be the biggest part of this. I know the iceberg is an overused graphic, but the point that software is potentially made up of a ton of components that a developer doesn't necessarily have firsthand knowledge of cannot be overstated. Applications and services built today are more complex than ever. Developers generally don't want to reinvent the wheel if it isn't necessary, and that means pulling in a lot of libraries — code you didn't necessarily write yourself. They could be open source components, or just libraries written by other teams internally; it doesn't necessarily need to be open source — it could simply be another team's responsibility over that part of the software. Clearly, we need to pay attention to everything coming into the build, because you could potentially be bringing in things that are vulnerable or that could make you susceptible to attack.

Let's talk about some of those things. This is a very contrived Dockerfile, written to illustrate some points to consider when building container images. But don't get me wrong: if you go looking for examples online of how to write Dockerfiles, you will more than likely find Dockerfiles that suffer from some of these same issues. Those examples are meant to be just that — examples, simple for demonstration purposes and not necessarily production-worthy. And obviously that doesn't just apply to container image building examples; it applies to other code you find online too. You really need to understand it — take time, read the documentation, and don't just copy-paste. So let's walk through this one and pick out the issues. I won't pick out everything — just a few of the most obvious ones that I see pretty commonly.
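The slide itself isn't reproduced in this transcript, so here is a hedged reconstruction of the kind of contrived Dockerfile being described — the image and package names are made up, but the line numbers match the walkthrough that follows:

```dockerfile
FROM untrusted-parent-image:latest
RUN apt-get update
RUN apt-get install -y some-package
RUN apt-get install -y old-vulnerable-package=1.0.2

COPY . .
RUN curl -sSL https://example.com/install-something.sh | sh

ENTRYPOINT ["/start.sh"]
```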
Number one, that FROM line: this is a parent image. Dockerfiles can be written in a hierarchy — you start from a base image or parent image, and the rest of the Dockerfile adds to it. So here on line 1 we have FROM untrusted-parent-image. Obviously you're not going to see something so blatantly named, but I see this a lot: people pick a particular base image and just use it because they've seen it used elsewhere, without doing the due diligence to figure out whether it's actually an image that is safe to use.

In fact, let's take a moment and talk about official base images. You can find official images on Docker Hub — let's just go to Docker Hub and do a search for an Alpine image. You can see the Alpine image is a Docker Official Image. Docker has a team dedicated to keeping track of these images: making sure they're open, that it's obvious what's in them, managing updates, paying attention to news of new vulnerabilities coming out, and making sure everything's up to date. Images on Docker Hub, the public registry, are available to anyone and can be posted by anyone, so if you're going to use an image from there, best use an official image — unless you have some other reason to consider the image you've chosen trusted. One way to be able to trust an image is to have its original Dockerfile, and the original artifacts and files used to build that image to begin with.

Here's one way to look at how an official image was built. Pull up a search engine and search for "docker official images." First you'll get the documentation link, but go down to the first GitHub link: this is the GitHub repository where these official images actually live. There is a library directory, and in it you'll see all of the official images. We pulled up Alpine earlier; if you drill down into it, you'll see a line referencing a git repo — all of these are consistently done this way. Let's take a look at that git repo: this is where the Alpine official image is managed, and in this repo we should be able to find the original Dockerfile. Some of these may have branches for different versions, or different directories for different types. Let's just go to the latest version of Alpine: we can see directories representing the different types you can build. Go into the top one here, drill down, and we find a Dockerfile — and here you go, the original, initial Dockerfile for the Alpine image. Now, whether you're going to open this up and inspect everything is another question — that will take a little more work on your part — but notice this first line: it says FROM scratch. To me, that says this is a base image, meaning you can't go back any further. Some of these official images will have another parent image listed here instead of scratch, but it will be another official image that you repeat this process for, until you've reached scratch. So if you're ever curious how these official images were built, this is how you can find their original Dockerfiles.

OK, moving on. Lines 2 through 4: the problem with those lines is that no version is specified. In this example, the parent image didn't have all of the packages necessary for whatever it is we're trying to run, so some packages were installed. In lines 2 and 3 we have packages with no version; in line 4 we have an old, vulnerable package. It does have a version specified, so there's a little more control there, but it's vulnerable, hasn't been updated — and we even know it's vulnerable, so that's pretty shameful. I see this all the time. It's easy to forget that OS packages need to be managed the same way as our libraries, our source code, and the packages built from them. Make sure you're always specifying your versions. The reason: the next time this image needs to be built, you're not going to get the same image — you'll never get the same image again. You'll likely get a newer package, since you didn't have the version specified, and that can cause you quite a bit of troubleshooting pain, especially in your continuous integration process. That's generally where I see this happen, because in continuous integration you like to build something fresh, with no cache of old packages and resources involved. That's exactly where you want to be able to build without suffering from moving parts that can cause things to break and then take a while to figure out.

Line 6, this COPY statement: this can be an efficiency and performance problem if you have not set up a .dockerignore file. Basically this says "copy everything from my working directory into the image," so you could be copying things you shouldn't: secrets, local configuration that really shouldn't be in a production environment, maybe test files, artifacts, or logs that you shouldn't be putting in a production image. All that does is make it bigger and bulkier. It will also increase build time, because when you're doing a build, all of those files need to be sent to the Docker daemon as the build context before they're available to be copied — and in this case we're copying everything into the image. Just moving all those files around can cost you a lot of time in your builds, especially in continuous integration, where you're likely building repeatedly throughout the day.
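A .dockerignore file works much like a .gitignore: anything it matches is left out of the build context entirely. As a minimal sketch — the entries here are typical, but hypothetical for this particular app:

```
# .dockerignore — keep secrets, local config, and build debris
# out of the build context (and therefore out of the image)
.git
.env
secrets/
config.local.*
test/
target/
*.log
```

Smaller context means faster builds, and no accidental secrets baked into a layer.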
All right, line 7 — I see this a lot too, and it really bothers me to see curl or wget statements like this. To me, they indicate an external resource that you don't necessarily have control over. Now, it's one thing if this is being pulled from a private repository that you manage, but I've also seen cases where this is an installation script from another organization — say, for a product you're including in your image that requires their script to install. A better way is to bring that script internal and manage it yourself, so you're not on someone else's timeline of updates. That script could be updated out from under you, moved out from under you, or deleted — and then all of a sudden all of your builds are failing. So try to avoid lines like number 7. Also, number 7 requires curl: if you don't already have curl installed, you're going to have to install it just to run that line.

Lastly, line 9 includes an entrypoint running a start script — and by default, that runs as root. You really should obey the principle of least privilege: let that script have only the permissions it requires. Create a group, create a user, and let the script run as that user and group. If you're running as root, you'd better have a really good reason to be doing that.

These are just a few problems that I come across frequently in Dockerfiles — definitely not an exhaustive list, but a good place to start. So, best practices again: use trusted or official parent/base images; don't use bulky parent images; utilize multi-stage builds — I often see Dockerfiles that bring in Maven or npm or something like that, and multi-stage builds are a way to do the build in an initial stage and pull only what you need into a final stage, keeping the image really small, so take some time to look at the documentation for multi-stage builds if you're actually building your software with a Dockerfile; specify versions of all packages; use a .dockerignore file — it works like a .gitignore file, but it's not the same thing; make your external resources internal; and do not run your processes as root. I'll pull several of these together in the sketch below.

All right, your options. If you want to use Dockerfiles and you're happy with that, you can obviously use Docker Desktop. It is not free for organizations above a certain size — I believe it's a certain amount of revenue or a certain number of employees — but consider the advantage of having that support. When you're using an open source tool, you're going to need to support it yourself, so look at that cost and see if it makes sense for you. I would do it if it makes sense: it's consistent, it's easy to install across the board on Mac, Linux, and Windows, and it's just a really good developer tool to have in your toolbox. buildah also uses Dockerfiles — I'll put the link here if you want to check it out. It is intended for Linux, so you'll probably be happiest with it if you're working in a Linux shop. Docker is not required to run buildah, so there's your alternative, but you will need Podman to start and manage containers — buildah doesn't do everything that Docker Desktop does. In combination with other tools like Podman, though, you'll get what you need done.
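Before moving on to the options that don't require a Dockerfile, here's a hedged sketch pulling those best practices together — a corrected take on the contrived example from earlier. The registry host, image tags, and artifact names are hypothetical, and it assumes a Java app built with Maven, but the shape is the same for other stacks:

```dockerfile
# Build stage: the JDK and Maven live here, and none of it ships
FROM registry.example.com/docker-trusted/maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Final stage: a slim, pinned, trusted base with only the built artifact
FROM registry.example.com/docker-trusted/eclipse-temurin:17-jre-alpine
RUN addgroup -S app && adduser -S app -G app
USER app
WORKDIR /app
COPY --from=build /build/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Note that the final stage copies only the built jar out of the build stage, so build tools and caches never reach the production image, and the USER instruction means the entrypoint no longer runs as root.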
Now, here are some options that do not require a Dockerfile. Buildpacks — I'll put a link here for that — are strongly opinionated builds. They detect the type of app you're working with: for example, a Python buildpack might look for a Python-specific file like setup.py or requirements.txt, and a Node buildpack might look for a package-lock.json file. Then there's a build phase as part of that process — maybe running a pip install or an npm install. The pros: they're pretty simple, and these buildpacks are maintained — the Cloud Native Buildpacks project specifically is part of the CNCF. You will need to install a tool called the pack CLI in order to work with them, and they have some good tutorials for you to play with if you're just learning.

Build plugins are the other option we talked about. Jib is a plugin you can add to your Maven or Gradle file. It does a really good job with Java projects: instead of having a fat jar, it splits things up in a way that makes the image a little bit smaller, and it's pretty easy to add the plugin. One thing about it: it builds and pushes the image to a registry. I have mixed feelings about that push — it seems like something a developer wouldn't want to be doing, or you might only want to push to a development-specific registry; it certainly wouldn't be the same one your continuous integration pushes to, for example. The Spring Boot Docker plugin for Maven and Gradle is also easy — read the documentation, obviously, then add the plugin. It actually uses Paketo buildpacks. I didn't know that at first, because I didn't read the documentation; I just wanted to run it and see how it worked, and I realized it was actually pulling these external builder images. That's OK, but you might want to consider relocating those images under your own control and management, in your private registry, so you can handle updates appropriately — and you may want to do a custom buildpack, for example.

All right, managing these containers. I just talked about registries: whenever we build our container images, or pull them for the purposes of launching containers in a deployment environment, we need to get these base images — or our final production images — from somewhere. Where are we storing them? Something I see missed a lot: by default, if a registry isn't specified, or an image isn't tagged with a registry, Docker Hub is assumed. So line 1 from our previous Dockerfile — we probably want to change that: add our private registry, tag that image, make it clearly a trusted image rather than an untrusted one, move it to our registry, and then refer to it as such.

One other thing to mention about managing containers: once an image has been built during continuous integration, it should not be rebuilt anywhere along the pipeline. Instead, as that version of the container image passes tests and other verification processes, it should be promoted — that means moved or copied to a QA, then a staging, and finally a production registry or repository. This way you can be assured that exactly what was tested is what is getting deployed.
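As a minimal sketch of that idea — the registry hostname and repository names here are hypothetical, and in a registry like Artifactory the promotion itself is typically a server-side copy rather than a manual re-push — the key point is that the same built image moves forward, never a rebuild:

```sh
# CI builds and pushes exactly once, with an immutable version tag
docker build -t registry.example.com/docker-dev/myapp:1.4.2 .
docker push registry.example.com/docker-dev/myapp:1.4.2

# Promotion re-addresses the very same image; nothing is rebuilt
docker pull registry.example.com/docker-dev/myapp:1.4.2
docker tag  registry.example.com/docker-dev/myapp:1.4.2 \
            registry.example.com/docker-qa/myapp:1.4.2
docker push registry.example.com/docker-qa/myapp:1.4.2
```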
Ultimately, let's talk about securing these things. We want to be able to launch containers from a specified container image and be reasonably confident that the container won't be immediately vulnerable to attack. Obviously there are additional infrastructure and design concerns here, but one of the easiest and best things we can do is regularly scan our container images for known vulnerabilities — and for new vulnerabilities that are discovered over time. Security like this used to be something tacked on at the end, but now there are ways to detect issues earlier in the development process. That includes scanning before checking in, scanning during or after your CI builds, scanning during and after testing, scanning your release bundle, and scanning periodically or even on demand to pick up new information. There are a number of different ways you can utilize JFrog Xray, and I've listed several of them here: there are IDE plugins, there's Frogbot for GitHub repos, the JFrog Docker Desktop extension, the JFrog CLI — which also has an API for Xray — and then of course the JFrog Platform itself.
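Since the CLI route was just mentioned: as an illustrative sketch only — it assumes a JFrog CLI already installed and configured against your platform instance, the image name is hypothetical, and the exact syntax is worth confirming against the current JFrog CLI documentation — an on-demand image scan looks roughly like this:

```sh
# Scan a locally built image against Xray's vulnerability data
jf docker scan registry.example.com/docker-dev/myapp:1.4.2
```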
Now I'd like to show you what a Docker Desktop scan looks like with the JFrog extension. Here's the extension; you can add it through this process. You can set it up to connect to an existing environment, and it'll also give you an option to create a new environment if you don't have one already. This accesses everything I have available on my local machine. I'm just going to take one of these images, scan it, and see what comes up — and we get a pretty exhaustive list of vulnerabilities that we need to take a look at. We can drill down into each one of these and get a summary and more information, and we can also find out exactly which layer is concerning, so we know exactly what we need to update. Pretty nifty. Take some time and check out some of the others.

I think I can show you one more, if I remember my login information — I want to show you what it looks like in the JFrog Platform. If you already have a SaaS version available to you, you can go to the Packages view and look for any Docker packages you're interested in. I'm going to choose this one and drill down into this version of this "hello" package. I have some Xray data here, and again, it tells me more information about all the CVEs; I can drill down into each one of these and find out more. So there's a lot I can do here — and a lot of this is also available to you using the JFrog CLI, so you can make decisions along your pipeline about what to do without having to use a GUI like this.

All right, that is that. I'd like to open it up for any questions, and it looks like we had a couple come in. One was: do you have any workshops? Yes, we do have workshops. I didn't get into a lot of implementation here — this was more about getting your mind working, making sure you're thinking along the right lines and about the right things when you're building your pipelines and building containers. If you Google "JFrog upcoming workshops," you'll get to a page with a list of workshops that are coming up. They're fairly frequent and at various times and time zones, so take advantage of those.

Let's see, another one here: oh, is using the JFrog container registry free? Can I try it out? You absolutely can — let me show you. I have a free instance here; this is a free tier that I've signed up for. If you go to JFrog.com, there's a button that says "Try for Free" or "Start for Free." Go ahead and click on that and you can sign up for an account and play with the JFrog container registry. Once you have your environment set up and log in, there is a Quick Setup section where you can set up your Docker repositories if you like — but what I want to point out here is the Learning Center. There's actually a video there on exactly how to do that with Docker, so I would start there, see how far you get, and then refer to the documentation after that.

All right, next: where can I learn more about JFrog Xray, and where can I try it? Same place. You can also play with JFrog Xray there — you can learn how to set up watches and policies, that kind of stuff. And if you go to JFrog.com and navigate to the Resources section, there's quite a lot available about Xray there; that's another option for you.

All right: what IDEs are supported by the JFrog IDE plugin? That is a very good question — and I'm sorry I keep jumping back and forth to a browser, but I just want to bring up this link. This is our IDE integration documentation, and it gives you all of the IDEs we currently support. Of course my favorite is IntelliJ; I use that one all the time, and I do use this plugin. It's very nice: I can make changes to my pom file and tell immediately, without even checking my code in, whether there's a problem with a package I've added. Pretty cool. I would also recommend Frogbot — that would be a good one to integrate into your source control if you're using GitHub repositories. Here's the link; check that one out if you get a chance. It's pretty cool.

All right, I think that's it — that's all we have time for today. Like I said, this webinar recording will be sent out, for those of you who couldn't make it live today. For the rest of you, thank you for coming! If you have any more questions, we will try to collect those and get back to you — you'll probably get some follow-up information as well. So once again, thank you all for coming, and good luck working with containers in your pipelines!
again so this this is a really good opportunity to meet developers find them where they are and be able to have these conversations especially with you know new projects and everything coming out finding out what their woes are and hopefully being able to help and make lives easier i am a java champion and a doctor captain so i try to keep on top of the latest and greatest in those two technologies here on this slide is my twitter handle and my linkedin feel free to reach out ask questions anything like that i am available and certainly pass on any questions that i can’t answer to those that can today on the agenda i will be talking about uh just how containers are used today and maybe how that has changed over time uh we’ll then be talking about building them responsibly um i won’t go into too much detail here but i will pick on some of the most common things that i see in docker files and then a couple of other suggestions and things to think about when you’re building your own containers we’ll talk about where we should be concerned with containers in our software pipeline how we should manage them and we’ll just touch on that and then we’ll talk a little bit about securing our containers what our options are what we have available to us so i remember that there was a time when using docker containers in production was considered particularly risky and not something that i did early on in my career certainly even though the concept of containers has been around for a long time watching them become so widely used over the past decade has been an incredible experience this diagram here actually comes from a page on the cloud native computing foundation’s website this doesn’t have anything specifically due to do with containers but i do like how it describes the different stages of projects and the types of users that adopt a project over time and i think it’s a good one to apply to container usage or um to even you know darker usage over time like i said containers are nothing new they’ve been around a long time but um it it took a while for the use of those to catch on in production environments if you were to ask me today where we are right now um i would guess we are somewhere near the peak of this diagram maybe a little bit to the right um starting to look at the conservative adopters um there’s an argument that we’re not quite there yet but i think we’re pretty close to that there are some reasons that we can point out events that have happened in the past reasons why we’ve seen this explosion of container usage one of them is in 2013 of course docker became open source that was a pretty big development in 2015 though even more happened in fact on june 22nd 2015 the establishment of the open container initiative was announced and this is an organization that’s under the linux foundation it had the goal still has the goal of creating open standards for container runtimes and image specification docker is a heavy contributor and they have donated some of their implementations um some specs but in the announcement that was made of this new organization it was said that over 20 organizations were involved in this so it was true that containerization had evolved to this point to such an extent that a number of organizations wanted to work towards some common ground for the benefit of all one month after the oci was established the cloud native computing foundation or the cncf was established and part of that announcement was the official release of kubernetes 1.0 which was donated by 
google to the cncf so along with containers themselves becoming more widely used we now have advancement in the orchestration of these containers as well and it seems that 2018 around that year can be viewed as the year when containers crossed over into a popularity zone it’s been very interesting to see this explosion of wide widespread use of containers and also the beginning of research being done by different companies on their use in production environments and here’s one example of that this is reports that were done by systig this information came from those systig is a company that provides a really powerful monitoring tool it’s a troubleshooting tool for linux you’re probably aware of this if you’ve if you’ve been working in production environments quite a bit but um one thing to note is i i went back in time and tried to find you know the earliest that made sense to report on and in 2017 they had a report where they analyzed 45 000 containers now these are all containers that they had access to you know obviously containers that were using systig they didn’t really have a diagram or anything to list the runtimes that were being used because 99 of those were docker at the time so it didn’t make sense to break them out the next year in 2018 they repeated this process did the same type of reporting reporting on different run times that were in use and they observed 90 000 containers and here we start seeing other container runtimes besides docker coming on the scene so that’s pretty interesting to look at in 2019 though the report jumped up to 2 million containers today that’s not a large number but back then it seems like a pretty big jump from 90 000 to 2 million um they say it includes both sas and on-prem users these links to these reports are on the slides they’re definitely worth taking a look at there’s some interesting information in there this particular one shows a growth of container d and i i want to note that docker as a runtime although it’s being less and less used these days um docker actually uses container d as its run time now so that explains why container d is becoming more and more popular and you see runtime usage of docker decreasing doesn’t mean that docker has gone away or is less popular just means that the run time involved is more aligned with the orchestration that’s available today um another systig report 2020 and 21 we’re still looking at 2 million containers they do specify in this report that this is only a subset of customer containers so there’s more than 2 million now and then the last report i’ll show just an interesting diagram this is three million containers for 20 21 and 22. 
pretty interesting to see the division of the runtimes here found more evidence that supports that turning point in 2018 that’s provided by a data dog which is another organization that provides monitoring solutions for applications and i took this particular graph from a report that was posted in 2018 it was called eight surprising facts about real docker adoption this graph takes data that was collected from 2014 to 2018 and you can see the progression of adoption increasing with uh 25 percent of 10 000 companies now adopting docker really interesting also in the methodology for this report they said that data was being taken from 700 million containers that’s pretty wild again there’s a link there to that report if you’re interested in taking a look at that in 2018 uh datadog also started focusing more on orchestration and looking at and observing runtime usage much like much like those cystic diagrams that i displayed earlier and this quote was taken from the data doc research report called eight emerging trends in container orchestration it was posted at the end of the year 2018 in december and again the link is noted here so if you get a chance check it out because there’s a lot of other interesting observations that are made here and in later reports um but the quote that i pulled out from the top of this report was containerization is now officially mainstream and 25 of data datadog’s total customer base has adopted docker and other container technologies half of the companies with more than 1 000 hosts have done so so pretty incredible back when i might ask an audience at a conference whether anyone’s using containers maybe a scattering of hands would go up now it’s a lot anyone that is dealing with cloud native infrastructures dealing with applications that are composed of microservices now it’s a pretty popular thing now just because something is popular does not mean that it’s secure especially in cloud native environments you can’t take that part for granted and you also can’t take that performance or efficiency for granted how you are packaging your application or service into containers will make a huge difference on both fronts so don’t think that just because the technologies are more advanced today that you don’t have anything left on your plate to do but use them there are ways that you can basically you know cause yourself some issues and use them incorrectly uh before we talk about that we will get into a little bit of detail on that um let’s just talk about what happens in a typical software pipeline and even before we started adding containers to the mix what all is involved in our development and delivery process so this is a typical pipeline that’s displayed here it has a number of different steps it’s huge it’s complicated don’t expect you to be able to see everything that’s here all the little tiny logos on this screen and all the text but um just note that basically it goes all the way from initial development uh through continuous integration through going through integration with build servers with build tools and dependency managers going through you know testing processes and then ultimately being deployed into a production environment now what’s missing in this particular diagram are the steps that are involved in monitoring and other operating tasks that should be happening after deployment into production you generally see those steps those aspects in the infinity software development life cycle diagrams but today we’re going to focus on the steps that 
happen up to deployment containerization uh can it can be argued and i’ve heard this before that maybe this shouldn’t be a concern of a developer or anyone even close to that side of the pipeline but the problem is for now at least containerization is often part of the well it is part of the build process and knowing how something is being built and then later understanding how it’s going to be deployed that clearly affects decisions that are made by developers uh clear back at the design stage so we’ve seen this uh with the advent of containerizing microservices for example so we can’t developers aren’t going to be able to just let this go if you are dealing with writing applications that are intended to be and developed and or deployed in a cloud-native infrastructure in an environment like that you’re going to need to learn how to work with containers this is a more simplified version of the pipeline that was shown in the previous slide it goes all the way you know starts a development goes to continuous integration goes through qa testing then maybe a release process ultimately to deployment where in this process should we be concerned about containers um we already go through this process with you know just our node application or python application or java application where do we have to be concerned with containers now where does that fit in and it turns out like i said containerization is part of building and part of deploying and devs need to be able to do those things in fact we do those things uh repeatedly every day when we’re working on our projects so uh devs design they code they build they test they troubleshoot uh they repeat all of that um devs need to be able to re reproduce problems especially if they’re working on you know bug fixing they need to be able to reproduce a problem that may be required that might require a specific version of an app um to be run in order to reproduce the problem and that’s going to be in a container you want to be consistent with where the problem is being discovered being able to dev test or sanity test a bug fix or a new feature even that might involve deploying to a development environment or even running a container on your local development machine it makes sense to be able to deploy in pretty much the same way that the application or service would be deployed in a production environment which would be in a container so that follows that devs need to understand how to build and run containers okay the continuous integration process i usually think of just build servers in general um updates are merged uh in source control this is where new artifacts are built this is where you know automated unit testing happens um artifact storage on success of the build in the tests um alerts are sent and builds fail um if the you know if it doesn’t pass the unit tests things like that and then that process is repeated over and over again the artifacts here that are being referred to are not just the libs in the libraries you know that are used in the source code of the application the container image itself is included in this list the container image is considered an artifact so we definitely need to be concerned about it here too same for qa testing this artifact and all the other artifacts that are concerned in our application they’re all going to need to be retrieved uh we need to provide uh feature verification um this is where you might run further integration testing that could be you know manual or automated um this is where when all the 
OK, the continuous integration process: I usually think of build servers in general here. Updates are merged in source control, this is where new artifacts are built, this is where automated unit testing happens, artifacts are stored on success of the build and the tests, and alerts are sent and builds fail if the unit tests don't pass, things like that. Then that process is repeated over and over again. The artifacts being referred to here are not just the libraries used in the source code of the application; the container image itself is included in this list. The container image is considered an artifact, so we definitely need to be concerned about it here too. Same for QA testing: this artifact, and all the other artifacts that make up our application, are all going to need to be retrieved. This is where you provide feature verification, and where you might run further integration testing, manual or automated. When all the tests pass, this is where you might go through a round of promotion of all of these artifacts, which means staging them, getting them ready for the next step in the pipeline, and then again, repeat. Releasing might involve another artifact promotion, and you may be creating release bundles at this point. These artifacts, again, are going to be container images along with other artifacts; a release bundle will likely include the container image. And then finally deployment: obviously, to deploy something you need the artifact to deploy, and that is the container image. Clearly we have plenty of places where we should be concerned, but in my opinion, most of our security and efficiency concerns with regard to containers can be addressed near the beginning of the pipeline, in the development and continuous integration stages. These are the stages that result in the artifacts and container images that will potentially move all the way to production, and this is where the container images that will be used to launch our production containers are produced, so it makes sense to focus on these areas. There are quite a few methods used to build containers, so let's move into building container images responsibly. That's primarily a task that developers and build servers are going to be doing, so it makes sense to spend some time on this portion. How and when you build your container images will make a big difference in both security and in efficiency and performance. Under the "how" category, you can choose solutions with or without Docker. I advise people to just start with Docker Desktop to get your feet wet, especially if you're new to containers. The documentation is excellent; they do a really good job of walking you through the entire process and explaining exactly what a container image is, what it means to run a container, and what's happening under the covers. It has all of the features you need to build and run containers, to store them, and to push them into a registry, whether public or private. It takes care of the caching mechanism as well, and launching containers on your local machine, so it's pretty advantageous. If for whatever reason you do not want to use Docker, another option for you might be Buildah. If you're a Linux shop, this is probably something you've already looked into and considered; it's just another alternative for building images. The other thing to consider is whether or not you need to write a Dockerfile. I talked before about how I've heard that containerization maybe shouldn't be in a developer's lap, and one of the arguments I hear the most is: why do we need to learn to write yet another thing? Now we have to learn how to write a Dockerfile, and it can get pretty overwhelming with everything developers are asked to learn and do these days. But I've found that writing the Dockerfile gives me a little more control over how these images are built and produced, so I personally prefer using a Dockerfile. I think it's a pretty standard way to communicate how the layers are built and what exactly is included in your containers. If you do not like writing Dockerfiles, or you're looking for ways to get out of that, there are options. Buildpacks are one of them. Another is build plugins: if you're already using Maven or Gradle, you can simply add one of these plugins to your POM file or your Gradle build file and use it that way.
Jib is another option that's also used as a plugin. When do you build these containers? Obviously during active development; developers are going to be building these all the time. One thing that drives me nuts, and it's happened to me over and over in the past, is when a change is made and checked in, but the developer forgot, or just didn't try, to build and run the container on their machine. Maybe all the unit tests pass, but the moment you try to launch the container there could be something wrong. Maybe some configuration isn't quite right, and the container just doesn't run; it dies immediately. Not helpful when that gets pushed into source control and the next developer who pulls it has to figure all that out. So developers need to be able to run these on their machines, and also, like we talked about earlier, for troubleshooting purposes. Another time is during continuous integration: builds are going to be happening there all of the time, so that's another point at which you would be building container images. There are other times I've seen container images built; I don't believe that's best practice, so I'll address that in a later slide. Since using a Dockerfile is pretty common, let's start there. But first I want to talk about dependencies, because that's going to be the biggest part of this. I know the iceberg is an overused graphic, but the point that software is potentially made up of a ton of components that a developer doesn't necessarily have firsthand knowledge of cannot be overstated. Applications and services built today are more complex than ever. Developers generally don't want to recreate the wheel if it isn't necessary, and that means pulling in a lot of libraries: code you didn't necessarily write yourself. They could be open source components, or they could be libraries written by other teams internally; it doesn't necessarily need to be open source, it could just be another team that has responsibility over that part of the software. So clearly we need to pay attention to everything coming into the build, because you could potentially be bringing in things that are vulnerable, or things that could make you susceptible to attack. Let's talk about some of those things. This is a very contrived Dockerfile, written to illustrate some points to consider when building container images. But don't get me wrong: if you go looking online for examples of how to write Dockerfiles, you will more than likely find Dockerfiles that suffer from some of these same issues. The examples you find online are meant to be just that, examples: simple, for demonstration purposes, and not necessarily production worthy. Obviously that doesn't just apply to container image building examples; it applies to any other code you find online too. You really need to understand it, take the time to read the documentation, and not just copy and paste. So let's walk through this one and pick out the issues. I won't pick out everything, just a few of the obvious problems that I see pretty commonly.
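The slide itself isn't reproduced in this transcript, so here is a hypothetical reconstruction of that contrived Dockerfile. The image and package names are placeholders; the comments map each instruction to the slide line numbers discussed below.

```dockerfile
# line 1: an unvetted parent image, used without any due diligence
FROM untrusted-parent-image

# lines 2-3: OS packages installed with no version pinned
RUN apt-get update
RUN apt-get install -y some-package

# line 4: a version is pinned, but it is old and known to be vulnerable
RUN apt-get install -y old-package=1.0.0

# line 5
WORKDIR /app

# line 6: copies the entire build context; without a .dockerignore this can
# include secrets, local config, test files, and logs
COPY . .

# line 7: fetches and runs a script from an external source we don't control
# (and it requires curl to already be present in the image)
RUN curl -sSL https://example.com/install.sh | sh

# line 8
EXPOSE 8080

# line 9: the entrypoint script runs as root by default
ENTRYPOINT ["./start.sh"]
```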
Number one: that FROM line. This is the parent image. Dockerfiles can be written in a way where there's a hierarchy; you can start from a base image or a parent image, and the rest of the Dockerfile adds to that. That's what we have here on line 1: FROM untrusted-parent-image. Obviously you're not going to see something so obviously a problem, named as such, but I see this a lot: people will pick a particular base image and just use it because they've seen it used elsewhere, without doing the due diligence to figure out whether it's actually safe to use. In fact, let's take a moment and talk about official base images. You can find official images on Docker Hub, and I'll just show you: let's go to Docker Hub and do a search for an Alpine image. You can see the Alpine image is a Docker official image. Docker has a team dedicated to keeping track of these images: making sure they're open, that it's obvious what's in them, managing updates, paying attention to news of new vulnerabilities coming out, and making sure everything's up to date. Images on Docker Hub, the public registry, are available to anyone and can be posted by anyone, so if you're going to use an image from there, it's best to use an official image. If it's not an official image, you need some other reason to consider it trusted. One way to be able to trust an image is to have its original Dockerfile, the original artifacts and files that were used to build that image in the first place. And here's one way to look into an official image: pull up a search engine and search for "docker official images." First you'll get the documentation link, but go down to the GitHub link, and this is where it gets interesting. This is the GitHub repository where these official image definitions actually live. There is a library directory, and in it you'll see all of the official images. We pulled up Alpine earlier; if you drill down into that file, you'll see a git repo listed (all of these are consistently done this way). Let's take a look at that git repo: this is where the Alpine official image is managed, and in this repo we should be able to find the original Dockerfile. Some of these may have branches for different versions, or different directories for different variants. Let's go to the latest version of Alpine; now we can see directories representing the different variants you can build. If we drill down into this top one, we find a Dockerfile, and here you go: the initial Dockerfile for the Alpine image. Now, whether you can open up everything it references and look inside will take a little more work on your part, but notice this first line says FROM scratch. That tells me this is a base image, meaning you can't go back any further. Some official images will have another parent image listed here instead of scratch; it will be another official image, and you need to repeat this process for it in order to go all the way back to the point where you reach scratch. So if you're ever curious how these official images were built, this is how you can find the original Dockerfiles.
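If you'd rather do that digging from a terminal, here's a minimal sketch, assuming the docker-library layout described above; the exact directory paths vary by image and branch, so the Alpine paths below are illustrative.

```sh
# The official-images repo records, for every image and tag, the git repo
# and commit its original Dockerfile comes from.
git clone https://github.com/docker-library/official-images.git
cat official-images/library/alpine        # shows GitRepo:, GitCommit:, Directory: per tag

# Clone the repo it points to and inspect the Dockerfile itself.
git clone https://github.com/alpinelinux/docker-alpine.git
cat docker-alpine/x86_64/Dockerfile       # for a true base image, starts with FROM scratch
```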
OK, moving on. Lines 2 through 4: the problem with those lines is that no version is specified. In this example the parent image didn't have all of the packages necessary for whatever it is we're trying to run, so some packages were installed. In lines 2 and 3 we have packages with no version; in line 4 we have an old, vulnerable package. It does have a version specified, so there's a little more control there, but it's vulnerable, it hasn't been updated, and we even know it's vulnerable, so that's pretty shameful. I see this all the time. It's easy to forget that OS packages need to be managed the same way as our libraries, our source code, and the packages built from those. Make sure you're always specifying your versions. The reason is that the next time this image needs to be built, you're not going to get the same image; you'll never get the same image again. You'll likely get a newer package, since you didn't have the version specified, and that can cause you quite a bit of troubleshooting trauma, especially in your continuous integration process. That's generally where I see this happen, because in continuous integration you like to build fresh, with no cache of old packages and resources involved. That's exactly where you want to be able to build without suffering from moving parts that can break things and then take a while to diagnose. OK, line 6, this COPY statement. This could be an efficiency and performance problem: if you have not set up a .dockerignore file, you could be copying things you shouldn't. This line says copy everything from my working directory into the image. You could be copying secrets, local configuration that shouldn't be in a production environment, test files, artifacts, or logs that you really shouldn't be putting in a production image. All it's going to do is make the image bigger and bulkier. It will also increase build time, because when you're doing a build, all of those files need to be sent to the Docker daemon as the build context before any of them can be copied into the image, and in this case we're copying everything. Just the process of moving all those files around can cost you a lot of time in your builds, especially in continuous integration, where you're likely building repeatedly throughout the day. All right, line 7. I see this a lot too, and it really bothers me to see curl statements, wget statements, and things like that. To me those indicate an external resource that you don't necessarily have control over. Now, it's one thing if this is being pulled from a private repository that you manage, but I've also seen the case where this might be an installation script from another organization, maybe for a product you're including in your image that you need their script to install. A better way would be to bring that script internal and manage it yourself. That way you're not on someone else's timeline of updates: that script could be updated out from under you, it could be moved, it could be deleted, and then all of a sudden all of your builds are failing. So try to avoid lines like number 7. Also, line 7 requires curl, so if you don't already have curl in the image, you're going to have to install it just to run that line.
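For that COPY problem, a .dockerignore file is the fix. A minimal sketch follows; the entries are illustrative, so tailor them to your own project layout.

```
# Keep version control history, secrets, local config, and build noise
# out of the build context sent to the Docker daemon.
.git
.env
*.log
**/node_modules
target/
test/
docker-compose.override.yml
```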
Lastly, line 9 includes an ENTRYPOINT running a start script, and by default that script runs as root. You really should obey the principle of least privilege: let that script have permissions only for what it requires. Create a group, create a user, and let the script run as that user and group. If you're going to run as root, you'd better have a really good reason. These are just a few problems that I come across frequently in Dockerfiles. It's definitely not an exhaustive list, but it's a good place to start. So, best practices again: use trusted or official parent and base images. Don't use bulky parent images. Utilize multi-stage builds: I often see Dockerfiles that bring in Maven or npm or something like that, and multi-stage builds are a way to do the build in an initial stage and then pull only what you need into a final stage, keeping that image really small, so take some time to look at the documentation for multi-stage builds if you're actually building your software with a Dockerfile. Specify versions of all packages. Use a .dockerignore file; it's like a .gitignore file, but not the same thing. Make your external resources internal. And do not run your processes as root.
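To make a couple of those practices concrete, here's a minimal sketch of a multi-stage build that also drops root. The base image tags, user name, and JAR path are illustrative, not from the webinar.

```dockerfile
# Build stage: the JDK and Maven exist only here, not in the final image.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Final stage: a small, pinned runtime image with a non-root user.
FROM eclipse-temurin:17-jre-alpine
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
# Assumes the build produces target/app.jar; adjust to your artifact name.
COPY --from=build /src/target/app.jar app.jar
USER app
ENTRYPOINT ["java", "-jar", "app.jar"]
```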
All right, options for you. If you want to use Dockerfiles and you're happy with that, you can obviously use Docker Desktop. It is not free for organizations above a certain size; I believe it's organizations that make a certain amount of money or have a certain number of employees. But consider the advantage of having that support: when you're using an open source tool instead, you're going to need to support it yourself, so look at that cost and see if it makes sense for you. I would do it if it makes sense; it's consistent, it's easy to install across the board on Mac, Linux, and Windows, and it's just a really good developer tool to have in your toolbox. Buildah also uses Dockerfiles, and I'll put the link here if you want to check it out. It is intended for Linux, so you'll probably be happiest with it if you're working in a Linux shop. Docker is not required to run Buildah, so there's your alternative, but you will need Podman to start and manage containers. Buildah doesn't do everything that Docker Desktop does, but in combination with other tools like Podman you'll get what you need done. Here are some options that do not require a Dockerfile. Buildpacks (there's a link here for that): these are strongly opinionated builds. They detect the type of app you're working with. For example, a Python buildpack might look for Python-specific files like setup.py or requirements.txt, and a Node buildpack might look for a package-lock.json file. Then there's a build phase as part of that process, maybe running a pip install or an npm install. The pros: they're pretty simple, and these buildpacks are maintained by projects that are part of the CNCF; the Cloud Native Buildpacks project specifically is part of the CNCF. You will need to install a tool called the pack CLI in order to work with them, and they have some good tutorials for you to play with if you're just learning. Build plugins are the other option we talked about. Jib is a plugin you can add to your Maven or Gradle build file; it does a really good job of taking Java projects and, instead of producing a fat JAR, splitting things up in a way that makes the image a little smaller. It's pretty easy to add that plugin. One thing about it: it builds and pushes the image to a registry. I have mixed feelings about that pushing of the image; it seems like something a developer wouldn't want to be doing, or you might only want to push to a development-specific registry, something like that. It certainly wouldn't be the same one your continuous integration is pushing to, for example. The Spring Boot Docker plugin for Maven and Gradle is also easy: read the documentation, obviously, and add the plugin. It actually uses Paketo buildpacks. I didn't know that at first, because I didn't read the documentation; I just wanted to run it and see how it worked, and I realized it was pulling external builder images. That's OK, but you might want to consider relocating those images under your own control and management in your private registry, so that you can handle updates appropriately, and you may want to do a custom buildpack, for example. All right, managing these containers. I just talked about registries: whenever we build our container images, or pull them for the purposes of launching containers in a deployment environment, we need to get these base images, or our final production images, from somewhere. Where are we storing them? Something I see missed a lot: by default, if a registry isn't specified, or an image isn't tagged with a registry, Docker Hub is assumed. So for line 1 from our previous Dockerfile, we probably want to change that: add our private registry, tag that image, clearly make it a trusted image rather than an untrusted one, move it to our registry, and then refer to it as such. One other thing to mention about managing containers: once an image has been built during continuous integration, it should not be rebuilt anywhere along the pipeline. Instead, as that version of the container image passes tests and other verification processes, it should be promoted, meaning moved or copied to a QA, then a staging, and finally a production registry or repository. This way you can be assured that exactly what was tested is what gets deployed.
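At the image level, that promotion can be as simple as re-tagging the exact image that passed and pushing it to the next repository; the registry and repository names below are hypothetical.

```sh
# Pull the exact image that passed testing, re-tag it for the next
# repository, and push. The bits are never rebuilt, only copied.
docker pull registry.example.com/docker-dev/my-service:1.4.2
docker tag registry.example.com/docker-dev/my-service:1.4.2 \
           registry.example.com/docker-qa/my-service:1.4.2
docker push registry.example.com/docker-qa/my-service:1.4.2
```

Registries like Artifactory can also perform this kind of promotion server-side, which avoids the pull-and-push round trip.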
Ultimately, let's talk about securing these things. We want to be able to launch containers from a specified container image and be reasonably confident that the container won't be immediately vulnerable to attack. Obviously there are additional infrastructure and design concerns here, but one of the easiest and best things we can do is regularly scan our container images, both for known vulnerabilities and for new vulnerabilities that are discovered over time. It used to be that security like this was tacked on at the end, but now there are ways to detect issues earlier in the development process. That includes scanning before checking in, scanning during and after your CI builds, scanning during and after testing, scanning your release bundles, and scanning periodically or on demand to pick up new information. There are a number of different ways you can utilize JFrog Xray, and I've listed several of them here: we have IDE plugins, there's Frogbot for GitHub repos, the JFrog Docker Desktop extension, the JFrog CLI (which also has an API for Xray), and of course the JFrog platform. I'd like to show you what a Docker Desktop scan looks like with the JFrog extension. Here's the extension; you can add it through this process. You can set it up to connect to an existing environment, and it will also give you the option to create a new environment if you don't have one already. This accesses everything I have available on my local machine. I'm just going to take one of these images, scan it, and see what comes up. We get a pretty exhaustive list of vulnerabilities that we need to take a look at. We can drill down into each one of these and get a summary and more information, and we can also find out exactly which layer is concerning, so we know exactly what we need to update. Pretty nifty. Take some time and check out some of the others. I think I can show you one more, if I remember my login information: I want to show you what this looks like in the JFrog platform. If you already have a SaaS version available to you, you can go to the packages view and look for any Docker packages you're interested in. I'm going to choose this one and drill down into this version of this "hello" package. I have some Xray data here, and again it tells me more about all the CVEs; I can drill down into each one of these and find out more. So there's a lot I can do here, and a lot of this is also available to you through the JFrog CLI, so you can make decisions along your pipeline about what to do without having to use a GUI like this.
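As a sketch of what that looks like from a pipeline, assuming JFrog CLI v2 is installed and already configured against your instance (for example with "jf c add"); the image name is hypothetical and command details can vary by CLI version:

```sh
# Scan a locally built image against Xray before it leaves the machine;
# a failing result can be used to stop the pipeline step.
jf docker scan registry.example.com/docker-dev/my-service:1.4.2
```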
All right, that's that. I'd like to open it up for any questions, and it looks like we had a couple come in. One was: do you have any workshops? Yes, we do have workshops. I didn't get into a lot of implementation here; this was more about getting your mind working, making sure you're thinking about the right things when you're building your pipelines and building containers. If you search for "JFrog upcoming workshops," you'll get to a page that lists the workshops that are coming up. They're fairly frequent and at various times and time zones, so take advantage of those. Let's see, another one here: is using the JFrog container registry free, and can I try it out? You absolutely can; let me show you. I have a free instance here; this is the free tier that I've signed up for. If you go to jfrog.com, there's a button that says "try for free" or "start for free." Go ahead and click on that, sign up for an account, and you can play with the JFrog container registry. Once you have your environment set up and you log in, there's a quick setup section where you can set up your Docker repositories if you like, but what I want to point out here is the learning center: there's actually a video there on exactly how to do that with Docker. I would start there, see how far you get, and then refer to the documentation after that. All right, next: where can I learn more about JFrog Xray, and where can I try it? Same place. You can also play with JFrog Xray in there; you can learn how to set up watches and policies, that kind of thing. And if you go to jfrog.com and navigate to the resources section, there's quite a lot available about Xray there, so that's another option for you. All right: what IDEs are supported by the JFrog IDE plugin? That's a very good question, and I'm sorry I keep jumping back and forth to a browser, but I just want to bring up this link: this is our IDE integration documentation, and it lists all of the IDEs we currently support. Of course my favorite is IntelliJ; I use that one all the time, and I do use this plugin. It's very nice: I can make changes to my POM file and tell immediately, without even checking my code in, whether there's a problem with a package I've added. Pretty cool. I would also recommend Frogbot; that would be a good one to integrate into your source control if you're using GitHub repositories. Here's the link; check that one out if you get a chance, it's pretty cool. All right, I think that's all we have time for today. Like I said, the recording of this webinar will be sent out for those of you who couldn't make it live today. For the rest of you, thank you for coming. If you have any more questions, we will try to collect those and get back to you, and you'll probably get some follow-up information as well. So once again, thank you all for coming, and good luck working with containers in your pipelines!
