Docker Dos & Do Nots @ Jfokus 2022

June 15, 2022


There is a popular phrase intended to communicate that there are many different ways of accomplishing the same thing: there are many ways to skin a cat. Whenever I hear this phrase, I understand the intention, but I always stop and ask who is out there skinning cats??? This talk is all about different ways of creating Dockerfiles. I’ll discuss things that I’ve seen and the reasons why you should be concerned about the details of how your Dockerfile is written. I will share the gotchas that I have learned and some of the pitfalls I’ve fallen into when I first started developing and maintaining my own Dockerfiles. Hopefully, my experience will help prevent you from suffering similar consequences as well as empower you with the reasoning behind certain practices and the avoidance of others. I’ll discuss the pros and cons of different methods and point out reasons why even though you can, maybe you shouldn’t… much like skinning a cat!

View Slides Here

Speakers

Melissa McKay


    Melissa is a long-time developer/software engineer turned international speaker and is currently a Developer Advocate on the JFrog Developer relations team, sharing in the mission to improve the developer experience with DevOps methodologies. Her background and experience as a software engineer span a slew of languages, technologies, and tools used in the development and operation of enterprise products and services. She is a mom, Java Champion, Docker Captain, co-author of the upcoming book DevOps Tools for Java Developers, a huge fan of UNconferences, and is always on the lookout for ways to grow and learn. She has spoken at Kubecon, DockerCon, CodeOne, JFokus, Java Dev Day Mexico, the Great International Developer Summit, and is part of the JCrete and JAlba UNconference teams. Given her passion for teaching, sharing, and inspiring fellow practitioners, you are likely to cross paths with her in the conference circuit — both online and off!

    Video Transcript

    …of today's Jfokus here at A1. Please give a warm welcome to Melissa McKay from JFrog.
    [Applause]
    Thank you, everybody. So glad to see all of you here. For many of you it's the end of your first day at Jfokus, and it has been a pleasure for me to be here so far. How many of you, is this your first time here at Jfokus? Wow, that's like half of you in the audience. That's pretty awesome. Can I just get a round of applause for the organizers of this conference and how well it has been put together so far?
    This is going to be a pretty lighthearted talk. It's a good one for the end of your day, to get you set up for celebrations later, and I promise that I won't actually be talking about skinning any cats during this session. I'm not that type of a person. What we'll talk about instead is how you can tangle yourself up in some bad practices sometimes when you're writing your Dockerfiles.
    Before I go any further: I was instructed not to wear this hat because of the lighting, and I am just so excited that I even have lighting to worry about at a talk right now, because I've been so used to virtual talks and being on Zoom all the time. So this is great. I got a little rebellious and I'm wearing the hat, but since this talk is all about doing things a little bit differently, I will wear my hat a little differently so you can see my face.
    All right. This is my first time in Stockholm, actually my first time physically here. My name is Melissa McKay, and I work for JFrog. I've been with JFrog now for two years, and I became a Java Champion at Jfokus, my first virtual attendance of Jfokus. That was pretty exciting. I also became a Docker Captain during that time period, so I get to spend a lot of time with the other Docker Captains and find out what's coming down the pipe, what's coming up next for Docker and containers in general. This is my contact information. Twitter is probably the best way to reach out to me if you have any questions or anything like that; you could also follow me on LinkedIn.
    Prior to being a developer advocate, and I always find this important to say because it makes a difference, I was a developer. I was a developer for 20 years, all the way from a lowly intern to a principal engineer at a pretty huge company. During that time I found a real passion for speaking. I had an opportunity to go to an unconference, which is a very different type of conference. If you've ever heard of them, or ever get the opportunity to go to an unconference, take that chance. It was probably one of the best decisions I made in my life. I did do it on my own dime, because at the time I was not paid to travel to these kinds of things. It was very, very small. It's not the type of conference where you have someone standing up front talking to you the whole time; instead you have discussions with everyone in a group, so everyone gets to participate and talk about whatever subject is near and dear to their heart. I met a lot of passionate people that way, got involved with other developer advocates, and that's how I became a speaker today. So if any of you are interested in doing that in your career, it can happen. It works.
    All right, how many of you got new pets in the last two years? Anyone? Yeah, there's a few of you. The rest of you just aren't being honest; you've done something in the past two years. I'm actually a cat owner. These are my two friends I rescued, actually my children rescued them, as feral kittens last summer when we were all stuck at home and getting a little lonely. I call them our pandemic pets, our COVID kitties. And again, this is not a presentation about skinning cats; obviously, as you can see, these two are alive and well. Instead I will include a bunch of cute cat pictures for you to enjoy, because who doesn't like that? Anyone already familiar with the image and container talks that I've given knows that I refuse to use any graphic of shipping containers in my slides. It's never going to happen; that's my commitment. And if you're the first to report to me any marketing around a session of mine that has a shipping container picture in it, please reach out to me on Twitter and let me know. I will give you some kind of prize for that.
    In addition to cute cat pictures, I'll be going over some of my early experiences with Docker, and specifically the Dockerfile, and how that development went for me. We'll see if any of you can feel the same pain that I went through, or you'll learn something today so that you do not feel that pain.
    How many of you are using containers right now? Just about everyone. How many of you are using them in production right now? Okay, a little less, but about the same. It's pretty amazing now; even just a couple of years ago I could ask that question and there just weren't that many positive responses. It's very clear how much containers and the use of containers has grown. There are a lot of different reasons for that. Some of it is just that the tool sets around it have exploded and made it a lot easier for developers to get involved and start jumping in and wrapping their applications in containers. Now, we all know it's not a solution for everything, but it definitely is something that makes our lives a little bit easier when we don't have to worry about what environment we're actually running in. We can just wrap it up, throw it out into the cloud, and it runs. It's pretty nice.
    So let's start with the Dockerfile. That's where I started. You pretty much have all the building blocks that you need to get your application out the door and into production. There are all of these commands available to you. This is a fairly exhaustive list, but with these you can pretty much do anything your heart desires. You can do all the right things, you can do all the wrong things, and everything in between.
    They're pretty much building blocks; you can put things together, and it's pretty amazing what I've seen come out of some of the projects that I've worked on. Obviously a lot of you are using containers already, and maybe not just Docker containers specifically, but containers in general, and there are references you can go out to and learn from. When I started, it was not as common. Several years ago I started using Docker containers, and it was pretty tough for me to start. I was on a brand new team and did not even know where to start, so that was quite the experience.
    Today we're going to go through ten things, ten concepts that I gathered and wanted to share. We'll talk about the do nots involved with them and, of course, the related dos. These are things I learned through trial and error when I first started with Docker. Some of you may be aware of these pitfalls already, but my hope is that you'll find one or two or more of these helpful and think about making some edits to the Dockerfiles you might have in the wild right now.
    Several years ago I was brought onto a project that was initially developed by a third-party contractor. The company decided it was time to bring that project in-house and build a team around it, so that my team would be responsible for maintaining that project and making sure that we got new features developed.
    It was a very small team to begin with, and the first thing I realized was that, yes, it was wrapped in a container and there was a Dockerfile in the source. That was the first thing I looked at. Pulling up this Dockerfile, I just read the READMEs that were available on how to get the project up and running, and that's how we started, and chaos went from there. I learned several things through this journey.
    The first was ignoring the .dockerignore file. I did that immediately. I didn't even know it existed, didn't know it was there or what the use would be, so of course I didn't use it. One of the things that annoyed me the most was how long it took to build my images, and this project turned into a pretty big space. Every time we would build, any time anything would change, it just took forever to build on my machine, and that was just because I didn't fully understand how the Docker build context worked.
    Why would you use .dockerignore? This is kind of a contrived example here, but it includes the exact problematic line that we had in our Dockerfile to begin with, and you can see it right there, that third line down: COPY with two dots. If you don't realize this, that line is copying and delivering everything in your current working directory to the daemon and then into the image, including things that you might not want to put in there. One reason to use .dockerignore is to prevent such a scenario. You don't want to send everything to the Docker daemon. First of all, you immediately invalidate your cache for a lot of things, so you don't get to save that time. You may be sending test files or other resources that are not needed in the production image. You may be sending user-specific files; maybe you have your own environment set a certain way and you've edited some files. You may even be sending secrets across. The .dockerignore file works very much the same way as .gitignore, and it just prevents you from putting extra stuff into your image that you don't need.
    So let's be really clear about what this is doing. When you run a docker build command to build an image using this particular Dockerfile that I showed you, everything in the current directory is sent to the Docker daemon as the build context, and everything is then copied into the image. Avoiding adding any test files, user-specific files, or secrets that you've stored locally is important. At first, when I started learning this, I thought I could just improve my build time by explicitly copying only what I wanted, but under the covers everything still gets sent and shipped to the Docker daemon, even if you don't end up using it in your image. In this case, with this particular Dockerfile, you wouldn't; there are just two files being copied. So my project continued to take forever, because I was still shipping all of that stuff. This slide just showed what it looked like: I was happy with the two files that ended up in my quite-a-bit-smaller image, which was nice.
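To make that concrete, here is a minimal sketch of the situation (file names are made up): only two files are copied into the image, yet without a .dockerignore the entire directory still ships to the daemon first.

```dockerfile
FROM ubuntu:20.04
WORKDIR /app
# Only these two files end up in the final image...
COPY app.jar entrypoint.sh ./
# ...but the WHOLE current directory (tests, .git, local
# config) is still sent to the Docker daemon as build context.
CMD ["./entrypoint.sh"]
```

With the classic builder, `docker build .` prints the context size up front ("Sending build context to Docker daemon ..."), which makes an oversized context easy to spot.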
    But that wasn't going to work for me. So here's our first do and do not: don't ignore the .dockerignore file. Here's an example of some of the items you might want to put in there: of course your READMEs, and your passwords if you have any of those. One interesting item is the .git directory. I had been sending my .git directory this entire time and putting it into my image with that explicit COPY.
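A sketch of what such a .dockerignore might look like (the entries are illustrative; tailor them to your own project):

```
# .dockerignore -- matching rules similar in spirit to .gitignore
.git
README.md
*.md
.env
secrets.txt
**/*.log
target/
node_modules/
```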
    Once I learned about .dockerignore, I fixed that. All right, number two:
    Using untrusted base images. I'm going to confess: when I first started ramping up in Docker, the first thing I did was just follow those README instructions. I got everything up and running and I was happy with it. I didn't want to touch it; it was working, it was fine. I wanted to bury myself in the code I was familiar with, which was Java, and there was some JavaScript in there, and surprisingly there was a Python component too. It was kind of a fun project. But I didn't realize then how images actually worked, until later, when our build broke with an error message that said, sorry, your base image doesn't exist. I didn't know what that meant, so I went digging around. It turns out that the base image in our Dockerfile was actually getting pulled from a private repo of the original third-party contractor, which had been out of the picture for over a year. So good luck trying to get that base image; it was gone. We had to make our own, so we had to learn very fast how base images work and how to put one together ourselves for our project.
    A problematic base image probably isn't going to be as obvious as this one, named evil-image, but my point here is to pay attention to where your base images are coming from. This is becoming more and more important every day, especially with the supply chain issues that we hear about more commonly now in the media. Pay attention to where they are coming from. They are dependencies like anything else; they really should be pulled into a dependency management system of some kind, like Artifactory, or managed along with the other binaries, wherever you have them stored, everything that you rely on for your build. There are official images that come from
Docker Hub. Docker has a dedicated team that is responsible for reviewing and publishing those images and their content. That team works in collaboration with upstream software maintainers, security experts, and the greater Docker community as well, so you're a little bit safer if you're pulling official images from Docker Hub. But just realize that you're still relying on a third party, so I would suggest bringing anything you rely on into a private registry of some kind, so that you can track it, make sure it's not changing out from under you, and make sure it doesn't disappear like what happened to my team. It'll also help you avoid malicious packages, get the latest updates, that kind of stuff.
    So here's the second do and do not: make sure that you're pulling from a registry that you recognize, and preferably one that you have control over. Also understand that, by default, everything comes from Docker Hub unless you explicitly tag an image to indicate that it should come from your registry. And understand that with the FROM line there's an inheritance: that base image could be based on another image, which could be based on another image. It is worthwhile to do your research and figure out exactly where those are coming from and what is inside. It makes sense.
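As a sketch, that difference in a Dockerfile might look like this (the registry hostname is hypothetical):

```dockerfile
# DO NOT: a bare image name silently resolves to Docker Hub
# FROM openjdk:11

# DO: pull from a registry you recognize and control
FROM my-registry.example.com/approved-images/openjdk:11
```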
    We should know this about our applications. So I got a little scared, because I didn't even know where to look for a base image that was going to work for us. I had to do a lot of homework, tear some things apart, and figure out what was relying on what. So how do we make this decision? What do we use, especially as Java programmers? Do we just go out to Docker Hub and search for Java? Is that how we're coming up with our images? Do we just google and take the first example that says FROM openjdk? Is that the right thing to do?
    There's also this whole other nomenclature, if you're not real familiar with Linux. These are actually Linux distributions: when you see names like buster and bullseye, those are different Linux distributions. For Java specifically, you would recognize the JRE and the JDK, so you might decide, well, if I want a smaller image, if I want less in my production image, I would choose a JRE over a JDK. You could certainly just limit based on your Java version, but once you start learning these other names as well, you'll see them in other projects, not just Java projects. There's no mystery here once you realize what those names indicate, and I just listed them here for you.
    If you need to install additional packages on top of any of these images, you might need to pay attention to the Linux version at this point. And just to note, the Alpine version is interesting. Alpine is very popular; it's considered to be one of the smallest, and it produces small images that are really nice, especially for container use. However, just realize it uses a slimmer C library that's different than what Debian uses. We're more used to glibc, the GNU C library; Alpine actually uses one called musl, and it requires an OpenJDK port. In fact, if you were to look at the documentation for the official OpenJDK image, it says that the port for Alpine is not even in a supported release by OpenJDK, and according to their documentation, Alpine images are only released for early access OpenJDK versions and are not updated after GA. So I would consider looking for something different than the official OpenJDK Alpine image. And it makes sense, if you're already paying for support for a specific distribution, from Azul perhaps, or from IBM, to stick with their official images. I do know they have Alpine versions that are maintained and supported.
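As a rough sketch of that tag nomenclature (exact tag names vary by publisher, so check the image's documentation before relying on these):

```dockerfile
# Tags typically encode: <java version>-<jre|jdk>-<linux distro>
# A full JDK image, useful for building:
#   FROM eclipse-temurin:17-jdk-jammy
# A slimmer JRE-only image, usually enough for running:
FROM eclipse-temurin:17-jre-jammy
```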
    All right, moving on to number three: not defining versions. This particular outcome is gorgeous in a cat; not so gorgeous in a project, when you have incompatible versions flowing in and you don't have control over all of those variations and variables. And this one kicked us. As you would expect, our build just started randomly breaking one day, and there was a lot of troubleshooting involved trying to figure out what changed. One way to solve that is to make sure that your artifact management has a way to identify for you every single component of your project, to make sure that it is actually the same. That is one way to manage this a little better than we did. We unfortunately had everything in the same repo, which made it really difficult for us to track things specifically. And one of the things I didn't realize mattered was not just the versions of our application's dependencies but the OS packages too. In fact, this is what happened to us: we ended up building a new image, no caching was involved, so everything new came in, and there was an incompatibility there. It took a while to figure out exactly which package that was. Once we did, we learned we needed to make sure to specify the version we were interested in using. You should always, always have a bill of materials for your build. This is more important now than ever: know what version of everything is installed. Artifactory has metadata called Build Info which is super helpful with this. You can eliminate multiple moving parts and control your version updates a little better this way.
    So, the dos and do nots here. Again, a contrived example, but in the do not section, notice the tag latest. We started using that; we thought that was great, we're getting the latest and greatest. What I didn't realize at that point was just how tags actually work for Docker images. They are not always going to be stable, and latest might not actually be pointing to the latest today or tomorrow. It changes out from under you, so it's really important to understand what you're getting. Latest may not even be a stable version, depending on where you're pulling these from.
    Also, caching. It was really interesting when we started adding new members to our team: they would build these fresh images, and they didn't have the benefit of all of the image cache on their machines, and things just did not work the same. Which makes sense, right? Just like in Java, if you were to remove your entire local Maven repository, you might have some issues. That happens pretty frequently when a new programmer comes on board and has to build everything fresh, and all of a sudden you realize, oh no, we have this dependency that we relied on that none of us realized was just coming from our local cache. The same goes for images; that can happen to you too.
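Pulling those points together, a hedged sketch of pinning (the versions shown are placeholders, not recommendations):

```dockerfile
# DO NOT: floating tags change out from under you
# FROM python:latest

# DO: pin the base image tag, or better, its immutable digest
FROM python:3.10.5-slim
# FROM python:3.10.5-slim@sha256:<digest-from-your-registry>

# Pin OS package versions too, so a fresh, cache-less build
# produces the same result (version string is illustrative)
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl=7.74.0-1.3+deb11u1 \
 && rm -rf /var/lib/apt/lists/*
```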
    All right, number four: never updating. This is a mess, and I know a lot of us are guilty of it. We've been on teams where keeping our software updated is not as much of a priority as getting the next release out, or getting the next new feature out. That has always been a push and pull between different parts of companies; some are organized differently than others, and it's just a more difficult process to iron out. It's really easy to fall into that idea of if it's not broken, don't fix it, and that really means that there are bigger problems going on. There does need to be a balance there. Updates need to be scheduled; they need to be regularly addressed. Just waiting for your build to break before doing those updates is never a good idea, not a good strategy. And why? There are three reasons I put here: security updates are important; again, security updates are important; and number three, security updates are really, really important. Block out time every development cycle to spend on this kind of maintenance.
    Some believe that not defining versions, for example using latest, is a good way to get the latest security updates, but I'd argue that, depending on the size of your project, that increases the number of changing variables during your build, and you might end up with some difficult troubleshooting there. I like to control your variables; I'm kind of a controlling person that way, I guess, but I think it's important to do, and I think it's just quicker to decide, today we are going to upgrade this version, rather than having it change randomly out from under you.
    In this particular file, notice that not only does the base image need to be updated, it's old, but there are also packages being added on top of the base image. So every time this image was built fresh, it would be building with a package that had known vulnerabilities in it. Something to consider; make sure you have a process to deal with that. So how to fix it? Obviously, decide the version that you want, get it upgraded, and make sure you're going through all of your testing cycles. I would do these kinds of upgrades by themselves, during a maintenance session, rather than have everything else, new features and bug fixes and all of that, come in at the same time. Reserve some time for this specifically.
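In practice, an update then becomes a small, deliberate, reviewable change rather than a surprise, something like this sketch (the tags are illustrative):

```dockerfile
# Maintenance change, reviewed and tested on its own:
# was: FROM eclipse-temurin:17.0.1_12-jre
FROM eclipse-temurin:17.0.2_8-jre

# OS security patches applied deliberately as part of the same
# maintenance window, not implicitly on every build
RUN apt-get update && apt-get upgrade -y \
 && rm -rf /var/lib/apt/lists/*
```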
    All right, we're almost halfway through. Five: including your build tools. I see this all the time, everywhere in examples online. When you're out looking for how to write a Dockerfile, it's really easy to come across examples that look like this. They start from some base image that includes all of the tools you need to actually build your project, and again, this isn't restricted to Java; it happens for anything out there. There are all these supported base images that just make it easy to get your project up and running, and it works, but you end up with an image that's much larger than it needs to be. You don't need all of that extra stuff in a production image. Why would you be putting Maven libraries in a production image? Also, it's just best to minimize your attack surface from a security perspective. The less you have wandering around in your image, it just stands to reason, the less you have to worry about potential security vulnerabilities.
    You might ask, why not just copy over the artifact from a previous step in the process, rather than include the whole building of the project within the image build? One argument that I've heard for that is that some teams like to include the specifics of how the project is built in the Dockerfile. They keep it as a record of how it's built, and that makes it buildable by anyone, regardless of the availability of tool sets in a local environment. So that might be a reason you would want to include Maven in there, so that someone doesn't have to install the correct version on their machine. The solution to this is really going to depend on how your team is organized. For example, there was a period of time when I was on a team that just wasn't involved in containerization of our project. We focused on building the libraries; we passed those off, and it was more of an operations chore to put that together and actually deploy into containers. So it depends on what works for your team and how you're organized. But one thing to consider is multi-stage builds. That way you can get the best of both worlds: you can get a smaller image without those build tools in the end. Here's an example of what that might look like. In the do section there are two FROM statements. That whole first part is the build of the project, and when you hit the second one, that is actually all that is going to be included in your image. This is something you should really focus on doing: only include the pieces and parts you absolutely need for your application to run in production.
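A minimal multi-stage sketch along those lines (image tags and paths are illustrative):

```dockerfile
# Stage 1: build with Maven and a full JDK
FROM maven:3.8-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: only the JRE and the built artifact ship
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /build/target/app.jar ./app.jar
CMD ["java", "-jar", "app.jar"]
```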
    All right, number six: using external resources. This is another one I see all the time, and it's pretty blatant out there. I see a lot of curl statements in Dockerfiles; I see a lot of wget, where you need to connect to some external resource to get the artifact or dependency that you need. One example we had in our project: there was a proprietary piece of software that we needed to install in the container, and part of that process was just to curl out and get it. But we were just pulling in a script, and that script was running; we just ran it. You wouldn't do that on your own machine, would you? Would you? That's something I would be very careful about. The first thing I would do, if I were presented with a script like that, is go get it and look at it, see what it's doing, before just running it on my machine. That's the classic going to Stack Overflow, getting instructions that delete your hard drive, and just following them. So at the very least, make sure you know what you're getting, and consider pulling it into your own binary management system, like Artifactory. This way you can not only pay attention to what it's doing, but you can also control when it gets updated. You will notice when it gets updated, and then you can update at your own pace, and not at the whim of the script owner or maintainer. In our case, this was noticed especially when the maintainer of the script just moved it, so we were curling out to something and not getting anything back, and we had to contact the maintainer of the project to get all of that fixed. That's when it occurred to me that this probably wasn't the best relationship to have.
    So, the do and do nots here. This is just an example of pulling that script in and owning it: making sure you are in charge of those updates, and in charge of what is actually happening with that script when it is running, that kind of stuff.
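A sketch of that do and do not (the URL and paths are hypothetical):

```dockerfile
# DO NOT: fetch and execute a script you do not control
# RUN curl -fsSL https://example.com/vendor/install.sh | sh

# DO: review the script, vendor it into your own source or
# binary repository, and copy in that known version explicitly
COPY vendored/install.sh /tmp/install.sh
RUN sh /tmp/install.sh && rm /tmp/install.sh
```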
    Trust, but verify, again and again and again. We take for granted this stuff that we pull in. There are so many binaries out there now, a bunch of open source projects, and we are all in this position where none of us really wants to reinvent the wheel. We want to use what's already there so that we can build bigger and better things, and I get that, I totally understand that. But a lot of times we're being a little bit careless about what we're pulling into our projects. And it might not even be a problem with a vulnerability, so to speak; it could be just a licensing issue that could come up and bite your company later. So if you aren't aware of what your developers are pulling in, what they're using in the project, if you aren't restricting that in some way, or maybe restricting isn't the right word, monitoring it, you can definitely count on that coming back to bite you at some point.
    all right seven this one’s pretty obvious hard coding secrets
    not so obvious hard coding config this is something that we struggled with because we really didn’t have a good
    pipeline at the time that was something we were building from scratch for this particular project and learning how to
    do that with a container and you know where to put that in our build pipeline
    and stuff [Music] this is an example of something i saw in
    a docker file and see if you can figure out what is happening here
    and maybe what the intent was here if we just go through it step by step
    you see that there’s a base image okay then we’re doing some updates we’re
    doing some upgrades of os packages all right and from what we’ve learned already we’re we might be concerned
    about updates that we don’t want right now so that would be you know something to take a look at but look at that third
    line what is going on there and this isn’t exactly the docker file but it is
    certainly modeled on one that we came across on our team
What happened here is that someone responsible for building the base image, oops, added some secrets they shouldn't have. But instead of rebuilding the base image and providing a new version, they thought it was sufficient, during the build, to just remove that file, to remove the parts they didn't want to be there. Once you start learning more about how layering works with Docker images, you know this is not a good solution. If you were to exec into a container built from an image like this, you wouldn't see that file at the end, but it's still there on the system, in an underlying layer. So it's always best to rebuild the base image. Now, I'm sure there was some good reason why this base image wasn't rebuilt, but understanding that this particular fix wasn't really a fix was more important to us at the time. As far as config and hard-coding config:
The port that's exposed here mattered to us when we started adding other environments. We had some restrictions on port requirements, things like that, going through our testing phases, our integration tests, QA, and finally a production environment; sometimes that port needed to change. I'll go ahead and move to the do section: it's much better to expose that as an environment variable that you can pass in, rather than hard-coding it. It's very future-thinking, kind of future-proofing your image development. In this case, too, we absolutely required that the base image be rebuilt, so that we weren't going through with that silly line of trying to remove secrets that would just live on in an underlying layer.
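A sketch of the "do" version, assuming the base image has been rebuilt without the secret and the port is passed in at build time; all names here are illustrative:

```dockerfile
# Illustrative sketch: rebuilt base image, no secret to "remove", port passed in
FROM internal/base-image:1.1

# Override at build time: docker build --build-arg APP_PORT=9090 .
ARG APP_PORT=8080
ENV APP_PORT=${APP_PORT}
EXPOSE ${APP_PORT}
```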
All right, number eight: just doing too much. There's a point, especially in the beginning, when you might start thinking that your Dockerfile is just a glorified shell script. It is not. It is meant to be a description of the blueprint that you're going to launch containers from. It's not supposed to be littered with a bunch of actions and processes that really belong in a build pipeline or a test pipeline. This is an example of what I'm talking about, and again, it comes from not quite understanding what it means when Docker creates layers and then caches from those layers. Take a look at this particular file.
Once again, this was an interesting one: we had a base image, and someone was trying to remove an old package for the purposes of installing a new one. That's another candidate for just rebuilding the base image; that's really what needed to happen there. But down at the bottom, can you figure out what the problem is? It looks like there is a script that does some cleanup of a database, and then unit tests are run.
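Roughly, the file being described would look something like this contrived sketch; the package names and script paths are invented:

```dockerfile
# Contrived sketch -- package names and script paths are invented
FROM internal/base-image:1.0

# Swapping packages here is a sign the base image should be rebuilt instead
RUN apt-get remove -y old-package && apt-get install -y new-package

COPY . /opt/app

# Pipeline steps hiding inside the image build -- not idempotent operations
RUN /opt/app/scripts/cleanup-database.sh
RUN /opt/app/scripts/run-unit-tests.sh
```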
Think about what happens when a test fails and you then try to rebuild this image, and think again about how caching works. Let's say the test fails and all you do is fix the test, then you try to rebuild. Well, the build has already gone all the way past that cleanup-database line; it's cached. Now you're going to run the unit tests again on top of that cached state, and you're going to have some problems. Side effects are all around, whether or not you know you're caching something unexpected. We had some crazy effects when we did things like this in our Dockerfiles. And of course this file is contrived for simplicity, but it was again modeled on another file that I worked on. Most importantly, a Dockerfile should contain idempotent operations only, in order to provide repeatable builds.
So the dos and don'ts of this: basically, don't treat your Dockerfile like a shell script; it is not one. Rather than running integration tests during the build of a Dockerfile, consider launching a container and running tests against that container in another environment meant for integration testing. That would be a much better approach. And keep your Dockerfile as simple as possible.
Okay, we've talked enough about caching, and I added this one: unnecessarily busting the cache. There are times when you do not want the cache, because you want to verify that the build from beginning to end is legit, that it's going to work in any scenario, in any environment. But there are times when you do want to benefit from the cache, and the one I can think of most is as a developer. If you're repeatedly building images on your machine, it would be nice to benefit from that cache, just like, as a Java developer working with Maven for example, you benefit from your local Maven repository. It's nice not to have to repeat all of the downloading of things you already have. We had many instances of this, and I see a lot of it in projects working with npm, trying to figure out how to utilize the cache without downloading everything every time you run the build for an image.
Performance really takes a hit; you want to minimize the time for images to be pulled or pushed across the wire. As layers are created, if any of those layers changes, that layer has to be regenerated, and every layer after it also has to be regenerated. So take a look at this file. It's very simple, and I named things so that it makes sense what's going on. We have our base image. We're copying in a dynamic file from our work environment, meaning a file that changes a lot. Next we're copying a static file, a file that never changes, just some requirement for our application. And last, we're sending in a script that we use to start our application. So, the do and do not here:
Reorder those: make sure that your dynamic files come later in your Dockerfile, not earlier. Sometimes you might find yourself in a catch-22 that indicates you need to reorganize your folder structure or something else in your project, but in this case it's pretty simple: move the stuff that changes a lot lower down.
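As a sketch (the file names are invented), the reordering looks like this; in the original version the dynamic file was copied first, which invalidated every layer after it on nearly every build:

```dockerfile
# Invented file names; stable layers first, the volatile layer last
FROM alpine:3.16

# Never changes -- stays cached across builds
COPY static-requirement.dat /app/

# Rarely changes
COPY start.sh /app/

# Changes on nearly every build -- only this layer gets invalidated now
COPY dynamic-config.json /app/

CMD ["/app/start.sh"]
```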
Now, as Java developers, you'll probably see this a lot. If you use something like the Maven Docker plugins, you can see that class files and third-party libraries are actually separated from each other, and if you could see the Dockerfile, the first layers would include the third-party libraries, things that aren't likely to change as often as the class files that you put together and that are in active development. So that's one way to improve performance when you are building Java in containers: making sure that those dynamic files come in as late as possible.
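In Dockerfile terms, the layering such a plugin produces would look roughly like this; the paths and main class are hypothetical:

```dockerfile
# Hypothetical sketch of plugin-style layering for a Java application
FROM eclipse-temurin:17-jre

# Third-party jars: change rarely, so this layer caches well
COPY target/dependency/ /app/libs/

# Your own classes: in active development, so they come last
COPY target/classes/ /app/classes/

ENTRYPOINT ["java", "-cp", "/app/libs/*:/app/classes", "com.example.Main"]
```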
Okay, I'm leaving the best for last. I had the pleasure of meeting Thomas Andolff from Truesec before his interview at the DevOps Speakeasy this morning. I don't know, have you all found where that is? It's right where the massages are given; surely you've all found where the massages are being given out, and the DevOps Speakeasy is right next door to that. We chatted a little bit this morning about what he is speaking about; his talk is tomorrow at 3:35 in room C3, called "Quickie: The Top List of Securing Your Developer Workstation." As we chatted, he asked me if I could please help lead the charge to get rid of running processes as root inside containers. Processes in a running container are just like any other process on the system, albeit with a few constraints, but they should not run as root unless absolutely necessary. Doing so opens up the possibility of those processes escaping their intended confines, and of someone unintended gaining access to host resources.
Reduce that attack surface by following the principle of least privilege, and treat those processes like you would any other in your application: monitor their use, and make sure you have the appropriate user and group running them. You can create a user inside the Dockerfile, and you can also specify a user when you launch a container. I like it being inside the Dockerfile, though, so that you don't forget, and you don't have to constantly look to see whether it's there. That's especially the case for base images: when you're pulling in base images, pay attention to whether they bring the root user along with them. Here's an example; I mean, in most of the examples I've shown you so far, there was nothing about creating a user or a group and controlling any of those processes, and here we are again: my start script is just going to run as root.
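A minimal sketch of the alternative, modeled on the kind of pattern you see in the Spring Boot container documentation; the user, group, and jar names are illustrative:

```dockerfile
# Illustrative non-root pattern -- user, group, and jar names are invented
FROM eclipse-temurin:17-jre

# Create a dedicated system group and user for the application
RUN groupadd --system appgroup && useradd --system --gid appgroup appuser

COPY --chown=appuser:appgroup target/app.jar /app/app.jar

# Everything from here on runs as the unprivileged user
USER appuser
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```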
I feel bad bringing this up sometimes, just because it's something we talk about over and over again, and I feel like everyone should know it. But on the next slide you'll see a statistic that's pretty scary. Here's an example of how to do what I'm saying, modeled after some examples I've seen with Spring Boot: they often create a new group and a new user, and use that group and user for the rest of the build. Same with Postgres: the Postgres container does that, and a lot of other containers will do this for you. It's always good to check, though: find the source of those images and make sure that's what's going on. So, there is a report that came out recently, the 2022 Cloud-Native
Security and Usage Report by Sysdig. Sysdig is a pretty popular monitoring company. The data came from three million containers that they were observing, and 76% of those containers were running as root. So even if everyone has heard about this and understands it, maybe we're not taking the risk seriously enough. Hopefully that number will go down, but for now, I think maybe folks who are building containers just don't completely understand it. I have a resource at the end that I'll share with you as well. It's dated now, from 2017 I think, but it's a blog describing an example of how this can happen, how you can get yourself in trouble by not assigning a user and group. I love articles and blogs like that, because I'd rather not just be told; I want to be told why. Like today, when Matthias told me not to wear my hat, he also told me the reason why: no one would be able to see my face because of the lighting. Only then was I ready to take my hat off and do the right thing.

So, mind your dos and don'ts. That's all I have for you today. Remember that these ten things are not an exhaustive list, but I think they are the most important. I hope all of these items will encourage you to be thoughtful when you write your own Dockerfiles. Also, I would not be a responsible Frogger if I didn't invite you to visit the JFrog booth. We're giving away a Star Wars Lego set, so you can either go to the booth to get this QR code, or take a picture of it now, and sign up for the raffle. And then visit the DevOps Speakeasy. We had some incredible interviews all day today, and we're going to be doing the same all day tomorrow. That space is open; you can sit and listen to the interviews. They will show up on Twitter, and eventually on the DevOps Speakeasy YouTube channel.

These are resources that I just put together. Obviously, the Dockerfile documentation at Docker is the best; it has improved quite a bit over the last few years. Definitely point someone in that direction if they're new to containers. I know not everyone is using Docker as their tool, but it is a very good starting place.
Learning about official images: there's a place where you can find Docker Hub's official images. Let me give you a hint, because it took me a while to figure out. If I really wanted to break into a base image and figure out how it was made, I wasn't sure how to do that. There is a cool tool called dive that you can use, which will give you some indication of what's inside those images, other than launching a container and exec'ing in and poking around, which could take a while. But if you Google "docker official images" and look for a GitHub link, that's where you start. You navigate into that, find the project you're interested in, and go hunting. They're all built the same way, which is good; they follow good practices, so it's good to use them as a standard for how you might want to build yours as well. There's always a maintenance link to another GitHub project, and that is where you will find the Dockerfile itself for the base images you're curious about. For multi-stage builds, I included a link there. There's also the article about running as root that showed an example. I was pretty impressed with the Sysdig report; if you're curious, there's a lot of good information in there about other statistics around images and containers running in production today. And I wrote an article on DZone about storing your images, and what you might consider when you're looking for an image registry, some place to put them. There's some good information there too. All right, thank you very much.
This is great. Are there any questions? I can't see you very well, but I can hear you if you shout them out. Pardon? Podman? Yes, there are other solutions. What's your development environment? Are you on Linux? Yes, so that's the advantage there. On Windows and Mac, I don't think it's very easy yet to run Podman or anything else and make that work. On Windows, though, with WSL2, you could use Podman in there; you should be able to do that. That is another option.
Does anyone use the Maven plugins for Docker, or anything like that? No one? Anyone use buildpacks? Something I discovered when I was playing around with the Maven plugins: the idea there was to abstract the whole thing away. Why do we all need to worry about writing Dockerfiles and learning all this stuff, when we already write Java and we know our application? That was one of the advantages of creating these plugins. But what I discovered was, first of all, that the way the image is split up into layers might not always be the way you want it. It's something you need to, again, trust but verify; sometimes it turns out okay, sometimes it doesn't. Also, one thing I noticed: the particular plugin I was using actually ended up using Paketo buildpacks underneath. That's all fine and good, except that when I went to look at my image cache, there were other images that got pulled in that are not part of my application; they're needed in order to run the buildpacks to build your image. What you might want to consider there is not relying on the defaults: make sure you're setting up the plugin to use the version of the buildpack that you have vetted and put in your own image management, so that you can control it.
All right, anyone learn anything new? Awesome, that's the best part for me. As long as you go away with something valuable that you can take back to your team, that makes me happy. All right, everybody have a great night. I don't know what you all have planned next, but I'm sure there are going to be some celebrations going on, and I'll see you all tomorrow.