End-to-End DevOps for Containerized Applications with JFrog and Docker [swampUP 2021]

Melissa McKay, Developer Advocate at JFrog, and Peter McKee, Head of Developer Relations and Advocacy at Docker

June 27, 2021

2 min read

Join Melissa McKay w/ JFrog and Peter McKee w/ Docker to learn how to manage and secure software releases and build CI/CD pipelines with the JFrog DevOps Platform and Docker. Try Artifactory and Docker, and get a limited-edition JFrog shirt: https://jfrog.co/3j5PveD
Melissa is a long-time developer turned international speaker and is currently a Developer Advocate for JFrog, Inc., with a mission to improve the developer experience of DevOps methodologies. Her background and experience as a software engineer spans a slew of languages, technologies, and tools used in the development and operation of enterprise products and services. She is a mom, software engineer, Java Champion, Docker Captain, huge fan of UNconferences, and is always on the lookout for ways to grow and learn. She has a passion for teaching, sharing, and inspiring fellow practitioners, and you are more than likely to run into her on the conference circuit.
Peter McKee is the Head of Developer Relations and Advocacy at Docker and maintainer of the open source project Ronin.js. Originally from Pittsburgh, PA but currently residing in Austin, TX, Peter built his career developing full-stack applications for over 25 years. He has held multiple roles but enjoys teaching and mentoring the most. When he’s not slapping away at the keyboard, you can find him practicing Spanish and hanging out with his wife and seven children.
Are you struggling with how to set up your development and deployment pipelines? Are you following best practices in managing your containerized applications and all of the artifacts that compose your software releases? Utilize DevOps best practices to manage your containerized apps through your development, testing, and production environments. Learn how to automate and orchestrate with JFrog Pipelines and Docker Compose, and how to distribute immutable releases across the globe from code to edge. During this session, Melissa and Peter will demonstrate DevOps methods and tools that will ease your software's traversal through your entire development lifecycle and highlight solutions for common pain points.

Video Transcript

Welcome, everybody, to swampUP
and to this talk on end-to-end DevOps for containerized applications with JFrog and Docker.
We’re excited to have you here today.
I'm excited because Peter is with me today from Docker.
He and I are going to get together: I'm going to share what knowledge I have about JFrog
and the DevOps side of things, and he's going to share what he knows about Docker and Docker Compose
and getting our apps deployed.
Awesome. So, Peter,
tell us a little bit about your background, and where you come from.
Yeah. So I’m Peter McKee. I’m the head of Developer Relations here at Docker.
I have about 25 years of experience at different levels in organizations,
mostly just running dev teams and fingers on the keyboard, which I like to do.
And now I’m in Developer Relations, which I love, I love that role in teaching and mentoring. So yeah.
Nice.
I'm Melissa McKay. I've been with JFrog now for a year;
I just celebrated my one-year anniversary.
Prior to that I have 20 plus years of development experience also in various levels.
It's been a pretty exciting time for the dev rel world this past year.
I do enjoy the online conferences; I think we found some pretty good tool sets that work for us. And I'm
really excited to be able to do this with you, even though it’s online.
Next time, we will see you in person. That is the plan.
Yes. Looking forward to that.
Definitely.
Alright, so the point of this talk, what we want to do is just basically do a little
pair programming here; we want to go through as real-life a scenario as possible:
grab one of these open source projects out there,
and start working through it, start setting it up locally,
pretending like we're ramping up on a new team, for example, as a developer.
I’m going to talk to you a little bit about the DevOps side of things,
some things that you might not be aware of that are happening under the covers with your build.
And then Peter will follow up with some Docker stuff.
That's the point where a lot of us get stuck:
we really want to get something deployed somewhere other than our local machine.
So he’s going to help us with that.
Alright, so what do we need?
Obviously, I am a Java programmer, so I’ve got Java already.
I've got my IDE that I can work with.
But this thing builds Docker images and runs containers.
So what do you suggest, Peter, that I start with?
Yeah, definitely head over to Docker.com,
go to Products and download Docker Desktop;
you can run Docker Desktop on Mac or Windows.
And that will install the Docker Engine and the CLI. It even installs Kubernetes if you need that,
and everything is configured and set up for you and ready to go.
The only thing with the application we’ll be taking a look at today and perhaps
some of your applications you’re working on,
you can go into the resource settings in Docker Desktop and bump up the memory and the swap file
and give your VM that the engine is actually running in
a little bit more room, a little more resources so
things don’t slow down and the fans start spinning on your laptop.
Cool.
Now, everything I need is in Docker Desktop, right?
I actually did go and download it. I’ve got it installed on my machine.
We're going to be talking about Docker Compose too — is that included?
That is included along with Desktop, yes.
Before, we had docker-compose,
which was a separate executable,
but now Compose is integrated right into the CLI, so you can run your normal commands right with docker.
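For example, the two invocations side by side (a quick sketch — Compose v2 ships as a CLI plugin, so the subcommands stay the same):

    # old standalone executable
    docker-compose up -d

    # Compose integrated into the Docker CLI
    docker compose up -d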
Awesome.
Alright, lastly, we need a cloud account. We're going to be using AWS. I have a cloud account;
I just have a user in there.
What else do I need to do in order to get my stuff deployed?
Anything else I need to set up in advance?
Yeah, you'll have to set up permissions.
For that account, the role or user that you're using needs to have the policies and role
permissions set up to do various things with the context that we'll be using.
And you can head over to our docs page — we'll add the link later —
the link that goes right to the ECS integration.
For the Docker context for ECS,
we list out all the permissions that you need to be able to perform
the docker compose up on ECS.
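As a rough sketch, assuming you're authenticating with static access keys rather than a named AWS profile, the ECS context can pick up the standard AWS environment variables (the full IAM permission list is in Docker's ECS integration docs):

    # standard AWS credential environment variables (values are placeholders)
    export AWS_ACCESS_KEY_ID=AKIA...
    export AWS_SECRET_ACCESS_KEY=...
    export AWS_DEFAULT_REGION=us-east-1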
Awesome.
Alright, well, let’s just jump in and get started.
I’m excited to do this.
So I went out to the Spring community,
and they have this application called Pet Clinic.
And they have a version of Pet Clinic that is called Pet Clinic Microservices.
And I pulled down the code and at this point, it’s on my machine,
I just want to follow the instructions and get this thing launched locally.
So the first thing that I want to do…
I have this open.
First thing I want to do is go through this README file and understand exactly how this thing is built.
And I’ve already done that. So I’m going to tell you that each one of these modules inside this project
is its own microservice,
which means it gets built into its own image
and then that gets launched as its own container.
Peter, what do you think about this type of project?
Everything is in the same repository.
All of these microservices are in the same place.
Yeah, yeah. So I mean, obviously, this project is used for teaching and demo-type purposes, but
you know, there are a couple things I would think about in your own microservices setup
and your project structures.
I think some of the hardest things
in software engineering are naming variables and project structure.
What happens is you set things up, and six months later, eight months later,
you go, oh, that wasn’t great. And you try and move things around and it becomes difficult.
So you end up keeping it the way it started.
But this isn't too terrible. The one thing that we were talking about earlier,
before we started, is there's only one Dockerfile for, what, probably about 7-8 services, something like that?
And that's okay when you start out,
but what happens when each of these services starts to diverge from the others,
and each might need its own Dockerfile?
So the first thing I would do is probably take each one of these
microservices and put it into its own repo.
That’s one way to approach it.
And in there, you would have your own Dockerfile, and perhaps even your own Compose file
that will run just that microservice standalone, and the Dockerfile will build the image for you.
And then you can have a configuration repo that handles all the higher-level things, so Compose across all your microservices.
And so those are a couple ideas; I would think about maybe refactoring the structure of this,
yeah, to make it a little bit cleaner — separation of concerns, those types of things.
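A hypothetical layout along those lines (all names here are illustrative, not from the Pet Clinic project):

    vets-service/                  # one repo per microservice
        src/
        Dockerfile                 # builds just this service's image
        docker-compose.yml         # runs this one service standalone
        pom.xml

    platform-config/               # separate configuration repo
        docker-compose.yml         # composes all the microservices together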
That totally makes sense.
I mean, all of this is in the same language; it's all a Spring Boot application.
And all of these microservices are written using the same framework.
So it makes sense.
They’re all in here. And there’s a lot of reuse going on here.
However, as you just said, a lot of the point of microservices is to give separate teams autonomy over their portion of the project.
And one way to do that is to give them their own source code repository,
where they could actually maybe not do this in Spring Boot, maybe their service is written in something completely different.
So it would make sense to have individual Dockerfiles for each of those areas.
Today, we have them all in the same repo; we're going to work with that and move forward so that we can get this thing launched and running on our machine.
Awesome.
Alright, so the first thing that I need to do: I have to go into each and every one of these.
And the instructions tell me I need to build my Docker images individually, per microservice.
So I go into each one.
And the Maven POM file includes a Docker plugin that can be used,
and it is under a profile called buildDocker.
So when I run my Maven command — and we're actually using the Maven wrapper here —
and give it the buildDocker profile, it will actually go through and start building the Docker image.
I have to do this for each and every microservice.
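That build step looks roughly like this (the profile name comes from the project's POM; whether you run it per module or once from the root is up to you):

    # from a microservice module, using the Maven wrapper
    ./mvnw clean install -P buildDocker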
So I have questions about whether that is an appropriate thing to do.
But while this is here building, let's go ahead and take a look at the Dockerfile that we're using.
And let’s just walk through what we’re actually doing.
Yeah, so let me start at the top here.
So basically, a Dockerfile is a set of instructions that are executed from the top of the file all the way to the bottom,
each instruction executed one at a time.
And at the end, an image is produced and then saved to disk.
You can think of a Dockerfile almost like a shell script that you might have run before containers.
So you can install the software you need, copy binaries over, and libraries, configuration files, all those types of things, right?
And then you might have a handy script starting up your application.
So that's pretty much what a Dockerfile does.
So if we look at the top here, the FROM statement — also, just let me jump back for a second: images,
you can use images to build other images.
So if you're a Java developer, or C++, or an object-oriented developer,
you can kind of think of it as inheritance.
So we'll have a base image,
and that's our OpenJDK — you'll see it at the top there — and then we're going to build an image on top of that.
And so you can inherit from that OpenJDK image.
So that's what the FROM command is doing.
It's saying: take this other image, use that, and I'm going to build on top of it.
And then the next statement is WORKDIR, setting up your working directory; all subsequent commands after that
will be executed inside of that directory.
And then we have a build argument,
the artifact name — I'm sure that's because this applies to the various different projects, as you're passing things in and out.
And then we're going to copy the jar file inside of the image.
So you see, copy the artifact-name jar into application.jar.
And then we're going to run Java and extract that.
And then down below that, we have another argument, the Dockerize version, and then we do a wget, right?
And that's going out to the internet; it's going to pull down Dockerize,
and then it's going to untar it and change the mode to executable. Alright.
And right there — this Dockerfile is using a multistage build.
And with multistage builds, you can build multiple images from one Dockerfile.
So what we did at the top there is we set up a builder stage, where we're going to use Dockerize: we pull it down, and we change the mode on it.
And then from there, on line 15, we say we're going to start another image — that's what the FROM command does.
And we set up a working directory again.
And then we're going to copy files up into this image.
But you'll see on line 20, we have a --from=builder, and the from parameter to the COPY command
looks at earlier build steps, and will then copy things out of that image into the image you're building now.
So where we do a COPY --from=builder application/dockerize and copy it to the local directory,
what it's doing is saying: okay, go to that image we just built before, go grab that dockerize file,
and bring it into this image. Right.
And then we have another argument for the port, and we expose that port on lines 22 and 23;
we set up an environment variable.
And then we copy a bunch more files, just like we did previously.
And then we set up the ENTRYPOINT — that's the command that will get executed when we start our image as a container.
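Putting that walkthrough together, the Dockerfile looks roughly like this — a sketch reconstructed from the discussion, so exact base-image tags, layer names, and line positions may differ from the project's file:

    # ---- builder stage: explode the jar and fetch dockerize ----
    FROM openjdk:8-jre-alpine AS builder
    WORKDIR application
    ARG ARTIFACT_NAME
    COPY ${ARTIFACT_NAME}.jar application.jar
    # extract the Spring Boot jar into layers
    RUN java -Djarmode=layertools -jar application.jar extract
    ARG DOCKERIZE_VERSION
    # the wget discussed above: reaches out to the internet on every build
    RUN wget -O dockerize.tar.gz \
          https://github.com/jwilder/dockerize/releases/download/${DOCKERIZE_VERSION}/dockerize-alpine-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
        && tar xzf dockerize.tar.gz \
        && chmod +x dockerize

    # ---- final stage (the FROM on "line 15") ----
    FROM openjdk:8-jre-alpine
    WORKDIR application
    # the --from=builder copy on "line 20"
    COPY --from=builder application/dockerize ./
    ARG EXPOSED_PORT
    EXPOSE ${EXPOSED_PORT}
    ENV SPRING_PROFILES_ACTIVE=docker
    # copy the exploded application layers from the builder stage
    COPY --from=builder application/dependencies/ ./
    COPY --from=builder application/spring-boot-loader/ ./
    COPY --from=builder application/application/ ./
    ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]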
So it went pretty quick there.
It's not — you know, this talk isn't meant to be a Docker 101.
But yeah, that’s generally what we’re doing here.
Some things I would think about along with this Dockerfile: multistage builds are great,
but what you could do is make a base image of your own.
So yeah, right there, where Melissa is highlighting: we can take that, make our own Dockerfile out of it,
build that image, and then push it to a repository.
build that image and then push it to a repository.
What I would recommend too — that wget bothers me the most.
Every time you build this, it’s going to reach out to the internet and grab something.
Who knows what's at that location tomorrow?
So yeah, definitely, I think that's excellent advice: go ahead and build a base image,
grab that Dockerize version, and keep it in your own Docker registry, right?
And then most of you would probably push that to Artifactory, right?
And over there, you get version control and secure caching — all the benefits of Artifactory.
So that's probably what I would think about too. Like Melissa mentioned, I would take the wget out, maybe those types of things,
but other than that, it looks good. I think we're ready to go.
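A minimal sketch of that base-image idea, assuming a hypothetical registry at mycompany.jfrog.io — fetch dockerize once, publish the image, and inherit from it everywhere else:

    # petclinic-base Dockerfile: the wget runs once, here, never in service builds
    FROM openjdk:8-jre-alpine
    ARG DOCKERIZE_VERSION=v0.6.1
    RUN wget -O dockerize.tar.gz \
          https://github.com/jwilder/dockerize/releases/download/${DOCKERIZE_VERSION}/dockerize-alpine-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
        && tar -C /usr/local/bin -xzf dockerize.tar.gz \
        && rm dockerize.tar.gz

    # each service Dockerfile then starts from the cached copy in Artifactory:
    # FROM mycompany.jfrog.io/default-docker-local/petclinic-base:1.0

Build it once, push it, and from then on every service build pulls it from your registry instead of the internet.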
Awesome.
All right. Well, another thing that was already in this project was a Docker Compose file.
And this Docker Compose file, I guess, allows me to go ahead and launch all of these containers locally.
Yes, yeah. So Compose first started out around when orchestrators were coming about.
And Docker — we actually purchased the company and brought their tech in house,
and they built Compose, Docker Compose.
And it was just that: the Docker CLI really just focused on one image, one container at a time,
and Compose was a way to manage multiple images —
also networks and volumes and those types of things — all in one file.
So you didn't have to go on the command line and type docker run image, docker run image, docker run image;
you can put all these configurations into a Compose file and then do docker compose up,
and it will start all your services together.
And it's very powerful. You can connect different containers to different networks —
you can have separation of networks and that type of thing —
and you can connect volumes to it, those types of things.
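A minimal sketch of that kind of file (service, network, and volume names here are hypothetical, not the Pet Clinic file):

    services:
      api:
        image: my-api:latest
        ports:
          - "8080:8080"
        networks:
          - backend
      db:
        image: postgres:13
        volumes:
          - db-data:/var/lib/postgresql/data
        networks:
          - backend

    networks:
      backend:

    volumes:
      db-data:

One docker compose up starts both services, the network, and the volume together.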
Awesome.
So that allows me to launch this project locally on my machine.
But now I want to use my AWS account.
So what’s the first step that I do?
You mentioned Docker context. What is that?
How does that work?
Yeah, so we have a relatively new feature called Docker context.
And all a context is, is basically the ability to take your local CLI and point it to a different place that can run containers.
So Amazon ECS is the Elastic Container Service; it manages all the hardware underneath for you using Fargate.
You can also use EC2, but it's easiest to get started with Fargate.
And so we can set up a context locally that points into our AWS account,
and then we can run things from our machine directly into ECS,
using the commands that you know already.
So to do that, we've got to create a Docker context.
Yes. So you do docker context create — and we have different flavors;
we can also connect into ACI, which is Microsoft's container instances,
but we're going to use ECS.
So if you put ecs, that tells Docker
the flavor of the context you're using is ECS.
And then just give it a name.
And there we go.
myecs, because that's easy to type.
Yes.
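On the command line, that looks roughly like this (the interactive prompts are paraphrased):

    # "ecs" is the context flavor; "myecs" is just the name we chose
    docker context create ecs myecs
    # the command then asks how to find AWS credentials:
    # an existing AWS profile, environment variables, or entering keys directly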
All right. So I'm creating this profile.
Okay, I have some AWS environment variables that are already available to me,
so I'm going to use those, I guess.
Yep.
All right, that looks good.
Now what?
If we want to see it, we do docker context ls —
that will list out the contexts that you have locally.
Nice.
So we have default there, with moby as the type —
Moby is the open source project that's underneath Docker Engine.
And then we have the ECS one we just created, myecs.
The star points at the context that's activated,
and we want to use the one you just created.
So, yep, docker context use.
Hit enter there.
So now we’re using the one that we just created.
So now all the commands that we run will be pointed into ECS and AWS,
and we'll try and execute them in the cloud.
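Roughly what that exchange looks like (output trimmed; the asterisk marks the active context):

    $ docker context ls
    NAME        TYPE    ...
    default *   moby    ...
    myecs       ecs     ...

    $ docker context use myecs
    myecs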
Alright, so I’m using my Docker context.
I'm going to try this out — docker compose.
Yep, just like you would do locally.
And...
I'm in the wrong directory.
docker compose up.
That's where my Docker Compose file lives.
Oh — okay, that looks better.
I'm just waiting for... nothing.
Okay, now, this is interesting.
What happened here?
Yeah. So
It is on my machine — what's going on?
Yeah. So Melissa, you built the images locally, and they're just sitting locally, right?
And so when we're pointing up at ECS, it tries to spin up and look for those images,
and it can't find them, right? It's like:
I don't see them, so it defaults to looking for the latest on Docker Hub.
And so we need to either use images that are on Hub already,
or we need to tag and push them ourselves.
Yeah, Peter, that is an excellent segue into another look at this Docker Compose file
and how the images are actually set up.
You’ve explained the problem really well already.
But I'm going to reiterate here: in this Docker Compose file —
give us some room so that we can see it —
in this Docker Compose file, we have the image names specified here.
I have these images here.
I just built them, so it’s going to use them.
If it doesn't — if there isn't a cached image,
like when we're trying to run on ECS — yes, it's going to go out by default to Docker Hub.
If you don't specify anything different, another registry, then it will try to reach out to Docker Hub.
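So before the compose up can work against ECS, each locally built image has to be re-tagged with the registry's name and pushed — roughly like this, with a hypothetical Artifactory registry host and illustrative image names:

    # log in to the private Docker registry (an Artifactory repo here)
    docker login mycompany.jfrog.io

    # re-tag the local image with the registry prefix the Compose file will reference
    docker tag spring-petclinic-api-gateway:latest \
        mycompany.jfrog.io/default-docker-local/spring-petclinic-api-gateway:latest

    # push it where ECS can find it
    docker push mycompany.jfrog.io/default-docker-local/spring-petclinic-api-gateway:latest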
So in this case, I thought ahead a little bit.
So I have another branch here that actually made some changes to this Docker compose file.
And let’s take a look at it really quick.
Okay, so the first thing I did was with the version of the Docker Compose file:
we were sitting on version two, and that was pretty old.
There have been a lot of updates and new features added since then.
So I upped the version.
And then I also went in and I specified the images.
The images that I have actually built and pushed to my Artifactory instance.
I’ll show you that in a second.
So yes, I specify my own private registry here so that these images can be found when I do the docker compose up to ECS.
So the other thing I had to add was the Docker pull secret.
ECS needs to have permission to pull these images from my Artifactory instance.
So I went ahead and created an environment variable, in this case, for the secret that I have set up with my AWS account.
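The edited service entries look something like this (a sketch — registry host and variable names are hypothetical; x-aws-pull_credentials is the ECS integration's field for a pull secret):

    version: "3.8"
    services:
      api-gateway:
        image: mycompany.jfrog.io/default-docker-local/spring-petclinic-api-gateway:latest
        # ARN of the registry credentials stored in AWS Secrets Manager,
        # supplied through an environment variable
        x-aws-pull_credentials: ${PULL_CREDENTIALS_ARN}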
Let’s see, let’s try our Docker compose up now and see how it goes.
Alright, it’s going to go ahead and attempt to create everything that you need in your AWS account.
This usually takes a long time, and we’re not going to spend the time to watch it through to completion.
But let me show you at least what it looks like in CloudFormation.
So the first thing it does is create a stack here, and you can navigate in here and see everything that's going on;
you can check out the events. All of these things need to be created in order for this Pet Clinic app to be up and running in ECS.
All right. Now, very quickly, I want to show you
what my instance looks like.
This is just the free tier instance that I got from the jfrog.com website.
And I just want to show you: I have all of these default repositories created already for me.
This is a very fresh instance; I haven't done much with it.
And so one of these repositories is the default docker-local repository.
This is where I pushed all of my images.
So you can see I've got the images that I specified in that Docker Compose file — they all exist here.
Um, another thing, I guess, at this point: I know that we, you know, have our images up there,
we used Docker Compose, we're creating everything in ECS —
this is all very, very cool.
But that was a lot of manual steps for me: to build the Docker images on my machine,
and then to push them into our repository, and then to manually do the docker compose up
in order to start the deploy.
So the next step here would be to focus on automating that whole task.
And we have this feature called Pipelines in the platform.
And what you can do: you can actually set up various integrations
so that you have access to a GitHub repo — or other source control repo — to AWS, and to your Artifactory instance.
You can also set up your AWS pull secret in here, which is what I've done; a generic integration is a good way to manage your secrets.
And then, by creating a pipeline — I have a pipeline source in here that exists in one of my GitHub repos —
I can trigger it to run by, you know, checking something in to that GitHub repo.
And then that run will go through the steps of building the Docker image, and then
moving that Docker image from your development repository into a potential production repository,
and then pulling from that production repository up into ECS through Docker Compose.
All of that can be done in a pipeline script.
It's just a YAML file, but pretty full-featured.
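A heavily trimmed sketch of what such a file can look like — DockerBuild and DockerPush are JFrog Pipelines native step types, but the resource names, integration names, and paths here are hypothetical:

    resources:
      - name: app_repo
        type: GitRepo
        configuration:
          gitProvider: my_github
          path: myorg/spring-petclinic-microservices
          branches:
            include: main

    pipelines:
      - name: petclinic_ci
        steps:
          - name: build_image
            type: DockerBuild
            configuration:
              dockerFileLocation: docker/
              dockerFileName: Dockerfile
              dockerImageName: mycompany.jfrog.io/default-docker-local/api-gateway
              dockerImageTag: ${run_number}
              inputResources:
                - name: app_repo
              integrations:
                - name: my_artifactory
          - name: push_image
            type: DockerPush
            configuration:
              targetRepository: default-docker-local
              integrations:
                - name: my_artifactory
              inputSteps:
                - name: build_image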
So definitely take a moment and go get your platform instance set up.
There’s a really good Getting Started section even within the UI that you can run through to learn more
about pipelines and integrations and all of that stuff.
So all of this is pretty awesome.
I'm excited that now, as a developer, I've got my development workflow on my local machine.
And then I can push stuff up to Amazon ECS pretty easily, just using Docker Compose —
something I already use to launch my images locally.
It’s pretty convenient.
I'm going to leave you with some helpful resources.
These are just links that give you more information on what both Peter and I have talked about today.
The cloud Getting Started link is actually sent to you when you sign up
for a free tier instance of the JFrog platform.
It contains a ton of tutorials and information on all of the different parts that you can use
to set it up and make your workflow easier on you.
I included links to Artifactory specifically, as well as to Pipelines, and then the Docker information as well.
We discussed Compose today, working with contexts, and then the ECS integration specifically.
So take a moment and play with this — I guarantee it's pretty good stuff.
Nice stuff to put in your toolbox as a developer.
Thank you very much.
Enjoy the rest of your time at swampUP.
There are lots of good sessions, lots of good speakers.
Make sure to get on the chat to ask your questions;
Peter and I will be here to answer any questions that you have about any of this stuff.
So feel free to reach out.
Thanks.