Microservices can be hard; many container best practices are still being discovered. This session helps minimize the learning curve with container orchestration, specifically Kubernetes, by bringing DevOps best practices into the mix. Go from zero to DevOps superhero just by selecting container tooling specifically built to simplify the process. Learn how these tools can provide better orchestration for cloud services, abstraction and encapsulation for your microservices deployments, and visibility into what runs where and why.
So hi, everyone. We will get started, though, in two minutes. I want to make sure. I guess we closed the door. We can get started maybe two minutes early just to make sure, if that’s okay with people in the back. Am I good? All right. I’m going to go ahead and start my Google clock here so I can keep track of myself. All right, so welcome. Hi, everyone. Okay, I came all the way from East Bay, so you guys are going to have to do a little bit better than that. How is everyone doing?
And hi. Yay! All right, so this is from zero to DevOps superhero, the container edition, and I’ve given this session a few different times. It’s been different each time, but I really want to focus on how, if you’ve already adopted DevOps, you’re already doing things like liquid software, how can you sit here and apply those practices to containers, and why do you necessarily care? Maybe you’re already doing containers and you’re trying to figure out best practices, how I can streamline this even further, how I can have some visibility and traceability into what’s going on. This is going to be the session for you, and we’re going to start from scratch on the container and Kubernetes side. So we will be working with Kubernetes as the session description suggested. Before we get started, because we have about 47 minutes since I started two minutes early, I want to make sure that we level set and really set up an agenda for what we’re going to cover in the session.
As much as the title suggests from zero to DevOps superhero, I can’t make anyone a Marvel superhero comic in 45 minutes, and truly, I don’t have superpowers, so I can’t do that in any amount of time, but what I can do is get you thinking, get you excited, and give you some resources where you can go play with this stuff in your own time. This session was really designed to show you what’s possible.
Now, let’s talk about our industry. Let’s talk about where we’re at. Life as we know it runs on code. You cannot do anything in your life without code. I think that this slide is pretty interesting, because even a car, and I apologize, I couldn’t get the slide to be bigger, but a luxury car, and I’m pretty sure a Prius or a Tesla is probably the same thing, uses 120 million lines of code to operate. It actually uses more than a space shuttle, which runs on about 400,000 lines of code, which I found interesting. You even have a simple traffic system, when you walk out and cross the street, and that uses five million lines of code. Now we’re starting to get even further into advancements with health care and open source technology, and actually somebody I work with has an artificial pancreas that’s been keeping him alive for three years, and his artificial pancreas is open source and uses 160,000 lines of code. His name is Scott Hanselman, a little unknown person, you may have heard of him, but life really does run on code, and for Scott, his life depends on it.
So for those of us in technology, for those of us who are software engineers, operations engineers, jacks of all trades, how do we deal with this? How do we scale from liquid software? How do we adjust our pipelines to deliver with containers, with serverless, with web apps, and how do we figure out what the right solution is? Is there a one size fits all? You have your intelligent vehicles now. Every single time you get a vehicle, there’s some sort of smart technology. We have multiple devices at home, whether it’s an iPad, an iPhone, multiple computers. Even my irrigation system, I just updated, because the old one died and was really hard to manipulate, that’s now smart and wifi connected. There’s so many different pieces that we rely on, and we also rely on waffles, which we’re going to be talking about.
I plan to talk about waffles as much as I possibly can in a tech talk today, and we’ll see how that works out for us.
So just by show of hands, how many people in the room feel comfortable identifying themselves as a developer or software engineer? Wonderful. How many people feel comfortable identifying themselves as an operations engineer or a systems administrator? Cool. We have a healthy balance, so that’s fantastic, and then how many people wear whatever hat their boss tells them to wear?
Everyone’s hand should go up, or we need to have a conversation with your boss. I’m just saying. So in order to really kind of deal with this life runs on code, and even get to the conversation about containers, of DevOps, and how to do things in a superhero fashion, we need to make sure we’re on the same page as our counterparts. We need to make sure that we are all understanding where the other person lives. So for the developers in the room, maybe you’ve felt this way before. We often feel as though we need to create applications at a competitive rate. My boss is constantly saying, deliver faster. Push faster. Constantly iterate and be agile as you possibly can. But I can’t do that if IT or operations engineers are standing in my way. It’s not early 2000s anymore. I can’t take a zip disk and go over to the server room and pop it in and take a hammer and make my code work. I have processes now, and when I do hand you my code to the systems administrators and ops engineers, when you tell me it doesn’t work, that’s impossible. It worked on my machine, and as a result, I feel as though my ability to innovate and my productivity then becomes suspended, because I’m waiting on IT.
Don’t worry, for the SysAdmins in the room, I spent 10 years as a systems administrator before I went to the other side, and then now happily bridge the gap, but from the server side, or the systems administrator side, I can’t just give you unsolicited access, right? I have to worry about server protection and corporate compliance. So as much as you want to iterate as fast as possible, I’m worried about run time and making sure it’s up, so I don’t want you to iterate fast at all. So in turns out we start to conflict, right?
But since I’m not a developer, I innately don’t know how to integrate your software into my infrastructure, but at the end of the day, I can’t focus both on server compliance and application compliance, so I need to work with you, not against you. As it turns out, developers and operations engineers are both incentivized to compete against each other, but if we have an aligned goal at the end that we’re both running towards, we’re no longer fighting, like, “This is Sparta, I’m going to sit here and turn it into a Marvel movie, maybe it’s Endgame.” I’m no longer going to sit here and fight against you. I’m now going to run alongside you toward a shared, common goal.
So that’s why I love this next slide, because we talk about IT stress points, but it’s not just IT or the IT person or systems administrator or information technology guru that I’m fighting with. This is if you touch software, if you write software, if your life runs on code and you develop that code for that life and that sustainability. These are all of our stress points, and therefore, it’s our collective responsibility, and that really sets us up for why DevOps is important, why JFrog has this conference every single year, and why we have the topic of liquid software. So this is the same definition that Microsoft stands behind, and my team at Microsoft, which is a DevOps team, stands behind. DevOps is the union of people, process and products to enable continuous delivery of value to our end users. Continuous delivery of value. The most important word on that slide is value. If we set value as our common goal, developers and operations are no longer fighting against each other. We’re both aligned and unified with the shared goal of delivering value to end users, and more importantly, if you’re not delivering value, really, what are you doing and why are you doing it, and how do you know if you’re delivering value? Do you have insights, telemetry, any kind of traceability to help you understand? Especially when you start to adopt complex situations and distributed systems, like Kubernetes.
Kubernetes is only five years old, and it’s taken the world by storm. So many people are running it in production, and they don’t know what they’re doing or even how to trace where their pods are, where their namespaces are, whether they have managed identities, or how to roll back. They’re running without any kind of idea as to the value they’ve implemented or they’re offering.
Now why containers? Why are containers so important? Why is everyone so gung ho about it? As it turns out, containers actually have a lot of DevOps practices tying into them. We talked about the bottlenecks for both developers and operations engineers. Containers directly address that. For a developer, it enables this write-once, run-anywhere microservice architecture, and for the operations person, it offers portability. It offers abstraction, standardization across my environments now. I don’t have to worry about my version of Java in dev differing from the one in production, and now that’s going to be a problem. I have higher compute density. I can scale. With containers and DevOps practices and setting up my pipelines in a certain way, I can automate that scale, either vertically or horizontally, and I can scale out or scale in.
Now, just for those of you who may have been living under a rock or got caught in Thanos’s snap for five years and don’t remember what a container is, I just want to level set again. Remember, it is not a virtual machine, and in fact, it’s not even a real thing. It’s an application delivery mechanism with simple process isolation, and if you have more of a Windows background, it’s actually built on Linux kernel features, so you have things like namespaces, which are what your process can see, and cgroups, which are what your process can use. That’s things like your CPU and your memory. I like to tell people, containers are pretty much just a waffle, and that’s why I said we’re going to talk about waffles as much as possible. Now, here’s why. If I actually take the butter off and I have just my waffle, which is just bread, it’s kind of boring and dry. I really want to have syrup, butter, whipped cream, I mean, go all out if I’m going to go eat waffles. But I’m doing that because essentially the waffle becomes a delivery system for that happiness to my body. All the container is, is a delivery system for value to your customers.
So containers are like a waffle. Now remember, containerization dramatically differs from virtualization. With virtualization, you’re going to have a hypervisor on top of bare metal, or on top of an OS, and then you’re still going to have a guest OS, your dependencies, and your applications to manage individually, whereas a container is going to sit on top of your Docker engine that sits on top of a host OS, and within your container, you have that encapsulated, process-isolated environment, with just the dependencies you need to run your application.
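To make that concrete, here is roughly what running a containerized app looks like from the ops side: a single isolated process with its own resource limits, sharing the host kernel. The image name and limit values are illustrative, not from the demo:

```shell
# Illustrative only: run an app image as an isolated process with
# explicit CPU and memory limits (enforced via cgroups), sharing the host kernel
docker run --rm -d -p 8080:8080 \
  --memory 512m --cpus 0.25 \
  myorg/chattybot:latest   # hypothetical image name
```

There is no guest OS to boot; the container starts in roughly the time the process itself takes to start.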
So a refresher on container layers. We all remember how you have your layers in the middle. Those are your read-only layers. Be careful what you put in there. I can’t tell you how many images I see that have SSH keys and PATs (personal access tokens) baked into them. Probably not a good idea. Utilize multi-stage Dockerfiles when you can. Pass information and make sure you’re not publishing stuff that shouldn’t be published. Your container layer is where your read-writes are actually going to happen, so again, we’re back to food, because it’s pretty much like a cake. Consider where you’re putting your important information in those layers, and make sure what you’re putting in those layers is valuable.
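As a sketch of what “multi-stage” means in practice: the build stage, and anything copied into it, stays behind; only what you COPY into the final stage ships. The base images, paths, and jar name here are assumptions, not the demo’s actual Dockerfile:

```dockerfile
# Stage 1: build. This stage (and any secrets or source it sees) never ships.
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY pom.xml .
COPY src ./src
RUN mvn -q clean package

# Stage 2: runtime. Only the jar crosses over from the build stage.
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=build /src/target/app.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Nothing from stage 1 ends up in the published image layers, which is exactly why baked-in credentials in a single-stage build are so dangerous by comparison.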
We’ve talked about the problems for developers and operations, so remember the advantages for the two. Containers offer the developer fast iteration, an agile delivery system, and something that’s immutable and portable, and then the ops person gets to benefit from that portability as well, because it offers greater cost savings. I now have a more efficient deployment because of that standardized environment, and I can even do things like cloud bursting, where I can go from the private cloud to public cloud to multi-cloud. I have a lot of ability to scale, so as much fun as I’m sure it is to sit here and watch me talk with my hands and use PowerPoint, let’s go ahead and get into a demo. We’ll talk about the demo afterwards. I’m going to assume that some people in the room have been working with containers, have some experience with Kubernetes. If not, I will have a refresher after, but this is really your clickbait to stay in the room, because I’m going to show you something super awesome right now.
So to get started with this, I actually try to genuinely listen to all the feedback I get from all of my sessions, and I read every single bit of feedback, though I will tell you, no matter how many times people put it in there, I cannot control the temperature in the room, the smell of the room or anything that facilities controls, so when you put it in there, know that I empathize with you, but it’s not exactly helpful feedback. Now, if you tell me you like my shoes, that is helpful, because it reinforces me to go buy more shoes.
However, one of the big things that I heard in my feedback was, “We’re tired of hello world. It’s great that you can talk about kubernetes and containers, and use all these tools, but that only works in hello world. There’s no way it can work in a real world application.” So I sat and thought about the most complicated application I could think of, especially when it’s one that I didn’t write. I have no visibility into the code. I have no visibility into how it was structured. I went and found somebody else’s Java project, not even knowing what Java versions, dependencies, anything, forked that, and tried to see if I could use these tools with it. Let’s see if it works.
So you’ll see right now, I have a Java project right here. I don’t have a Dockerfile. I don’t have Kubernetes manifests or Helm charts. For those who don’t know, Helm is the package manager for Kubernetes. You could think NuGet, pip, apt, APK, we can keep going, Maven, there you go. I don’t have any of that. I’m going to use a tool called Draft. Draft works in conjunction with Helm to simplify my Helm chart creation. You can also do things like that with Skaffold. I personally prefer Draft, and I’m doing this on a local Kubernetes cluster, by the way.
So I’m simply going to do draft create. The cool thing about Draft is I can get started with truly just two commands. I don’t need any more than that. Draft create just added a ton of files to my workspace. For one, it gave me a Dockerfile, and you can see the Dockerfile over here is actually a multi-stage Dockerfile, so that’s good. We have some best practices in play, and I have a secondary Dockerfile stage that’s going to copy my jar file from the first stage. Now, because this is a JFrog conference, I am actually using Xray. I have Maven dependencies. I’ve set up my architecture to use Xray and Artifactory. So I’m going to go ahead and kick off locally a jfrog Maven install, and it’s going to compile my jar. This is no different than what I would normally have to do in my development. Now, as a result, I don’t actually need this first stage, because everything that first stage is doing, I’m doing either manually or I’ll do in my pipeline. Now, I will have to update a few other things. For one, my application serves on port 8080. I need to update the copy line to make sure I don’t have any typos. I’m going to go ahead and, nope, apparently I’m not. I have a fun little trick that I like to cheat with. I have keyboard strokes. There we go.
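The two commands in play here might look something like this; the JFrog CLI arguments (the config file name and the build name/number flags) are assumptions based on that era’s CLI, not taken verbatim from the demo:

```shell
draft create    # scaffolds a Dockerfile and a Helm chart for the detected language

# Compile the jar, resolving dependencies through (and publishing build info to) Artifactory
jfrog rt mvn "clean install" configuration.yaml \
  --build-name=chattybot --build-number=1
```

Because the jar gets built outside the image, the generated Dockerfile’s first (build) stage becomes redundant, which is why it can be dropped.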
So I’m going to go ahead and paste my docker file in. That just updated the port. It updated the copy line. Everything else is the exact same. So I’m going to go ahead and save that. We have the docker file done. Now, let’s play with our charts.
So the first thing in charts: I control everything through my values file. I’m going to zoom out just a little bit here so I can see some stuff. I need to update my internal port to match what’s in my Dockerfile, which is 8080. I need to make sure that I update my CPU. Like I said, this isn’t hello world. I need to be cognizant of my resource requests and limits, so I’m going to go ahead and change my CPU to 256, we’ll replace that, and I need to change my memory to 512.
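Those edits land in the chart’s values.yaml, which might look roughly like this; the repository name and exact key names are illustrative, since Draft’s generated keys may differ:

```yaml
# charts/chattybot/values.yaml (sketch; key names and values illustrative)
image:
  repository: myregistry.jfrog.io/chattybot
  tag: latest
service:
  internalPort: 8080     # must match the port EXPOSEd in the Dockerfile
resources:
  requests:
    cpu: 256m            # explicit requests/limits, because this isn't hello world
    memory: 512Mi
  limits:
    cpu: 256m
    memory: 512Mi
```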
So just like that, that looks pretty good from a local perspective. If I’m getting started and I’m trying to play with this from zero to DevOps superhero, before I check it in and get everything working in my dev pipeline that my ops team and everyone has helped me with, I need to test everything locally. I can’t use the ops pipeline as my sandbox environment. So I’m going to go ahead and clear, and we’ll go ahead and do draft up. That’s the second Draft command. Now, it’s going to fail. There we go. Because I didn’t consider my .dockerignore file, which is actually blocking out the target directory, but if we remember my Dockerfile, line six, I am copying from my target directory. So let’s go ahead and do draft up one more time, and that’s now going to build everything appropriately, though I did not save this, so hopefully that works. We’ll go ahead and do draft up one more time, so it iterates.
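The gotcha here is the generated .dockerignore: if it excludes target/, the Dockerfile’s COPY of the jar has nothing to copy. A minimal fix is just making sure target/ is not in the exclusion list (contents here are a sketch, not the generated file):

```
# .dockerignore (sketch): exclude repo noise, but NOT target/,
# because the Dockerfile COPYs the compiled jar out of it
.git
*.md
```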
So while that’s releasing and actually starting up, you can see it releases pretty quickly, partly because I’ve built everything outside of the Dockerfile. It really just had to deploy. I’m now going to actually set this up for the cloud. I’m letting the pod start up and spin up. It’s running Tomcat. It still goes slow. So I’m using JFrog. I’m going to add in some image pull secrets. We’ll zoom in here just a little bit. I’m going to need to add in my ingress, so by the way, for those of you who don’t know, in every demo I think I have an NGINX proxy. It has TLS set up with Let’s Encrypt, which auto-issues and auto-provisions certificates. There is a video at the end that will actually walk you through that, so you can do it by yourself, and by the way, the video’s only 15 minutes long. So I’m going to save.
Now, all I did was I added in image pull secrets. I’ve already created the secret in the namespace. I set up my ingress, and I set up application insights, only that data doesn’t exist in my template, so I need to go over to my template. You’re starting to see Draft gives you the framework or the scaffolding to get started, and then you can go in and kind of manipulate it or you can work with your ops teams to do that.
There’s a few things I need to change. For one, I know with the way that my Azure DevOps or my CI/CD pipeline works with BuildID, I need to actually make sure that that’s in quotes. So I’ll go ahead and update that. I need to add in the provisioning for my image pull secrets, so I’ll go ahead and add that in, and I need to make sure that I define an environment variable for my application insights, so I’ll go ahead and add my environment variable in, make sure everything is in alignment for [inaudible 00:18:17]. We’ll just zoom in here real quick. So you can see everything underneath containers. I added in my if-values-image-pull-secrets-exist check, and for those of you who are new to Helm, we use a values file to control everything that you would previously hard code. So if I highlight, you’ll see how many times values is referenced in the code. I’m just managing anything that I’m actually defining here, and then these are just variables. Helm runs on a template engine. It’s just a template that’s going to parse everything and convert it over into a Kubernetes manifest.
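The deployment template changes he describes might look roughly like this excerpt; the environment variable name and values keys are assumptions, not the demo’s actual template:

```yaml
# templates/deployment.yaml (excerpt, sketch)
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      env:
        - name: APPINSIGHTS_INSTRUMENTATIONKEY   # assumed variable name for App Insights
          value: {{ .Values.appInsightsKey | quote }}
  {{- if .Values.imagePullSecrets }}
  imagePullSecrets:
    - name: {{ .Values.imagePullSecrets }}       # secret pre-created in the namespace
  {{- end }}
```

The if-block means a cluster without registry credentials configured simply renders no imagePullSecrets stanza at all.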
So I’ll save my deployment. That looks good. Now I need to do the same thing with ingress, only if I drag this down and we zoom in, you’ll notice my ingress is actually pretty simple, right? If the ingress is enabled, set metadata. I don’t have any ability to add annotations, which I need to if I’m going to use any kind of ingress. I don’t have any ability to really define TLS. In fact, my routing rules are wrong, because service external port doesn’t exist. I want it to go to my internal port, since I’m routing through a proxy. There’s a bunch of things wrong with this particular file, so I’m going to go ahead and update it with different code. Really all I did though was change the annotations. I made sure that I can add in annotations. It’ll take care of the indents, because of the way you can manipulate your templates. I can sit here and change the service port to service.internalport. Every period is an indent in your YAML. That’s the tabs and spaces people joke about. And then you have your TLS. I’m saying if TLS is enabled, please define it.
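The reworked ingress template might look roughly like this, using the extensions/v1beta1 Ingress API that was current at the time; the helper template name and values keys are assumptions:

```yaml
# templates/ingress.yaml (excerpt, sketch)
{{- if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  annotations:
{{ toYaml .Values.ingress.annotations | indent 4 }}   # e.g. NGINX / cert-manager annotations
spec:
  {{- if .Values.ingress.tls }}
  tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
  {{- end }}
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - backend:
              serviceName: {{ template "fullname" . }}
              servicePort: {{ .Values.service.internalPort }}  # route to the internal port, via the proxy
{{- end }}
```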
Now, we’ll go ahead and zoom out. I’m going to go ahead and hit save. We’ll double check everything here. It looks like I did my quotes. I added my pull secrets. I added my environment variables. This looks correct as well. So what I’m going to do is just commit everything with a super awesome commit message of demo live. Of course, that’s how all my commit messages go, and I’m going to go ahead and push it.
Now, I haven’t forgotten that we’ve released a deployment down here. I’m pushing that to kind of save time during a demo, but I’m going to clear, and I’m going to do k get pods. K is an alias for the kubectl binary. You can see it’s been running for three minutes and 43 seconds, and you can see that if you could zoom in, but I’ll zoom it for you. You can also do the same thing with k get services, and I can see the services are up and running. I can interact with this just as I would any other Kubernetes cluster, but it’s running locally on my system, just as part of Docker Desktop. So I could do something the same way I would through any other kubectl command. I could port forward and connect to a service or connect to a deployment, or I can use Draft. I told you, Draft will help you get started with two commands, but it gives you a freebie, which is draft connect. That will actually take care of your proxy tunnel for you.
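Spelled out, those are just standard kubectl commands (aliased to k), with draft connect as the shortcut; the service name below is hypothetical:

```shell
alias k=kubectl                           # common shorthand for the kubectl binary
k get pods                                # pod status in the current namespace
k get services                            # service addresses and ports
k port-forward svc/chattybot 8080:8080    # manual proxy tunnel to a service...
draft connect                             # ...or let Draft set up the tunnel and stream logs
```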
So I can tunnel right into it, I can see the logs, and when you zoom in here to the first line, it’s connecting to my chat application on this local host address, so I’ll go ahead and highlight that, go over to Chrome, paste that in, and hopefully my chattybot is working, which it is, so I can see that. We’ll go ahead and sign in, make sure that looks good, start chatting, and JLD joined, but I want to share it with everyone else. I want to deliver value, or deliver my chattybot waffle, to everyone.
So in alignment, one of the things that I did with DevOps practices, is I did already set up a pipeline, solely for the sake of time. However, I want to show you a little bit about how I did that. For those of you who are new to maybe Azure pipelines or Azure DevOps, formerly known as VSTS, kind of like the artist formerly known as Prince, only this is technology, and in May, about six weeks ago, we actually announced unified pipelines, so previously you could do YAML builds and that was it and you would have to manually do everything for your release. Now we can do everything as part of one pipeline, and that’s what is set up here. But if I didn’t know what task I wanted for YAML, I could just search for a task, fill out the regular information, and then Azure pipelines will add in the YAML for me.
Now, there’s another cool trick that I did for this, but first off, let’s go ahead and go through what’s happening here. You can see there’s a trigger on my master branch, and I’m filling out my variables. I’m setting stages. This is starting to look very similar to any other CI/CD system that I would define in YAML, whether it’s Jenkins, even though that’s Groovy, I’m still going to define out my stages. Codefresh, Travis CI, it doesn’t matter. I’m going to have it in code, because it’s a DevOps practice, and just because we’re talking about containers and starting from scratch with containers, your DevOps practices don’t change. You’re still going to need to compile your application. You’re going to need to build it, package it up somehow, and release it somewhere. Those principles remain the same.
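The skeleton of such a unified Azure Pipelines YAML file might look like this; the pool name, variable names, and stage contents are illustrative sketches, not the demo’s actual pipeline:

```yaml
# azure-pipelines.yml (sketch: one unified YAML pipeline with build and deploy stages)
trigger:
  branches:
    include:
      - master
variables:
  imageName: chattybot          # assumed variable name
stages:
  - stage: Build
    jobs:
      - job: build
        pool:
          name: MyPrivateAgents # assumed private build-agent pool
        steps:
          - script: jfrog rt mvn "clean install" configuration.yaml
            displayName: Compile jar via Artifactory
          - script: docker build -t $(imageName):$(Build.BuildId) .
            displayName: Build container image
  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: deploy
        environment: test       # the Kubernetes environment shown later in the talk
```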
So I’ve set up a stage for build, my build steps, as you’ll notice. I’m going to zoom out here and actually go over to where I got my build steps. I took them from the visual UI, and I just went through and added tasks. All any CI/CD pipeline is is just a task runner, and I went and searched for tasks. Only the tasks that I wanted didn’t exist, because they’re for Artifactory, JFrog. I even have Slack notifications tied in. So what did I do? Keep it old school and go back to Bash.
So I’m just passing information in, I’m logging in to Docker, I’m getting the JFrog CLI onto my build server. I’m using a private build server, because it gives me more control and traceability into that value add, but you can also use a hosted one. We offer all different kinds, including, which I always love people to see, hosted Mac (macOS Sierra). If you have public projects, by the way, it’s completely free. And then the next few tasks that I’m doing, if you look at this, is jfrog rt Maven clean install. It’s the same command I ran locally. Go over here, I’m doing docker build and push. It’s the same command I would run locally, making sure Helm is installed on my build server, running helm lint so I make sure I didn’t screw up my Helm chart. It’s part of my pipeline. Helm package, package it up and push it into Artifactory as an artifact, and then publish all my build information over to Artifactory, so it’s adding my build dependencies, collecting my build environment, pushing all the information for Docker, and at the end of the day, I have some build Slack triggers. Now, I thought about overcomplicating this. In fact, I thought about it a lot. I thought about throwing it into a function, doing Lambda, doing something super cool, but why?
Does it add value? You want to know what I ended up doing? Keeping it old school and just doing a curl POST. All it is is an HTTP webhook that posts information right into my Slack channel. So along those lines, let’s go take a look and see what happened. It looks like our demo broke. Good thing I planned it, otherwise that would be super embarrassing.
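That curl POST is about as simple as notifications get: build a small JSON payload and POST it to a Slack incoming-webhook URL. The webhook URL and variable names below are placeholders, not the demo’s values:

```shell
# Post build status to a Slack incoming webhook.
# SLACK_WEBHOOK is a placeholder; set it to your real hook URL to actually send.
SLACK_WEBHOOK="${SLACK_WEBHOOK:-}"
BUILD_ID="${BUILD_ID:-42}"
BUILD_STATUS="${BUILD_STATUS:-succeeded}"

# Build the JSON payload Slack expects ({"text": "..."})
PAYLOAD=$(printf '{"text":"Build %s: %s"}' "$BUILD_ID" "$BUILD_STATUS")

# Only POST when a webhook URL is configured
if [ -n "$SLACK_WEBHOOK" ]; then
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "$PAYLOAD" "$SLACK_WEBHOOK"
fi
echo "$PAYLOAD"
```

Dropped into a Bash task at the end of the pipeline, this gives you the same one-line visibility as a purpose-built task, with nothing to install.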
So let’s go ahead and scroll down here, take a look at our Xray trigger, and right down here, Xray scan alerts were found. So I told you I was using JFrog Artifactory and Xray. I also set up policies that say if you violate this particular policy, you do not proceed on through to the merry gates of wherever you want to go. So let’s go ahead and actually check our Slack notifications too. You can see I have an Artifactory notification, Xray scan report, and pipelines. So the second Xray scan failed, it failed my pipeline. I can drill from the Xray scan report over, make sure that I’m signed in, and see my Xray information. I can see if we zoom in here, I have 74 violations. I have some high security concerns. And you want to know the biggest thing that I see? My base image is Debian, number one. I don’t like using Debian-based images. It’s a larger image. It’s a wider attack surface, but most importantly, if we remember, because I started from scratch, I’m using a public image. I have no idea what’s in there. I have no control over the security or anything else. So when we start talking about from zero to DevOps superhero, the only way to become a superhero, to become really good at things, is to remember the basics. Remember security. Remember what gets baked into that cake, into those layers.
So I’m going to comment out this line, and I’m going to uncomment mine. You’ll notice that I have my own image right here, that I added, and that I patched, and I switched to Alpine. Not only is it one quarter of the size, it’s also more secure. So let’s go ahead and add this, and say Xray patch, if I can spell, and I’ll go ahead and push. Now while that’s pushing and triggering another build, let’s go back over to Slack. Let’s take a look at our build artifacts, right? I’m using JFrog for my Maven artifacts. That means now I have a direct link into every single thing that happened with my build, and because I pushed it into a Slack channel, which you could push into a dashboard or anything else you want, I have easy visibility into going and accessing that information, which will help me figure out if something went wrong, help me plan for my next sprint, help me figure out if I’m delivering value. You can throw all the money in the world at your software, and you can try to deliver as many new features as you want. If it’s not up, if it doesn’t work, you’re not delivering any value and you might as well just throw the money out.
So you can see right here, I have three different modules. The first one is my jar files, and I can see all 80 dependencies that I have. I can go back to my published modules. I can see under JFrog, and we’ll zoom in here, you can see I have a tar file, right? We can scroll back over. I can see the tree over here. I can go over into Artifactory, view how many times it’s been downloaded. It’s really just a Helm chart, it’s not fancy. I want to go back to Slack, get back to where I wanted. I want to show you specifically the Dockerfile. So I have a Docker manifest, right? Scrolling down, I can see my manifest.json. I can view that, and I can go over into the tree where I can see that information. So not only do I have traceability because all my notifications are in one place, I also have traceability because I have links back to everything, because these systems are integrated. I can go to Xray and I can see this was still the high build. I’m still waiting for the next build to start, so let’s double check and make sure that is starting.
There we go. So I’m waiting for the next scan to happen. Once that gets filtered in, I’ll be able to see whether or not there are issues detected. Let’s go ahead and check. Okay, it looks like that finished, so let’s go back over to Slack. There we go. Now I have build artifacts. That’s still green, but my Xray scan report is now green. I was able to fix the alerts. Let’s see how many alerts I was able to fix. I fixed all of them, by changing to an image that I owned, that I patched. I still have two security alerts, but they’re low, and I can sit there and address them based on priority. But now I have that visibility, and I can get that information when I go back over to Slack and go back to artifacts, I can go back over to that manifest, go back over and link right from here. There’s my latest build, as soon as it loads. Here’s my Xray report, and now I have no scanned issues. And if I want to see more, I could see more right here from the details in Xray. So let’s scroll back out, looks like I did this well, so it looks like it’s deploying to the environment.
Now, let’s talk about this. Environments are a relatively new concept in Azure DevOps, but all I did was set up environments based on my Azure cluster, or I could do any kind of Kubernetes cluster. I can have one environment deploy out to three different clouds. I could do Google, Amazon and Microsoft, because I want to be inclusive of everything.
All I do is go to my environments, click new environment, choose Kubernetes. I probably need to give it a name, so I’ll go ahead and say test, and then I just choose the provider. Is it generic, or is it Azure? Azure’s going to help me create a service principal; with generic, you’re going to create a service account. But once you go into the environments, I can see the information. I can drill down into it, and here’s where traceability gets really cool. I can see the actual pods. I can see the replica set. I can see the selector, the labels, all the information that I would manually run commands for, I now have traceability and visibility into what is running, where it’s running. I have logs that I can see. I don’t have to try to see, is this up and running? I can see the actual YAML of my pod.
I can dive as deep as I need to go, and if that’s not enough, I also have Azure Monitor set up on this cluster, so I can see the CPU utilization for my nodes. I can see that as a whole, which I’m waiting for to go back to average. I’m not exceeding even 20 percent there. My memory utilization exceeds maybe 26 percent, so I’m only at a quarter of what I could use. My node count is always three, so I know my nodes are online and healthy. I have 41 current pods, so I can see how many pods are actually running in my cluster, and I can see the server response time and server requests of my actual application, because I embedded Application Insights into my Helm chart and deployment, so it’s sitting there making sure your application is up and running. I could also take that one step further and do things like liveness checks or readiness checks, all part of my Helm chart, just using Draft, Helm, and some monitoring. You could use Prometheus or Grafana. It gets a little bit more complicated, but I’m just using really basic things. Standard CI/CD, Draft, Helm. There’s really nothing else that’s complex. Obviously I have JFrog Artifactory and X-ray, but I’m using the same CLI I would use normally.
So let’s go back to PowerPoint here real quick. We’ll kind of wrap up. All right, so let’s talk about what happened, because for those of you in the room who are maybe brand new to containers and Kubernetes, that might have seemed incredibly overwhelming. For those of you who have done this before, that might have been, “Ah, give me more, I want more waffles.” So essentially, I have code out of GitHub, and by the way, if you are new, every single demo I ever do is always open source, always on GitHub. There are links at the end. I even have videos of individual sections in easy-to-digest nuggets, so you can go learn. But the code’s on GitHub.
I used Azure DevOps as my Kubernetes pipeline. I’ve done this same exact demo. In fact, I did it I think a week and a half ago with Codefresh; it doesn’t matter which CI/CD system. You can get more insights with certain ones, but the pipeline is going to remain the same. I’m still going to have to compile my code. I’m going to have to build, and then I’m going to have to push it somewhere. Ideally you’re pushing it to a private container registry that’s highly available. I chose Artifactory. I have X-ray set up. We’ve done the same demo with Aqua for some sort of scanning. You want to make sure you know what’s going on in your containers, in your pods, and in your code. And then I packaged up my Helm chart. Even my Helm chart is in a private repository in Artifactory, and then I deployed it out to Kubernetes on Azure. I could have deployed it out to AWS or Google; it doesn’t matter. The principles are going to remain the same.
Now, again, for those of you who were affected by Thanos and have not been around for five years, if Kubernetes is new and you’re wondering if it’s candy, remember that it’s an open source container orchestrator. It was designed specifically to automate deployment, scaling, and management of applications. So there’s a lot of features and functionality, which unfortunately is in incredibly small print on this slide, but things like load balancing and secret management, which you can tie in with things like Key Vault, all of those features are built into it. You don’t need to add additional complexity to the complicated world of containers just to understand it. Utilize tooling that other companies have built to simplify your overhead.
That’s why I introduced Helm. Helm is the de facto package manager for Kubernetes. It’s powered by a template engine. I can’t tell you how many times I would screw up Kubernetes manifests because I would put the name of one label in one file, and then I couldn’t remember it in another file, and then I copied something and pasted it in and that screwed everything up. Tabs versus spaces, apparently. I think tabs won at Build, but Kubernetes is hard. YAML is hard. Kubernetes is complex. Let’s simplify. Keep it simple, right? Draft was designed to simplify Helm even further, so it gives you that framework and that scaffolding to get started. It gives you a basis. It’s not going to do it all for you; there’s some knowledge that you’re going to have to have on how things work, but it helps you get started.
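As a sketch of that scaffolding idea (the chart name `myapp` is just an example), Helm can generate the starting structure and catch exactly the label and indentation mistakes described above before anything reaches a cluster:

```shell
# Scaffold a chart: Chart.yaml, values.yaml, and templates/ where
# labels are defined once and reused through the template engine
helm create myapp

# Render the templates locally to inspect the generated manifests
helm template myapp ./myapp

# Lint the chart; tabs, bad indentation, and broken templates fail here
helm lint ./myapp
```

Draft layers on top of this by detecting your language and generating the Dockerfile and chart for you as a starting point.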
So the next slide I really want to talk about, because this is probably the most common question I get asked. What do I use? What’s going to make this easy? The answer is I don’t know, because your environment and your workflow are going to be drastically different than mine. In fact, I’ve had people, as you saw my terminal, and if you have a question about that, there’s a link at the end as well. I post pictures of my terminal all the time. In fact, it’s something that I’m very well known for, and so somebody commented on Twitter and said, “Why do you use Vim? Why don’t you use Emacs? Why do you use zsh and not Fish? Why do you use Oh My Zsh or Vundle? Why do you choose this?” And they were giving me crap for it. The difference is I have very specific reasons, but also, why does it matter? What works for me might not work for you, and what works for you might not work for me.
The important thing is that all of these tools are built on the same exact practices, so at the end of the day, the container edition of DevOps is, guess what? It’s no different than a regular pipeline. Certain tooling like Codefresh, for example, has container-based pipelines, so it offers you additional granularity and control, because you have additional isolation as opposed to managing build servers and build VMs. But at the end of the day, it’s still going to be built on the same framework. Whether you’re using Travis or Jenkins or Octopus, you’re still going to want to make sure that you have image scanning tied in, whether you’re using X-ray, which integrates with Aqua, and now JFrog just announced JFrog Pipelines. There are so many different tools, because they want to make sure that you have an option for what is going to work best for your workflow and your environment.
I was recently reminded that I can be very verbose, and I’m sure that’s just because I have a very detailed engineer brain, but I really tried to scale it back, and I think of that now even in my workflow. KISS. Now, you might be thinking of a different acronym. I made sure that mine is appropriate. Keep it super simple. No name-calling. Don’t overcomplicate the process. Look at the basics. That’s what’s going to make you a superhero and that’s what’s going to supercharge your pipelines. In fact, I woke up this morning and I saw this tweet, and I loved it. Kelsey Hightower actually retweeted it from Abby Fuller: I don’t know who needs to hear this, but not everything needs to be a Kubernetes CRD. Now, for those of you in the room who have never tried to extend Kubernetes: since Kubernetes is extensible, I can add on custom resource definitions, which add additional objects, something more than Deployment and Namespace and ReplicaSet and PVC. I can make it as big as I want, but not every single thing needs to do that. Just like my Slack notifications didn’t need to be a function. It can be a curl command. All it needs to do is offer value.
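To make that concrete, the Slack notification really can be a single curl call against an incoming webhook. A minimal sketch, where the webhook URL and message text are placeholders, not the ones from the demo:

```shell
# Compose the message once, then post it to a Slack incoming webhook.
# SLACK_WEBHOOK_URL is a placeholder; the post is skipped if it's unset.
payload='{"text": "Build succeeded: X-ray scan is green"}'
if [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
  curl -X POST -H 'Content-Type: application/json' \
    -d "$payload" "$SLACK_WEBHOOK_URL"
fi
```

No function app, no custom resource, just one HTTP POST from a pipeline step.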
Remember the beginning: as long as we’re both walking towards the end goal, we are walking in the right direction. And along that same theme, I recently got super invested again in Marvel’s Agents of S.H.I.E.L.D., and I love this quote: the steps you take don’t have to be big. They just have to take you in the right direction. So many people are sitting here thinking, “Kubernetes, Docker, I’m so overwhelmed. Liquid software, this pipeline tool, this, this.” You don’t have to get to the end of the race right off the bat. You’re never going to make it that way. So just take small steps. Bite off the pieces that make sense.
Here are some questions that can help you. Ask yourself, what are my main objectives? What do I want to accomplish with this delivery? What is going to offer value? And then follow it up with: what are some indicators that are going to help me figure out if I’m meeting those objectives? Whether it’s containers, whether it’s web apps, whether it’s serverless, it doesn’t matter. What’s going to indicate that I’m hitting that, or that I’m in alignment with the SLA that I’m promising my customers? And then you can follow it up with: what will my pipeline look like? The pipeline that I built is already on GitHub. You’re going to get a link to it. You can sit there and play with it. In fact, I have a secondary [inaudible 00:38:38] that I’ll throw up there that has a very similar pipeline for a project called Croc Hunter that works with Codefresh.
The pipelines look virtually the same, because they’re structured. We start with the structure. We’re always going to compile our code. We’re always going to have to build and push that code, or put it somewhere. We’re going to have to scan, do some kind of build testing, code coverage, scanning of some kind. We’re going to deploy to our environments, whether we’re deploying to dev, or whether we’re deploying to QA and prod. In between, when it deploys to dev, you’re probably going to do some testing. You’re probably going to do some UI testing. You’re going to make sure that it’s ready to promote into the next environments. Maybe you’re going to set up gates so that you don’t have to have a manual approver for each promotion. You can actually use gates where you can invoke a function and make sure that you are testing things accordingly. You can query a monitor alert and make sure that if there’s a certain number of browser exceptions, it doesn’t get into QA.
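Stripped of any particular CI/CD tool, those stages could be sketched as plain CLI steps. This is a sketch under assumptions: the registry hostname, chart path, and namespace are all illustrative, and `BUILD_ID` stands in for whatever your CI system provides:

```shell
# Compile and unit test the application
mvn package

# Build and push the image to a private container registry
docker build -t myregistry.example.com/myapp:"$BUILD_ID" .
docker push myregistry.example.com/myapp:"$BUILD_ID"

# Scanning (X-ray, Aqua, etc.) would run here against the pushed image

# Deploy the Helm chart to the dev environment
helm upgrade --install myapp ./charts/myapp \
  --set image.tag="$BUILD_ID" --namespace dev
```

Each CI/CD system wraps these same steps in its own YAML or UI, which is why the pipelines all end up looking alike.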
And then you’re going to follow up with your promotion. But with all of those to-do tasks, it doesn’t matter what tool you use; it’s going to come down to the same exact functionality. You’re going to have a private package feed, ideally, if you want to have traceability and visibility. You’re going to want to have a private repository to put that information in, whether it’s your Docker image, your Helm charts, or your artifacts. You’re going to use some sort of scanning tool. You’re going to use some sort of CI/CD tool, and every CI/CD tool is slightly different in the sense that it might do more of one thing than another, and maybe that’s more of what you need in your environment. Only you know that. And then you’re going to have your testing. Maybe you’ll use Selenium, or maybe you’ll use WhiteSource Bolt, or who knows. And then you’ll go right back to your CI/CD. You’re going to fall back on the basics.
No matter what, the foundation is the same. In fact, people start asking, “Well, Kubernetes is great, but I hear serverless is better. Shouldn’t I be throwing serverless into Kubernetes, and web apps, and I want it all?” No matter what you choose to do, make sure it offers value, but remember at the end of the day, the process you’re going to design is the same. Ask yourself, does this add value or does this add unnecessary complexity? Because if it doesn’t add you any value and it’s just going to add additional overhead, additional resources to manage, you’re just throwing money out. And most importantly, I told you I was going to mention waffles as much as possible. Remember at the end of the day, it’s just a waffle, and I’m not only talking about containers. Even your DevOps pipeline is just a waffle, if you remember that all it is is a delivery system, and you’re delivering value, or in a waffle’s case, you’re delivering happiness. Which, honestly, if you’re serving your end users, you’re probably delivering happiness as well. Or, as we started off in the beginning, life runs on code. Sometimes you might be delivering life and sustainability.
At the end of the day, though, it’s just a delivery system. Don’t overcomplicate it. Now, for those of you again who are new to Kubernetes, if you’re wondering specifically about some best practices, I sprinkled them throughout the session. For one, build small containers. You saw how switching my base image from Debian over to Alpine, and then making a few minor patches, not only reduced the size. I didn’t show you the size of the image, but one was 25 megabytes and the other one was five, so it was literally a fifth of the size of the Debian image. But I also have a smaller surface area for attack, so I can better secure it and have a better performing container. Utilize multi-stage Dockerfiles when you can. You don’t always need to. Either way, I’m still going to build my Maven JAR file either in my Dockerfile or outside of it, but the build happens first, and I don’t often need those same build components at runtime, so consider your architecture. And that leads us into application architecture. Use namespaces. Lock them down. Use managed service identities. Use role-based access control. Use Helm charts. Use things that simplify the process and don’t overcomplicate it.
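A minimal multi-stage Dockerfile along those lines might look like this. The image tags and JAR name are assumptions for illustration, not the exact ones from the demo:

```shell
# Write a two-stage Dockerfile: heavy build tools in the first stage,
# a small Alpine-based runtime in the second, so build-time components
# never ship in the final image.
cat > Dockerfile <<'EOF'
# Build stage: full Maven + JDK image, only used to compile
FROM maven:3-eclipse-temurin-17 AS build
COPY . /src
RUN mvn -f /src/pom.xml package -DskipTests

# Runtime stage: small JRE on Alpine, smaller attack surface
FROM eclipse-temurin:17-jre-alpine
COPY --from=build /src/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF

docker build -t myapp:latest .
```

Only the final stage becomes the shipped image, which is how you get the Alpine-sized result while still building with the full toolchain.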
Implement health checks. I briefly mentioned it, but that’s where I can set up liveness and readiness probes to ping my HTTP port and make sure it’s responding with a 200 status code. That way, because Kubernetes is declarative, if this is unhealthy, Kubernetes will try to restart it automatically. It takes that management out of my hands. Set resource requests and limits. No one wants any kind of memory leak. No one wants something just sitting there using all of their resources. It doesn’t make any sense, and it’s not going to deliver value. And then finally, be mindful of your services.
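Put together, the probes and resource settings described here look roughly like this in a pod spec. The paths, port, image name, and values are all illustrative assumptions:

```shell
# Apply an inline pod spec with liveness/readiness probes and
# resource requests/limits (requires access to a cluster).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myregistry.example.com/myapp:latest
    livenessProbe:            # restart the container if this stops returning 200
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
    readinessProbe:           # hold traffic until the app reports ready
      httpGet:
        path: /ready
        port: 8080
    resources:
      requests:               # what the scheduler reserves for the pod
        cpu: 100m
        memory: 128Mi
      limits:                 # hard caps, so a leak can't starve the node
        cpu: 500m
        memory: 256Mi
EOF
```

In the demo these settings would live in the Helm chart templates rather than a standalone manifest, but the fields are the same.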
So I always throw in: don’t rely on load balancers, and I get asked every single time why. Number one, they’re expensive, both from a cost perspective and from a management perspective. You can still route things through HTTP application routing, NGINX, Apache, or the Istio service mesh; there are so many other ways to handle traffic and maximize your resources. So Istio, Traefik, and a few other things live inside the cluster. You could use something like Traffic Manager or Azure Front Door, which actually sits outside your cluster at layer seven, and you can use that to load balance across multiple clusters in multiple regions. You can scale, and that leads me into external services: you can actually take better advantage of your resources by leveraging external services that were specifically designed for that task. That even gets into databases, storage, and so on and so forth.
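As one example of routing without a per-service load balancer, a single NGINX ingress can front a service; many services can share the same ingress controller and its one public IP. The hostname and service name here are placeholders:

```shell
# Apply an inline ingress that routes a hostname to an in-cluster
# service through the shared NGINX controller (requires a cluster
# with an NGINX ingress controller installed).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx     # one controller fronts many services
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
EOF
```

Compared to giving every service `type: LoadBalancer`, this consolidates traffic through one entry point, which is both cheaper and easier to manage.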
And finally, I never formally introduced myself, because it’s not the Jessica show, it’s apparently the container show, but thank you so much for having me. My name is Jessica Deen. I’m here frankly because I love the technology community, so I got asked earlier why Swamp Up is my favorite conference, and it’s because I get to hang out with amazing engineers like you. There’s no relation to James Dean. I put that in there because my last name has two Es. You can follow me online at jldeen on Twitter, Instagram, and GitHub. You can also add me on LinkedIn, but honestly, I don’t really check it that much, so I apologize to anyone who does, but I do have it. And then finally, this is probably the slide everyone has been waiting for. You can get all of the resources, including the deck, the links to the YAML files, the code, every single thing that I demoed today, at aka.ms/jldeenswampup19, or you can take a fancy picture of the QR code that will take you right to my gist.
And finally, thank you very much.