Talking Cloud Native DevOps
Speakers: Jessica Deen, Cloud Developer Advocate @Azure & Baruch Sadogursky, Head of DevOps Advocacy @JFrog
Class is in session! Join the Deen of DevOps, Microsoft's Jessica Deen, and JFrog's Professor Baruch Sadogursky as they demonstrate tools to help you streamline and automate your DevOps pipeline using Azure and JFrog. Jessica and Baruch will demonstrate how to easily containerize your app, use Helm to manage your Kubernetes dependencies, automate your builds, manage your container images and Helm charts, create and publish build info (the associated metadata), promote builds through various stages such as development, testing, staging, and production, check for security vulnerabilities and open source compliance, and orchestrate and manage your containerized apps in a cluster environment through a fully managed service.
RESOURCES:
Video transcript
Baruch Sadogursky
Hello, and welcome to JFrog Cloud DevOps Days. Today's day is dedicated to Azure, and I have the pleasure of having the real Azure Avenger here, my dear friend, Jessica Deen. Jessica, you are a true superfrog to us, a great friend to the frogs, so thank you for being with us again. Just to mention what "again" means: Jessica is an alumna of swampUP, the JFrog user conference, where she is consistently rated the best speaker with the best talks throughout the years. So you're in for a treat, folks. Thank you very much.
Jessica Deen
Thank you very much. Thank you for having me. It's awesome to be here today. All right, well, here's what we're going to talk about. Today, we're going to talk about developing with confidence, even in an abstracted world, even in a world where microservices, Kubernetes, decoupling the monolith, all of these are common topics. But it's really hard to have confidence in that. One of the first things that I like to start with is defining what exactly DevOps is. How does DevOps really play into this confidence loop and gaining confidence? Now, Baruch, you and I have had several conversations throughout the year about what exactly DevOps is and what definitions you can have. And the funny thing is, I think you can ask five different people and you'll get five different answers. They all ultimately go to the same end goal, but it's hard to put it into an actual definition, right?
Baruch Sadogursky
Well, I would hope that now, ten years into DevOps, people would kind of settle on one of the versions. But having a lot of people with "DevOps engineer" in the title maybe suggests the opposite: that some people think it's an engineering discipline. Well, obviously it's not. It's a set of collaborative practices, right, the vision of different departments, or different specialized professionals, working together through collaboration toward the goal, which is eventually delivering better software faster.
Jessica Deen
Absolutely. And just touching on what you said: it's collaboration, working with professionals and people toward a common goal. And again, it's kind of funny, because you and I have had so many of these conversations. And what I love is when people have really embraced DevOps and actually adopted that mindset, because as you said, it's not a job title. It's not something that you treat as an engineering discipline. It's not like, well, I can program in Node, so I can tie Node in with DevOps, and now I'm a DevOps engineer. It's not like that. Instead, it's actually the adoption of a set of practices. And that aligns with Microsoft's definition as well, written by Donovan Brown here at Microsoft: DevOps is the union of people, process, and products to enable continuous delivery of value, a common goal, to our end users. And I love this definition, because the most important word on this slide is value. Because really, as you said, we want to deliver better software and we want to do it faster. Well, we can't do that if what we're delivering isn't valuable. And in an abstracted world, it gets a little harder, because how do we define value? What does it mean? And if we're not delivering value, that begs the question: what are we doing and why are we doing it? So that's really the entire goal of DevOps: to enable continuous delivery of value and to be able to do so confidently. Why are we doing what we are doing? Exactly. So okay, if we've identified the definition of DevOps, we now know what exactly DevOps is. What is the problem we're trying to solve in an abstracted world, in a microservice world, in a container-filled world? After we've moved more cloud native, we've broken up the monolith, and we have our different APIs and services, what could possibly go wrong? What's the problem?
Baruch Sadogursky
Yeah, so a developer had a problem, and they decided to break it into microservices. Now they have 99 problems, one for each of their microservices, right?
Jessica Deen
Exactly. And one of the biggest problems I think I hear, and this is from my own personal experience but also from customers that I speak to, is that it's hard, if not painful, to actually write code or debug code locally while still satisfying those microservice requirements. How do you debug or understand where the problem is in a single API when you really need the context of the larger application? You can ask to gain access to a Kubernetes test environment, and you can sit there and kind of poke around. But wouldn't it be better if you could actually test locally, even with that context in mind? Another big problem is that often only your team members, only those who are on your team for your API, for your service, for your application, actually have context of what's going on. So if you're trying to work with other people on the team, maybe people who don't speak code, maybe designers or project managers, how do you really explain what the issue was and give them confidence that it's been resolved? Right, to make sure that we're really focusing on delivering that value and having confidence in that value delivery.
Baruch Sadogursky
And this confidence problem is huge, right? Because at the end of the day, the more complex your application is, the less confidence you have in the moving parts. And I'll give you one example, which is perfect. I recently spoke on a panel at Scale by the Bay, the big conference that is all about programming languages, functional programming, and, what's it called, type systems and whatnot. And one of the ideas that was thrown out there is: shouldn't we define types for our APIs now, so we can get to a confidence level in which, if you try to use an API wrong, it won't even compile? And obviously we can solve a lot of problems with this kind of type system. But we're very, very far away from that kind of confidence. And that means we need to guess: will it actually work? We can test it to a certain extent, we can review the code and try to deduce from the code whether it will work or not, but at the end of the day, the only quality that matters is the quality in production that we serve to our customers. And sitting behind our desks, working on our machines, we're very, very far away from having the confidence that whatever we are doing will work in production.
Jessica Deen
Absolutely. And I think that's one of the big things: having the confidence that just because we made a change at our desk, we can get that into production. I've spoken to developers before, and I've even been guilty of it myself: I make a change locally, and in order to test it, I do my local tests, right, I make sure everything's linted and all of that looks good. But in order to test it with the larger application, I go ahead and push it to a PR branch, I let the PR and everything run through and deploy to my test environment, and I rely on the DevOps process to do my testing. When, in reality, wouldn't it be better if I were actually able to test locally, and then have the DevOps process as a trust-and-verify step? It's that extra confidence level that, again, we're making sure that what's running in production is quality, because as you said, that's all that matters.
Baruch Sadogursky
This brings back memories from 15 years ago, if you remember, when we had this discussion: should we run the unit tests locally on our machines before we commit, or should we trust the CI server to find all the errors that we make? And I think the consensus now is that you try and test as much as possible, running the unit tests pre-commit, and only then you commit, and your CI server is kind of the extra step.
Jessica Deen
Exactly. Because quality is your responsibility as a developer. You have to make sure that you have confidence in what you're pushing in, but then you also have to take greater care with security. And that's another gray area that kind of gets in the way with containers: how do you also have confidence in the security, maybe of your dependencies, of your Docker container images? Even if you're a developer and you haven't really gotten into the actual structure of Docker or Helm charts or Kubernetes, all the code you're putting in there is still getting baked into a little container. How can we have additional tools to verify that those packages, those dependencies, everything is still secure, and that, based on policy, we're pulling from secure repositories or locations? That's another big area here. So we've identified quite a few problems. Now let's talk about the different approaches that we can use to really solve them. First, we can write code locally. And as we've talked about, it's hard to satisfy those microservice requirements. We want to be able to build quickly, we want to be able to test, we want to debug, we want to iterate, but we need to make sure that we still have fidelity to all deployed environments, right, regardless of whether it's in production. How can I make sure it's still in alignment with everything else?
Baruch Sadogursky
Yeah, and that's the problem. You really can't.
Jessica Deen
No. And that's, I mean, containers kind of solve that, but it's not a one-size-fits-all solution. And it's not the universal "okay, I'm cloud native now, all my problems went away," right?
Baruch Sadogursky
But we don't have containers anymore. They don't work on the new Macs, so we don't have containers anymore.
Jessica Deen
They do if they're ARM, but that's a whole different thing. You can update all your images to use ARM. I mean, that's different, they don't do virtualization the same way, right? That's the problem. But yeah, there are limitations here. By the way, for those who didn't get the joke: the new M1 chip for Apple computers, because it is an ARM chip, will not actually support running existing Docker containers; you would have to update to an ARM container image because of the structure and underlying hardware of how Docker works. There's actually a whole thread on Twitter that goes into all of that.
Baruch Sadogursky
Anyway, focusing on your own local development is very convenient for us, but not really: can you predict how it will perform in production? Even with containers, which are definitely a step forward in this direction, you're not taking it all the way. Especially not with microservices.
Jessica Deen
And especially not from a local perspective. So this is really from a developer perspective: how you as a dev can streamline your local work process to have that confidence, so that by the time it gets out into your pull request, and ultimately out into production, everyone else has the same confidence level that you have. So again, we talked about how you can do things locally, you can do things remote, and you can even do things hybrid. And if you can do things in a hybrid capacity, you're actually able to tick all the characteristic boxes: you're able to build fast, you're able to have more fidelity, you're able to scale with different app components, and you have something with ease of use. Now, what's a hybrid solution? The hybrid solution that we at Azure announced earlier this year is something called Bridge to Kubernetes, and we're going to talk about that. But first, before we actually describe Bridge to Kubernetes, I want to talk about the problem and the application we're going to be working with. We have the end user here over on the left, and then we have a bunch of different services running in Azure. We have different APIs running in Azure Kubernetes Service, everything that we've decoupled, we've adopted the cloud world, we have different languages, .NET Core, React, and we have different services that we're going to need to try to troubleshoot. It turns out that there's a bug in the bikes application, and we'll show you what that problem is. The services also use other services within Azure, such as Azure SQL Database and Cosmos DB. This is a very real-world application, and in very real-world scenarios, there are very real-world problems. One of the biggest issues is routing: how we actually test things, again, from that local environment. So what we're going to do is take the bikes API and test that single API in the context of the larger application and the larger architecture and infrastructure. You can see here, in a little routing example, that I'm going to be able to have an isolated API, that's the purple bikes version down there at the bottom, and still be able to communicate with the external, remote services running in Kubernetes; that's where the hybrid comes in. So that's where we have hybrid, right: I'm going to be able to use both my local and my remote APIs working together to gain that confidence from a local experience. And I'll be able to do so seamlessly.
Baruch Sadogursky
That's exactly a win-win, right? You do have all the context of the big thing, of production, of the real world, but you can also play with your own one of those components, without having to run the whole thing yourself and stub out calls in your application.
Jessica Deen
Exactly, you get the benefits of both, right? You get local and remote to give you a smoother development experience, even in this, again, abstracted world. So using Bridge to Kubernetes, which is an extension that you can use in Visual Studio Code, you're able to reduce your time developing and testing, because you don't need a dedicated Kubernetes sandbox environment; you can actually create your own through the extension itself. You can debug and iterate on your code directly against Azure Kubernetes Service. So now you can take the power of the cloud and make your microservice solutions simpler, right? That's one of the big things. And as we've kind of touched on, you can now debug your containers remotely, but you do so using local code. I'll explain that in the demo, but it's really, really cool, and it's super, super simple.
Baruch Sadogursky
Yep. So we will see. And it should be really, really mind-blowing. So yeah.
Jessica Deen
So anything that you take away from that previous slide, the biggest thing to remember is that because this is a lightweight solution, because it's just an extension that's helping you develop better and iterate quickly, no permanent changes are actually made to the Kubernetes environments. This is all set up so that you have a local sandbox in a hybrid scenario, relying on remote APIs but still using your local code. So no permanent changes are made, which is perfect. Now I don't have to worry about standing up a permanent isolated dev environment sandbox, and I don't have to worry about keeping fidelity across different environments, because I'm just using the actual code natively, together with the existing APIs that are running remotely. So here's the demo that we're going to work with today. We already showed you what the architecture looks like, but essentially, me as the developer is going to go in, I'm going to hit the front end of our application, and I'm going to be routed over to the bikes API when I make a request. You'll see why we keep calling it bikes; it's a bike application. And then from there, that bikes API is actually going to be redirected over to my workstation, over to my code, but it's going to do so in an isolated fashion. So if I would normally go to bikes.jessicadeen.com (not the real URL, but if it were), this is instead going to give me something like jessica-test.bikes.jessicadeen.com; it's going to give me a prefix automatically, and I can go through and change whatever that prefix is. But the cool thing is, it's going to handle all of that routing and all of that testing for me, so I can work on just that one API and still see how it's performing, again, in the context of the full application.
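For readers who want to see what that isolation looks like in code: the routing works by tagging requests that arrive through the prefixed subdomain with a routing header, and your services then need to pass that header along on their own downstream calls (Jessica touches on this header-propagation requirement later, during configuration). The sketch below is illustrative only; it assumes the `kubernetes-route-as` header name the extension's documentation used at the time, and the route path and reservation service URL are hypothetical, not taken from the demo repository.

```javascript
// Illustrative sketch of header propagation for isolated routing (not demo code).
const express = require('express');
const fetch = require('node-fetch');

const app = express();
const ROUTING_HEADER = 'kubernetes-route-as'; // assumed header name used by the extension

app.get('/api/bikes/:id', async (req, res) => {
  // Forward the routing header so the in-cluster routing manager keeps the
  // whole request chain inside your isolated copy of the service.
  const headers = {};
  if (req.headers[ROUTING_HEADER]) {
    headers[ROUTING_HEADER] = req.headers[ROUTING_HEADER];
  }

  // Hypothetical downstream call to another in-cluster service.
  const reservations = await fetch(
    `http://reservation-api/api/reservations?bikeId=${req.params.id}`,
    { headers }
  );
  res.json(await reservations.json());
});

app.listen(3000);
```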
Baruch Sadogursky
And this is great for debugging problems, or trying to figure out what is going on out there. But I think we can take it to the next step. We can take this concept of "let's look at your change before we roll it out" and make it part of the process. We can actually do it for every pull request, for every change that is coming in: take it, look at it, and decide if we want to go forward with it, in the context of the rest of the application running in the real world.
Jessica Deen
Yeah, absolutely. It's pretty cool when you actually start seeing it in action. Because, I mean, I've been working with Kubernetes for four years at this point, and even looking at this, there are a lot of moving parts. It can be overwhelming even for people who are experienced, and especially when you're new. It's really hard to know where to begin. How do you continue to be an effective developer when you have your team and your leadership pushing down these new processes and new tooling? How can you still find a happy medium? And this, honestly, really is that happy medium, and we'll talk about this at the end. So I'm going to go ahead and pause right now; we're going to get out of, not Visual Studio Code, we're going to get out of PowerPoint, and I'm going to go over here to my browser. This is my application, the bikes application. It's called Adventure Works Cycles. And I feel like this is a perfect application for the fact that we're all in quarantine right now, right? Because we can't really leave our homes, or we shouldn't be leaving our homes, especially if you live in the United States, in California, and we're still trying to find things that we can do that are safe, socially distant, but also fun and active, right? Something that still gets those endorphins going. I talk about fitness a lot; Baruch and I have had some fun conversations about it. But this is a perfect application. So let's say that I'm a user and I want to rent a bicycle. I'm going to go ahead and log in as Aurelia Briggs. You can also log in as the other test user if you'd like, but I'll log in as Aurelia, and I'll start finding a bicycle. I'll click on a women's cruiser here, the top suggestion. I like to look at pictures, but unfortunately, there isn't a picture of the bike. Now, it could be just this one item, but let's keep using the application. I'll try to find a different women's cruiser. This one has orange tires, so I like that one. I'll click on that one. Still no image. Let's find one for Baruch, maybe a men's bike. So, Baruch, do you like red? Yeah, red is fine. Red it is. Okay, and green for Jess. Exactly.
Yeah. Okay, so we still don't have a picture, right? So there is definitely a bug in this application: we're supposed to be getting a picture, because we can see it on the thumbnail, but we're not seeing the larger picture when we actually drill down into the item itself. So to fix this, we're going to go into Visual Studio Code, and I'm already in my bikes service. Just to show you down here in my terminal, I know I have fancy colors and everything, but if I were to go back and, whoops, actually I backed up completely out of the directory itself. Let me clear that and go back into just the bike sharing application over here. All right, and now if I do an ls, we should see that I have a bunch of different folders: I have assets, I have the bike sharing web, bikes, billing, databases, gateway. All of these different subfolders correspond to our different APIs. Microservices, exactly, the concept of microservices. But I'm already in bikes. Let me clear this. I'm already in bikes in Visual Studio Code; you can see I'm just in one folder, so that I can work on just the API with the issue. Now, you can see that I do have a charts folder here, and I have a Dockerfile. I don't actually need those; just know that I'm not going to touch them today. We are going to use Git, specifically GitHub, and then I'm also going to be using an extension, as we talked about, called Bridge to Kubernetes. You can get that from the extension marketplace in Visual Studio Code; you just go and install it. I already have it installed.

Now, after you install it, you'll want to configure it, right? So let's close this out and go back over here. I already have an existing debugger configuration for locally debugging with NPM and Node. But how do I add in the Kubernetes part? Very simple. I open up my command palette, that would be Command+Shift+P on Mac or Ctrl+Shift+P on Windows, and I search for Bridge to Kubernetes. You can see I have Configure, and I click on Configure. From there, it's going to find the different services in my Kubernetes cluster. As long as my system has access to the cluster in question, I can start to test this. So I go and find the service, which we know is the bikes service, and then I'm going to specify a local port. This is going to be the port that I want to test on locally. Since it's Node, I'm going to use the traditional 3000, and I'll hit enter. Now I'll use the existing debugger, the same configuration that I would use for my local work, but Bridge to Kubernetes is going to hook into it, right? So I'll click this, and now I'm asked if I want to isolate. This is important, because if I selected no, it would actually redirect all incoming requests for the service to my machine, which would affect production or dev or whatever the cluster is. Of course, you don't want to do that, so instead I'm going to select yes. That way it's only going to redirect requests from the subdomain it gives me, a made-up random one, jessicad-4495. This actually uses header propagation, so you have to make sure that your application propagates headers; that's a conversation for a different time. But once you do that, the configuration is created. You can see right here, it actually updated my VS Code launch file, and now I have a configuration for "Launch via NPM with Kubernetes." It's virtually the same as my "Launch via NPM," using the same port, only now it's also using a pre-launch task.
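For reference, here is roughly what that generated configuration can look like. The launch configuration stays the same as the plain "Launch via NPM" one, just with a preLaunchTask pointing at a bridge-to-kubernetes task in tasks.json. This is a sketch based on the fields Jessica names (service, ports, isolateAs); exact property names may differ between versions of the extension, and the isolateAs value is just the randomly generated prefix from the demo.

```jsonc
// .vscode/tasks.json (sketch; field names are approximate)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "bridge-to-kubernetes.service",
      "type": "bridge-to-kubernetes.service",
      "service": "bikes",            // the in-cluster service whose traffic gets redirected
      "ports": [3000],               // the local port the bikes code listens on
      "isolateAs": "jessicad-4495"   // the generated routing prefix / subdomain
    }
  ]
}
```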
If you go take a look at the task, I have a bridge-to-kubernetes service task: I have the service name, I have the ports, and I have an isolateAs. This is where, if you wanted to change the isolateAs prefix or subdomain, you could. But that's how that part works; it did everything for you, so you don't have to worry about it. Now that we have that working, let's go ahead and test it. I want to make sure I select my "Launch via NPM with Kubernetes" debugger, and I'll go ahead and start the debugger. This is where all the magic really starts happening. I don't have to do anything at this point. You'll notice that the bridge-to-kubernetes service task starts, it starts redirecting services, it starts attaching to the cluster. It knows the current namespace, the target service, the service port, and it's isolating the service with the header that we give it. Now I have to make sure that I give it administrative permissions on my machine, because it's actually going to temporarily modify the hosts file for the services. So it's going to bring the reservation API, the other APIs, it's going to bring all of that to a 127.x.x.x local address.
That's only on your machine. That's only on my machine, right. So I also have to give it my password again to make changes.
You can hear my boss, my dog, trying to talk in the background too. So you can see it's launching the endpoint manager. And by the way, this also works because there's a routing service living in your Kubernetes cluster, but you don't have to deploy it. If that pod or that deployment doesn't exist, the extension will install it for you. There's literally nothing that you have to sit there and do aside from installing the extension and configuring it with the cluster you want to use. That's great user experience, or developer experience, developer experience and user experience. I mean, it really is, especially because as developers, understanding networking and routing and hosts files and IPs is not really something that we do every day. It is my background, so I understand it.
Baruch Sadogursky
That's what I wanted to say: it's easy for you, it's so easy, because you have that background. But as someone who
Jessica Deen
is just a developer, it's not. And I've learned that the more I've gone into development, the more I've forgotten about ops, which is a completely different problem. But you can see that now my debugger has started, and you can see that the container port for the application is normally port 80, but it's available at localhost:3000. It's just redirecting. You can see all the different services I have; I even have RabbitMQ on the server, I have users, and anything that I have running all got rerouted to a 127.x.x.x address, so I have access to it locally. Now, what does that mean? It means we can actually set a breakpoint and hit that breakpoint while accessing our application.
Baruch Sadogursky
It's exactly what we want, because we need to figure out why we don't have our images.
Jessica Deen
Exactly. So I'm going to go into my server JavaScript file, and we'll scroll up here just so we can see what we have going on. We can see that we have Express, we have MongoDB, and I can start seeing some of the variables that we're setting and some of the functions. We have one where we're doing app.get, so we're going to find our bike. It's probably not in there. We have a new bike, we have update bike; it's probably going to be in a get, we need the list. So here's our get bike. And if we keep scrolling, we can see that we have a variable where the bike equals the result. So I'm going to go ahead and set a breakpoint right here so that I can see what that result is, right? Now that I've set that, I can actually, still in Visual Studio Code, click down here at the bottom where it says Kubernetes. This is the connection that the debugger has made. When I select that, it brings up all the different URLs for all the different APIs I can connect to. Well, I want to go to the bike sharing web that's been isolated, right? That's my own personal environment. When I go to that, here's my own login. This is different than anything that's running in our dev or production or whatever you want to call that other environment; this is me, locally. And you'll find that out because we set that breakpoint: when I sign in again and click on any bike, I'm taken back over to Visual Studio Code where I've hit that breakpoint. I can hover over result, and I even have IntelliSense. I can see everything that's being called, I can see the object, I can look things up, see the ID object, I can see anything. I can see that there is an image URL that's supposed to be used. I can even, if I want to, go back over to the actual debugger itself, see my call stack, and start dropping down into different files. I mean, this is true development, right? I can use it to find the problem. Now, in this instance, it's not so much about how to develop, right, we all know that part. Instead, I want to actually fix the problem. So it turns out that somebody hard-coded the image URL, as we can see with this bike.imageUrl, so I'm simply going to comment that out. Okay. I'll remove the breakpoint here, and we'll restart the debugger. Now we'll go back over to our browser here, and I'll simply refresh that page. And now I'm able to see a picture. And again, this is still local. Just to prove that, if I delete this prefix and hit enter, it's still broken. Going back to mine, it's not broken. And again, this is not running in a container. For the bikes API code right now, I didn't have to push anything over, I didn't have to deal with Docker or Helm. The bikes API that you're seeing is the native code running on my system, the same way it would if I were just doing npm start. Yep, it's just native code. So now that I've fixed the problem, I have confidence, right? I'm glad that I was able to fix my own error that I probably hard-coded in the first place, and I'm confident in this fix, so I'll stop the debugger. Now I want to go ahead and check in these changes. I don't want to check in the launch and tasks files, because that's stuff that was specific to my system. But what I do want to check in is the server change, where we can see that we actually commented out that hard-coded image URL, right, we don't need a static file. So I'm going to say "fixed hard coded link," I'm going to go ahead and commit, and you can see I'm still on my bike image fix branch, right?
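For context, the change Jessica commits is essentially a one-line removal in the bikes service's get-bike handler. The sketch below is a reconstruction from the narration, not code copied from the demo repository; the route path, the bikesCollection helper, and the placeholder path are hypothetical.

```javascript
// Sketch of the fix described above (route, collection, and file names are hypothetical).
app.get('/api/bikes/:bikeId', function (req, res) {
  bikesCollection.findOne({ id: req.params.bikeId }, function (err, result) {
    if (err) return res.status(500).send(err);

    var bike = result; // the breakpoint in the demo sits on this line

    // The bug: a hard-coded placeholder image was overwriting the real URL.
    // Commenting it out lets the imageUrl stored with the bike come through.
    // bike.imageUrl = '/static/placeholder_bike.jpg';

    res.json(bike);
  });
});
```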
And we're going to go ahead and push that up to my remote repo, and then we'll go take a look at how we can now use DevOps, or specifically JFrog Pipelines, to take this even further and give everyone else confidence. Exactly.
Baruch Sadogursky
So this is what I wanted to comment on. Now you've fixed it, you know what you're doing, and you are confident in your commit. But what you are doing now is not committing to master, or to the main branch, so that it goes directly to production. Instead, you are, as a good citizen, opening a pull request.
Jessica Deen
Yep. So we're going to go here, and right now I'm going to say compare and create pull request. It's going to go from my bike fix branch into master, and I'm going to go ahead and create the pull request.
Baruch Sadogursky
And now I am your colleague, and I want to review your pull request. I look at the change, and I do my best to understand whether this commented-out line actually fixes the problem. Now, frankly, I don't know. I would love to see it, to understand whether it was fixed or not, instead of just trying to understand how commenting out this line fixes the bug that we had. The problem is, the only way for me to see it is to approve the pull request, let it run, and see it in production.
Jessica Deen
I mean, the other option, which would be even more painful, would be for you to pull down my pull request and test everything locally in your own environment using Bridge to Kubernetes.
Baruch Sadogursky
And lazy as I am, I was ready to let it run to production to see the changes, instead of trying to recreate your environment on my machine. Because, frankly, both options are horrible.
Jessica Deen
Yes, both of them are awful, and you shouldn't do either one of them. Instead, what you should do is make sure that you have a confidence check built into your DevOps pipeline, and you can do that using JFrog Pipelines. So you can see that I actually have a pull request bikes API pipeline that was previously successful. Spoiler alert: we've tested it, though it wasn't always successful, I did have two failures. And now we have one that's running. This was automatically triggered by GitHub: the second I made that push over into my GitHub repo and opened the pull request, this automatically kicked off.
Baruch Sadogursky
Before we start to dig into that, let's talk a little bit about JFrog Pipelines and then figure that out. This is, among other things, a CI server, right, a build server: we had a commit, and it's running and trying to build it. That's what you should recognize immediately, like, hey, I know what that is, it's a CI run. And it actually is, but taking it to a higher level, JFrog Pipelines is one of the products inside the JFrog Platform, and as you can see, that's exactly what Jessica is using now. Its purpose is wider than just building your code; it's actually an orchestration tool more than anything else. It means that you can build pipelines, and eventually pipelines of pipelines, that will take your code from development all the way to production, including distribution to wherever your code should be distributed. And obviously, the most common practice will be what we are doing now: we're building some code, then containerizing it, then building a Helm chart, and eventually deploying it to Kubernetes. That is, as of today, as of 2020, probably the most popular use case when we're talking about tooling for enabling DevOps. But it's not limited to that; you can actually have very sophisticated pipelines that go all the way to distribution to remote edge locations, cloud computing, IoT, and whatnot, and the scenarios are unlimited. So we're going to show you a little bit of how Pipelines works on this particular example, but keep in mind it's actually much wider than that, and it also supports tons of other scenarios.
Jessica Deen
It does. And one of the cool things about JFrog Pipelines that's very unique for a CI solution is that every step, rather than running on a traditional build server, runs in a container. Traditionally, CI systems have always run on a server of some kind, so you have to set up a build server with all the tools that you need, or script wget-ing and pulling all the tools that you need into your pipeline. JFrog Pipelines works with containers right there; it's a container-based pipeline solution. Every step runs in a container, which means that rather than (a) managing my own build server, or (b) making sure that I have my own tooling available for each step to run, I just run a very small container to run one single job with the single tool that it needs. It's a lot more effective, it's faster: you can see this entire PR pipeline ran in two minutes, because we were able to cache layers. And by the way, this isn't my own infrastructure here; this is the JFrog free tier, and you still have caching. Not a lot of providers will give you caching even on a free level, which is kind of nice.
Baruch Sadogursky
Yeah. And it's actually a pretty generous offering compared to the alternatives. But in the real deal, you have those node pools where you can actually decide how you want to run this containerized environment. And what we have there is a set of options: you can bring your own Kubernetes cluster and run your builds on the nodes of that cluster, you can select which images you want to run, you can bring your own images, as you just mentioned, and at the end of the day, you can just connect bare metal for the types of builds that cannot be containerized.
Jessica Deen
You can even edit it. You can see this is the dynamic node pool that, again, is provided as part of JFrog, and you can just enable your cache, adjust your settings, have it clean your cache weekly, or whatever you want to do, and then make it available to your different pipelines. It's actually very easy to set up, particularly because they have integrations. You can see I have integrations over to my GitHub, over to Azure, over to Artifactory, where we're putting the images; we're going to use that to address the security problem that we wanted to fix. And then Kubernetes for my cluster running in Azure. The Azure integration gives me access to log in and actually get information about my Azure resources, right, because we have a larger application, and the Kubernetes one gives me access to my actual Kubernetes cluster.
Baruch Sadogursky
Those integrations are powered by HashiCorp Vault, actually. So you can know that the credentials that you put into those integrations are safe; you don't expose them as environment variables, which we know, especially in a container environment, can actually be read, or read from a file. So nothing like that: it's a hardcore Vault behind the scenes. And very soon, you will be able to bring your own Vault with your existing credentials and expose them as integrations for different pipelines as well.
Jessica Deen
Yep. And now let's talk about the different steps that we ran. Let me zoom in here and make this bigger. There we go; I'm using the Apple mouse, so it's like, which way is zoom in again? I don't know. So you can see that we're doing the normal steps that we would do to test, right? And again, from a developer perspective, we might not care about this so much, but this is more from the DevOps concept of how we make sure, again, that everyone has that confidence. We're taking our changes and building them into a Docker image, but we also talked about how we want to make sure that all the packages and everything we put in Docker are secure, so we'll take a look at that in a second. We push that over into our container registry, which is JFrog Artifactory, and we make sure that we create a valid branch name. What that means is: we already have a branch that we pushed to, and we're going to take the name of that branch and use it in Kubernetes. Kubernetes has a max character limit, so we're going to slim that name down so we don't exceed those characters, and therefore we don't have any unforeseen errors that we don't need.
Baruch Sadogursky
Before we continue, let's talk a little bit about those small circles that are coming in and out of each and every step. Those are the declared inputs and outputs of each step. For example, you can see how the pair of Docker build and Docker push generates this small, how do they call it, build info (and here I am trying to describe what I see in the picture, it's not like a joke, thank you). So you see this node, and it's actually a visual representation of the JFrog build info, the bill of materials that we gathered throughout the build and published to Artifactory. And here we say that the next step, or the entire next pipeline, will depend on this bill of materials, and it's actually available to us in the next steps in case we want to use any of the information from it. You can also see the small icons here as inputs; this is where we actually need something from our GitHub repo as an input to a step. So for creating the valid branch name, we need information from the commit; for creating the Helm PR install, we need information from the commit. So this is where we declare what we need for this build, and what this build produces as an output.
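To make the pieces Baruch describes concrete, here is a rough sketch of what a pipelines.yml for a PR pipeline like this one can look like: a GitRepo resource as the declared input, a BuildInfo resource as the declared output of the Docker build/push pair, and a small Bash step that trims the branch name to respect Kubernetes' 63-character name limit. Resource names, repository paths, and image names are placeholders, and the exact step fields may differ between JFrog Pipelines versions; treat it as an illustration of the structure, not Jessica's actual configuration.

```yaml
# Sketch of a PR pipeline definition (placeholder names; fields approximate).
resources:
  - name: bikes_repo
    type: GitRepo
    configuration:
      gitProvider: github_integration     # the GitHub integration shown earlier
      path: myorg/bike-sharing            # placeholder repository path
      branches:
        include: ^.*$                     # build pull-request branches too

  - name: bikes_build_info
    type: BuildInfo
    configuration:
      sourceArtifactory: artifactory_integration
      buildName: bikes_pr
      buildNumber: 1

pipelines:
  - name: bikes_pr_pipeline
    steps:
      - name: docker_build
        type: DockerBuild
        configuration:
          dockerFileLocation: .
          dockerFileName: Dockerfile
          dockerImageName: myregistry.jfrog.io/docker/bikes   # placeholder image name
          dockerImageTag: ${run_number}
          inputResources:
            - name: bikes_repo            # declared input: the commit under test
          integrations:
            - name: artifactory_integration

      - name: docker_push
        type: DockerPush
        configuration:
          targetRepository: docker
          autoPublishBuildInfo: true      # publishes the bill of materials
          inputSteps:
            - name: docker_build
          outputResources:
            - name: bikes_build_info      # the build-info node in the diagram

      - name: create_valid_branch_name
        type: Bash
        configuration:
          inputResources:
            - name: bikes_repo            # needs the commit to read the branch name
          inputSteps:
            - name: docker_push
        execution:
          onExecute:
            # Kubernetes object names are capped at 63 characters, so trim the
            # branch name before using it as a Helm release / namespace suffix.
            # (Reading the branch name from the GitRepo resource is omitted here.)
            - short_branch=$(echo "${branch_name}" | tr -cd 'a-z0-9-' | cut -c1-40)
            - echo "Deploying PR environment for ${short_branch}"
```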