Deploying your containers to the cloud can sometimes be complicated, but in this demo we are going to do it easily, step by step!
Our experts will deploy the JFrog Platform on the Azure cloud and install and configure the following:
- Artifactory as our binary repository manager
- Xray to help us detect licenses and security vulnerabilities
- Pipelines to automate our CI/CD process
All right. Hello. And thank you everybody for joining us in this JFrog Cloud Days.
The Azure day.
Yay. All right, so let’s start. First of all, Shiri, thank you so much for joining today. I super appreciate it. And let’s introduce ourselves. So my name is Batel Zohar, and I’m a developer advocate here at JFrog. Before that I was an embedded engineer, and then I moved to the HTS team and the support team at JFrog.
I love my doggies. As you can see in the picture, I actually have two beautiful dogs, and I love to play board games. So if you want to get hold of me later, or just want to discuss Kubernetes, DevOps, or anything else, I’m pretty available and I would love to chat. Here are my LinkedIn, email, and Twitter. And Shiri, again, thank you so, so much for joining us. I really, really appreciate it.
My pleasure. Being out of the house and in the beautiful Tel Aviv JFrog office, I’m just taking advantage of it while I can. So it’s a great pleasure to be here. I’m Shiri Hochhauser. I’m an ISV technology strategist at Microsoft, which means I work with software vendors and startups like JFrog (though JFrog is one of my favorites, obviously), helping them build their solutions on our cloud platform and do their magic on our cloud.
So I love boxing, and traveling, whenever that was still possible. And you can get hold of me on LinkedIn or via email. Unfortunately, I’m not as advanced as Batel is, and I’m still making my way through Twitter, but slowly but surely I’m going to get there.
You’re going to get there.
I’m going to get there.
All right, thanks again. And let’s start. Let’s discuss a bit about DevOps and continuous flow of updates.
So our idea, of course, is to automate everything. Right? I guess you’ve already seen this graph, so I’m not going to talk too much about it, but our idea here is to automate everything. Right? We would like our builds, our tests, our releases, everything, to be as automated as possible and to try to reduce human action, because as humans, we make mistakes. It’s sometimes really hard and pretty tiring to do the same repetitive task again and again. So let’s get to the job at hand, talk a bit about the JFrog Platform, and start this session. Our idea here is to create an end-to-end solution, from our VCS, our source control, all the way to our deployment, to provide you all the tools you need in order to create the best version and, of course, release fast. We would like to release as fast as we can.
So the first thing we have here, of course, is our VCS, our source control. Our source code, sorry. And we have Artifactory as our binary repository manager. We’re going to talk a bit about why we want to use Artifactory and what Artifactory is at all. We have Xray, which helps us stay secure and make sure we don’t have any open source licenses that are not allowed inside our organization. We have JFrog Distribution and Connect for deployment, to be able to deploy to different versions, servers, even some IoT devices. And on top of everything, we have Mission Control, which helps us monitor and configure our services. And we have JFrog Pipelines for CI/CD. So basically, you have a full end-to-end solution from one place to another, from the moment you create your code to deploying it to different services.
So let’s start from the beginning. What is Artifactory?
Who doesn’t know what is Artifactory?
Some people don’t know what it is, so let’s start. Artifactory is a universal binary repository manager. The idea is that it’s the place that can store all your artifacts, but it’s not storing your code; it’s not a VCS. It’s the place where, after you’ve created your code and built your package, you upload it to Artifactory. And then you have all the features Artifactory gives you, like full traceability, so you can see exactly what you created with each build. You have a smart caching mechanism. So let’s say I’m going to get some dependencies from a remote repository. Right? Of course, most developers are using the same code over and over again, just changing some small components, or they need a specific dependency, and so on.
So in that case, I don’t want to go and fetch this dependency again and again. Right? I’m going to use the same version, just in different teams or in a different repository. So I download it once from the remote repository, I have it locally inside my Artifactory, a kind of caching mechanism, and I’m going to use this code. So it’s a smart way to cache it.
We also have a smart storage mechanism, which is really cool: we’re using checksum-based storage. So think about the way you create your code. You created your package and you upload it to Artifactory. Another team is going to use your package, right? They want to extend its capabilities or create a new feature for this specific version. I can download it, and if I’m going to use the exact same version, I don’t want to save it twice. I’m going to save it only once, and then I’ve reduced my storage very easily. And of course, builds are fully reproducible in our CI pipeline, and we’re going to see that today.
So why Artifactory? It’s universal. It’s integrated with 27 package managers and CI servers, and it also supports generic repositories, which is a very, very cool feature. So if you don’t see a logo that you are already using as a front end developer-
But all of them are here, like Maven, NPM, like Helm. Everything is here.
Still, if you find something that you can’t see here and you really want to integrate it, you can always use a generic repository. A generic repository gives you the ability to create any repository you want, with any URL you want to get your dependencies from, so you have everything in one place inside your Artifactory.
The second bullet is the system of record: full metadata for all supported package formats. So you have all of the information. You’re going to see that in this session. So whenever I’m uploading a package through the pipeline, for example, I’m going to get all the properties of this package. I can see who uploaded it and when, and have all the information about this specific package.
The third one is storage optimization with checksum-based storage. Like we said, whenever we upload something to Artifactory, we calculate the SHA-1, SHA-256, MD5 and more, so it won’t save the same artifact twice. It just links to the existing artifact in case we need it in more than one location.
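The deduplication idea described above is easy to see locally. This is just an illustrative sketch, not Artifactory code: two files with identical contents produce the same checksum, so a checksum-keyed store only needs one physical copy of the binary.

```shell
# Two copies of the same content produce identical checksums,
# so a checksum-keyed store keeps one blob and two pointers.
printf 'my-package-contents' > copy-a.bin
printf 'my-package-contents' > copy-b.bin

SHA_A=$(sha1sum copy-a.bin | cut -d' ' -f1)
SHA_B=$(sha1sum copy-b.bin | cut -d' ' -f1)

# Identical checksums: the store would save the binary once and
# point both repository paths at the same blob.
[ "$SHA_A" = "$SHA_B" ] && echo "deduplicated: $SHA_A"
```

The same principle is why re-pushing an unchanged artifact costs almost nothing: the checksum already exists, so only metadata is written.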
Automation. Of course, we love to automate stuff. We have a REST API and a CLI. Artifactory Query Language (AQL) is our own language for creating very complicated queries to get information from the database, for example, to filter on properties, if you want to.
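As a rough sketch of what an AQL query looks like: the query below finds JAR files created in the last week. The repository name is a placeholder of ours, and the commented CLI call assumes a server has already been configured with the JFrog CLI.

```shell
# Sketch of an AQL query; "libs-release-local" is a placeholder repo name.
cat > query.aql <<'EOF'
items.find({
  "repo": "libs-release-local",
  "name": {"$match": "*.jar"},
  "created": {"$last": "7d"}
})
EOF

# Against a live, configured server it could be sent through the CLI, e.g.:
#   jfrog rt curl -XPOST /api/search/aql -H 'Content-Type: text/plain' -T query.aql
grep -q 'items.find' query.aql && echo "query ready"
```

Filtering on properties works the same way, by adding a `"@property.name": "value"` criterion to the `items.find` body.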
And we have integration. So we are building integration with automation tools for CI/CD.
Now here’s your lovely part about DevSecOps. Yes. Let’s talk a bit about DevSecOps.
Yeah. So what I told Batel before is that I was really excited to see it. There’s been such a broad movement toward embedding security as part of the DevOps process; it’s the shift-left security paradigm. So again, we want to automate everything, but we also want to incorporate security as an integral part of deployment into production. We don’t want to just track it after it’s already deployed. We also don’t want anyone to have to do anything active to check or scan their code or their deployment for vulnerabilities. We want it to be a native, integral part of the deployment process, so that if you were able to develop something, deploy it into production, and it made its way through to production, it means it’s also safe. It’s not just scalable and operating well; it’s also secure. And Xray, this is what it does. Batel will show it in detail, but it checks for vulnerabilities where the code is already built and pushed, right?
Yes, exactly. So Xray is actually a tool that protects us from security problems in our Artifactory. And with Xray, we also have IDE plugins. For example, for VS Code, I can just download the plugin. Take a simple scenario: when developers already see the fixed version, they’ll probably use the newer version that doesn’t contain the security vulnerability, and we are going to move anything-
And it’s so much easier doing it then. Right?
So we are going to show the developer, already at development time: hey, you are using a version which has known vulnerabilities; you depend on this and that package; please just replace it. And it’s just much easier to do it while you develop, and not later when it’s already packaged for the DevOps team, and definitely still better than doing it when it’s already in production.
No, that’s crazy.
Nobody would like to do that.
And it’s so horrible to do it then. It takes so long, and you need to get a restored version and everything again. So it can be much, much harder than doing it from the beginning.
Catching it early.
Yes, exactly. And release fast. You remember we wanted to release as fast as possible.
So when we talk about why we would like to use Xray, I think the most powerful thing here is that it’s natively integrated with Artifactory. Everything is in one place. I don’t need to move to a different location, I don’t need to upload anything again or do anything else. I already have all the products in my platform, I can see everything in one place, and I’m going to show it in the demo. So it’s natively integrated with Artifactory, with visibility and impact analysis, universal security and compliance, and much more. Everything here is in one place, and I can easily scan for vulnerabilities to make sure I’m staying safe before I move to production, where it’s super scary to debug this stuff.
And how do I know what I should address first? Do you have any mechanism for prioritization? Because obviously, if I’m going to see hundreds of vulnerabilities, I don’t know which one I should approach first.
Yeah, of course. So we have our own mechanism, and we also have the CVSS score that lets me know what the score is per CVSS and how critical it is. And we’re going to show that in the demo too.
But the basic overview looks like this. First of all, we have our database over here, the Xray database, which is updated every few hours by our amazing team, who collect more and more data all the time. Then we are actually going to go over every artifact, really every package that we have here, and extract it. Right? We extract every layer, for example, from our Docker image, and we scan it. And whenever we’re scanning, we have policies and watches: what we would like to do, and where to look for it. Right?
So the policy defines what I would like to do. I can scan for open source licenses to make sure that I’m not using a specific license that is not allowed inside my organization, like GPL. You’ve heard the crazy stories about exposing all of your code or paying a lot of money just because you used a specific license that is not allowed. And security vulnerabilities, of course: we want to make sure that we are protected, that nothing will attack us tomorrow morning, and that we’ll be safe.
So watches let me know where to look, and I can even create different policies for different scores. Let’s say I get a very, very high score. Right? I want to block my build; I want to make sure that I can fail the build. But if I found something that’s not that high, something low or medium, the chances that I’m really insecure in that case are pretty low. I don’t want to fail anything. I may want to update the security team or someone from another part of the organization, but I don’t want to crash everything and just block the build; that could be really disruptive in some cases. So I just want to add a webhook or a Slack notification, or something like that.
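A policy like the one described, fail the build on high severity but don't block downloads, can be expressed as a JSON payload against Xray's REST API. This is a hedged sketch: the policy and rule names are placeholders of ours, and the field layout follows Xray's v2 policies endpoint, which may differ between versions.

```shell
# Sketch of a security policy payload; names are placeholders.
cat > block-high.json <<'EOF'
{
  "name": "block-high-severity",
  "type": "security",
  "rules": [
    {
      "name": "fail-build-on-high",
      "priority": 1,
      "criteria": { "min_severity": "high" },
      "actions": {
        "fail_build": true,
        "block_download": { "active": false, "unscanned": false }
      }
    }
  ]
}
EOF

# Against a live instance it could be created with something like:
#   curl -u user:token -H 'Content-Type: application/json' \
#        -X POST https://<server>/xray/api/v2/policies -d @block-high.json
python3 -m json.tool block-high.json > /dev/null && echo "payload is valid JSON"
```

A watch would then attach this policy to specific repositories or builds, which is the "where to look" half of the pair.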
And of course, everything happens automatically; we have automatic actions. We don’t need to do this again and again. We configure it once, and everything is configured inside our platform. And here we can also prevent downloads and much more. Any questions so far?
No, that’s super cool.
Wow. So I think we’ve covered two out of the five or six products that together make up the JFrog platform. JFrog and Microsoft go back a long way, but in the past two years, the enhancements we made were essentially about providing the best experience on Azure to JFrog customers. Right? And I still remember that when you looked in that marketplace, there were only two offerings. If you go now to the Azure Marketplace, which is essentially our app store, there are over 40,000 solutions from third-party vendors.
Yes. Providing their amazing software, whatever they do, and making it available on our cloud. And the beauty of it is, again, you see here the different licensing tiers, the different offerings, and all of them say “Get it now.” That means this is a transactable solution.
So when you click this, it’s deployed right away for you, either on the JFrog SaaS multi-tenant offering, where you get your own environment, or, if for some reason you need to have it deployed on your own tenant for compliance or security reasons, you can also get it in a single click: a resource group is deployed with the components the JFrog solution needs to run. The payment also goes through your Azure subscription, and all of this is very seamless. You are redirected to the JFrog website to enter your license key or purchase a new one. You can also start the free trial. Right? You can also-
Yes, exactly. You also have the open source version over here, so we can easily just get it with one click. We have everything in one place and also-
Just try it out and see how it works.
Yeah, exactly. It’s so easy. You can just click on it and you have everything. I love this solution. Look at this. You have so many options and much more than that. You just took a screenshot, but I remember that we have much more than just 12 options here. So it’s super easy. Go and try it. And we hope you’re going to like it and check it during this workshop.
Yeah. And here we are going to overview the whole pipeline, the whole process from end to end, which is, I think, a very classic scenario.
Right? That each-
…each one of you is probably facing every week or month or quarter, whenever you roll out a new version. So let’s say you have your code and your containers in GitHub, and you push the new version of your container image to master. Now you want to build it, all the way through to deploying it into your Kubernetes cluster in production. So we are going to show how to leverage GitHub Actions, which is essentially the CI/CD tool by Microsoft, so that on a push of a new version of the code into your repository, you can, leveraging the JFrog CLI, the JFrog command line, build and push the Docker image into Artifactory. And we’ll also show how many cool parameters are mapped from GitHub to the JFrog platform, to Artifactory, along with the actual image. Right?
So we are going to see how it works. Then when the image is already pushed into Artifactory, we’re going to scan it with Xray. We’re going to show that unfortunately, we found a lot of vulnerabilities, but we are able to mitigate it much faster.
Yes, faster. We’re able to fix it. No worries. We’ll find a way.
We’ve got you covered. No worries. And then deploying it to your Kubernetes cluster all the way. And this is all going to be… We’re going to show it as a simple, short YAML file.
When the indentations are correct [inaudible 00:17:12]. I love it when the indentations are right. When they’re not, not so much. But I also think we’re showing here that security shift left. Right?
You can see it also in two sides here.
Exactly. So as we said in the beginning, we would like to move everything toward the developer’s side. Right? To shift left and make sure that we’re able to scan ourselves. So the first thing we have here is our GitHub. We’re going to put our credentials in as a secret, to make sure they’re not just plain text inside our code. And then we have Xray to make sure we’re scanning it again. So we want to move everything left, the shift-left idea, and basically scan from the beginning.
Along the way, we’re keeping you safe, also for the code itself when it’s actual code. So we are making sure that you are safe. And when it’s already been packaged into an artifact, Xray will keep you covered and safe. And then obviously we’ll still trace it when it’s in production-
Yeah, of course.
… and log it, but we are covering you along the way. Right? And we are making sure that it’s secured from the very first piece of code all the way through.
Yes, exactly. All right. So I think we can start, right?
Yeah, definitely. Let’s go into it. Awesome.
So before we created a build in GitHub Actions, we already had a build. Let’s go to our own build here.
So now we see the previously built and pushed Docker image that’s already in Artifactory. Right?
Yes, exactly. That’s before our changes. Let’s say we added another dependency or just changed something here. Before that, we already had some published modules; we had some artifacts. We can see the manifest.json over here, we can see the dependencies, and everything we need. We have the environment variables, which give me the ability to debug things very, very quickly.
And those environment variables, where do they come from?
In this case, they come from GitHub Actions, via the JFrog CLI. The JFrog CLI gives me the ability to collect the environment variables very easily. And then whenever I need to debug, like when things are breaking on a specific machine or having some issues, I can easily debug it from the UI, from the CLI, or with the REST API. I can find all the environment variables that were used as part of this build.
So it’s natively integrated with the CI/CD tool and is able to extract the details of the build.
Yeah. That’s why we’re using the JFrog CLI. And here, if we talk a bit about GitHub Actions, I have some native things we already used. This build also ran-
Here you can see that Batel needs to be blamed if anything doesn’t work.
She’s the one who wrote it. She’s a GitHub actor.
She’s the one who wrote it. She’s the one that executed it.
All right. Let’s just remove it. But this is another cool part here: you can see that the GitHub event name was “push.” Right? You remember that we want to make sure we automate everything, so we want to commit, push, and test for every single commit. So in this case, we created [inaudible 00:20:18] for every push. Of course, we can change it to a specific branch and so on, but just for this example, we use it for testing, and we actually create a build for every single push. So we have some information here. Let’s try to run it in GitHub Actions to see how it goes, and go back.
Oh, first. Sorry. All right.
So you want to run it first and then we’ll talk about it as it runs.
Yeah. But just a sec. Before, let’s go to the repository, so whenever they will run the workshop, they will see what they have here.
They will see that and see here.
Yeah, so I think this is one of the coolest things. As we said, it’s about bringing the experience to where the developer, the DevOps person, already is. So: scanning the artifacts where they are already stored anyway, the same way GitHub Workflows, which are essentially the CI/CD pipelines, are stored. As you see, a new folder is created for you the minute you create a new workflow, and it lives there with your code. So right along your code, we also save the YAML that tells you what happens to this code when a new version is pushed or committed into the master branch. It just brings the CI together with where the code is.
Yeah. That’s super cool. So it created the directory here. We can see the YAMLs, and of course we can create different YAMLs. Right? And basically split them into different steps. Right?
Yeah. So you would usually have a workflow. A workflow is a full pipeline from end to end, and it can be built from one or more actions, obviously. Those actions are like little steps or little building blocks, so you can make sense out of them. Right? You can obviously have, like, 12 different ones, but you can also take one action and have it call other actions. It just makes more sense to group together the actions and steps you’d want to run together as a workflow. And then you are also able to leverage other people’s actions, because they make sense; they’re common tasks that you see being reused and reproduced all the time.
Okay, cool. Cool. So as we said before, for every push, we going to-
We just don’t want to run it because it’s going to take a while. [crosstalk 00:22:41] Let’s do the view runs.
I’m sorry. Let’s go to view runs.
View runs. And here we can see the previous one and we can rerun just one of these.
Yeah. Let’s take the green ones here. So these are the building blocks that you just described and-
And as you saw before Batel went really fast on this, but some of them didn’t work for us and that’s fine. And later we are going to show you this one is going to run well, but that you are able to really fast see where it failed. Right?
Because you see every step that went okay, then you see a green little circle. And then wherever it like doesn’t work, then you are able to see where it clashed and where it crashed and to mitigate it.
Yeah, to my shame, of course. We don’t want to see all the red lights here.
We were working all day to make it work. But here we are going to rerun it, and while it runs in the background, let’s look at the actual code and what it does.
All right. So let’s go back here to the workflow and from a lovely YAML here.
So we start with that on, which is a trigger, which makes sense, because we want to say when all this is going to run.
So here we just decided, and here you can use any basic event on GitHub that we all know, so it could be push, commit, pull. And you can also say: I want it on push to a specific branch.
Right? Because I don’t want it to happen on every push, just when you push into master, or just when you push into staging, or whatever. We can configure the environment variables. And the important part is that you can see the runs-on [inaudible 00:24:14]: runs-on says what the host machine is, the build machine that is going to run this CI/CD pipeline for you somewhere in the cloud. And in the steps, you’re going to see that each and every step, as we said before, is an action. Each “uses” stands for an action, and we’ll show you that some of these actions are preconfigured and predefined, and some of them can be actions that we actually configured in a different file. So why don’t you show them what happens when we search for JFrog in the Marketplace actions? Awesome.
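The pieces described here (the trigger, the host machine, and the action steps) fit together roughly as below. This is a hedged sketch, not the demo's actual file; the action names and versions are assumptions taken from the public marketplace.

```yaml
# .github/workflows/build.yml -- sketch only, not the demo's exact file
name: build-and-push

on:
  push:
    branches: [ master ]      # trigger: only pushes to master

jobs:
  build:
    runs-on: ubuntu-latest    # the hosted build machine in the cloud
    steps:
      - uses: actions/checkout@v2        # predefined marketplace action
      - uses: jfrog/setup-jfrog-cli@v2   # installs the JFrog CLI for later steps
      - run: jfrog --version             # CLI now available (binary may be "jf" in newer versions)
```

Each `uses:` line is exactly the "calling someone else's action" idea the speakers describe next.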
So let’s go to the GitHub Marketplace here. You can see that we have a lot of different actions here. So whenever I need to configure something, I can just, first of all, search for it.
Maybe someone did it.
Maybe someone created it. Exactly. Maybe someone did it before. We like to reuse the same code.
Recycling is good for the environment.
Recycling is great. So first of all, here you can see that we have different actions, some of them created by the community. Like you can see here, some of the actions having some five stars, which is cool. Some of them much more like our official one.
But that also says that many people actually use JFrog with GitHub: they have their artifact management with JFrog, they have their code and their repositories in GitHub, and they have this scenario happening. The beauty of it is that the more people use the same scenarios, the more they share them with the community. This is what GitHub is all about-
…and you can use them, but usually the software vendors, like yourselves, want to provide an action that is the official approach for how to do this, and this is the action that we actually used and embedded in the code. So here we actually called it, right? As you can see, we did use this: “uses” means we are actually calling this specific action. It’s all about being able to use the JFrog CLI, the JFrog command line, executed right from our pipeline, right out of our GitHub Workflow, as an action. And this is because we now want to say which Artifactory we are going to connect to, and we want to be able to use the actual CLI commands. So you’ll see here later that we can actually use some of the… You see this one, the jfrog rt. And Batel taught me that rt stands for essentially the JFrog-
Artifactory. So we can actually use the CLI, and that is JFrog CLI syntax; we can use it because we had this “uses.” Right? And we are able here to configure the CLI, the command line by JFrog, to work from our pipeline. We obviously still need to tell it which Artifactory to use, because we will tell it, oh, Docker push, but to where? To which Artifactory?
Yes, [crosstalk 00:27:19].
So show us how we can connect to your Artifactory, Batel.
All right. I’ll show you the magic words in a second. So here we can see the very, very interesting documentation for the JFrog CLI. You can see that by default, we’re going to use the latest version of the JFrog CLI. And the first thing we want to do is configure Artifactory, like you said before. So I’m going to go to my terminal. Let’s make sure it’s clear.
And all the steps are literally here, you see?
Like, you just need to copy.
Exactly. We’re going to copy the exact step. It’s pretty straightforward, and it’s not that hard. So jfrog c add gives me the ability to configure the JFrog platform. And here’s my default server: you can see that I’m going to use the URL bateltest.jfrog.io.
So the URL is for the whole platform, and if you have the licenses for all the services, we can see it works for Artifactory, but also for Xray, for Distribution, for everything. Right?
Exactly. So we create the URLs by default as the platform URL, slash, and the service name. And here I can use an access token. If I want to create a specific access token for the CI/CD, for example, I can easily create it, find it inside my Artifactory, and reuse it in my YAML actions. I can also, of course, use plain-text credentials, so here I’m going to add my credentials. I don’t want to use a proxy or anything like that, and it figures it out. Great.
And now what we are going to get is a token in which we can… It’s essentially an access token, so we can connect. Not we, but our GitHub Action, our GitHub Workflow can connect to the Artifactory. Right?
This establishes a connection that allows it later to use all those CLI commands and to push and to do all of these things against your Artifactory instance. Right?
Exactly. So the JFrog CLI takes this token and basically exports it with the different information about this Artifactory server and so on. So I can run jfrog c export with the server ID, and here is my token.
Yeah, we did it.
Now, obviously you are not going to do something not smart, like just putting it in the code.
I hope so. [crosstalk 00:29:40] You know, this happened before.
You had so many crazy stories.
Yeah. But, Batel, you know how to do things right.
Oh, thank you.
So I’m sure this is one of the biggest problems, though. Right? It’s like saying, okay, we want to share our code and our repository with so many people, but you never want to have passwords and tokens in your code. And it definitely matters as we share our repositories with so many other people; we have them in GitHub. So how do you make sure that you don’t have any login credentials, hashed tokens, passwords, information-
… in your code?
Yeah. So the first thing that we’re going to use is the GitHub Secrets. Right? We have a great mechanism to keep our secrets, to make sure that we’re keeping safe and nobody can see our special tokens or our secrets. In that case-
So here we see that we are referring to it by reference: when we want to use it, we just call secrets dot, and then the parameter name, which I guess is how you named your secret before.
So let’s go and see whether I can now see your previous secret, what is going on there. We’ll go into Settings and into Secrets, and here we can see the names of the secrets that we configured before. And now if we go and want to update this with a new value, it seems blank. And obviously it’s not blank. Right?
You had put it there. Yeah. It worked before. [crosstalk 00:31:08]
So the beauty of it is… Because you don’t want essentially ever to be able to see the secrets that someone else had put there before. Even if you have full access to the repository, you don’t have access to the token. Even if you have the rights to change the token, you still shouldn’t have rights to view an existing token and leverage that right?
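The pattern of referencing a secret by name, as just described, looks roughly like this in a workflow step. This is a hedged sketch: the secret name `JF_ACCESS_TOKEN`, the server ID, and the URL are placeholders of ours, not the demo's.

```yaml
# Sketch: consuming a GitHub secret in a step; all names are placeholders
- name: Configure the JFrog CLI
  env:
    JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}  # injected at run time, masked in logs
  run: |
    # the token reaches the step as an environment variable and
    # never appears in the repository or the workflow file itself
    jfrog c add my-server --url=https://example.jfrog.io \
      --access-token="$JF_ACCESS_TOKEN" --interactive=false
```

Anyone with write access can rotate the secret's value, but nobody, including that person, can read it back out afterwards.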
Super cool. I love this picture. So now that we’re keeping everything safe, it looks like it’s super protected. We have everything here. Let’s update the token to make sure that I’m using the latest Artifactory version here in the right server. And let’s go back to the YAML. So I told you, I love YAML. So I love it so much. It’s super readable. Let’s see what we’re doing here.
When the indentations are right.
And that doesn’t happen often. I have to say, I hate YAMLs. I have a confession. I hate it.
No. No. How do you hate it?
Because it’s constantly breaking, and then I see that in line 102 I accidentally hit the space one more time.
But look at it. It’s so cool.
When it’s working, it’s beautiful. I do agree.
Now it’s working. It’s great.
Yeah, now it is. So after we connected to that, or not even connected, we have configured our Artifactory access token as an environment variable. This is where we now want to build and push the Docker image. We want to build the Docker image and then push it into the Artifactory instance. So here we simply use docker build, and we want to build that Dockerfile. This is why we also change the directory, because the Dockerfile is under the workshop app.
Yeah, by default it’s going to the root directory.
To the root directory, right. And before, we got an error that it didn’t find our Dockerfile.
So obviously it’s about finding the right path. And then we are also using the tag. So here we are essentially able to give this Docker image, when it’s built, the target we want. Its name is essentially going to be this ID, but this first part essentially stands for your Artifactory instance. Right? And this part stands for the virtual-
For the virtual repository, exactly.
For the virtual repository in your Artifactory. Correct?
You’re right. Correct.
And this is just the name of our project we wanted to put in AKS. Right? Because this is our deployment.
Right. Because we're going to use AKS.
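For readers following along, the build step being described might look something like this in the GitHub Actions YAML. The instance URL, repository name, app directory, and tag here are placeholders, not the exact values from the workshop:

```yaml
- name: Build Docker image
  run: |
    # The Dockerfile lives under the app directory, so change into it first
    cd workshop-app
    # Tag with the Artifactory instance host and the virtual repository name
    docker build -t myinstance.jfrog.io/docker-virtual/workshop-app:${{ github.run_number }} .
```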
And now, just wondering, though I also know: this jfrog rt command was set already. The rt stands for Artifactory, and this is docker push, I assume. But why did we use just the normal docker build command to build it, though we still built it with the help of the dependencies and everything in Artifactory? Why didn't we simply use the docker push command? Why did we use the CLI one?
So we could also use docker push, but if we use docker push, we won't be able to collect the environment variables. You remember that I showed you all of the information that we collect, that it's important and we can easily trace it. We want to make sure that we have this information for the future. And therefore we use jfrog rt docker-push. We'll do that with the JFrog CLI.
So much better. So your docker push is just docker push on steroids, plus extracting tons of parameters from the CI/CD to Artifactory that will be there with the published image.
Yes, exactly. And even more than that, whenever we're using the JFrog CLI, it will use the checksum-based deploy mechanism. And if we already have this artifact inside Artifactory-
If it hasn’t changed at all-
… and we don’t want to replace it and just take more space.
Exactly. Space and time and everything. It will be much faster because we already have it inside Artifactory.
Super cool. So definitely I’m going to use the one from you guys.
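As a rough sketch, the push step with the JFrog CLI could look like the following, assuming the CLI is already configured with the access token from the secret; the build name, repository, and instance URL are placeholders:

```yaml
- name: Push image to Artifactory
  run: |
    # docker-push uploads the image, records build info, and skips layers
    # Artifactory already has (checksum-based deploy)
    jfrog rt docker-push \
      myinstance.jfrog.io/docker-virtual/workshop-app:${{ github.run_number }} \
      docker-virtual \
      --build-name=workshop-build \
      --build-number=${{ github.run_number }}
```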
Totally cool. So now, essentially, assuming that all of this passed correctly (and soon we are going to go and see whether the run ended successfully, fingers crossed), after the Docker image is pushed into Artifactory, it then gets deployed into Kubernetes. So first, we created a Kubernetes cluster beforehand. It's so easy with Azure, to be honest; you just go through the wizard. We will show later the one that we already preconfigured, but we want to be able to now deploy the image. This is the point, right? You want to have the deployment up and running, so we first need to connect to our Kubernetes cluster, so we'll have permissions to roll out a new version. So here it's the same way. Right?
So we’re using another secret?
Exactly. We created another secret called AZURE_CREDENTIALS just to make the connection with Azure.
With the JFrogAKS cluster. And here you see, we are again using… and this also shows, this is another action, again an atomic sort of step, which is about setting context. You want to be able to get credentials and log in to the cluster as a specific service principal that has permissions. So you provide to this action the credentials, the cluster name and the resource group, and what it does in the background is essentially logging you in. And now every command that we run from here onwards would run in the context of this service principal's credentials. Right?
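A minimal sketch of that step, using the azure/aks-set-context action; the cluster and resource group names here are placeholders for your own setup:

```yaml
- name: Set AKS context
  uses: azure/aks-set-context@v1
  with:
    # Service principal credentials stored as a repository secret
    creds: ${{ secrets.AZURE_CREDENTIALS }}
    cluster-name: JFrogAKS
    resource-group: my-resource-group
```

Every kubectl or deploy step after this one runs under that service principal's permissions.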
That’s amazing. It’s super easy. Now just assuming the user-
It just makes it more organized. Right?
Yeah, definitely. I love it.
Yeah, it's so cool. And obviously the last one is the actual deploy. And here is the beauty of it. What I really like about this is we are saying here, we are going to essentially take the image from… and what we give here is the actual link to the image in Artifactory. But not only to the Artifactory instance; again, to the virtual Artifactory repository. Right?
And then to the actual AKS. So we're referring to this very specific image that we built and published, and we want it now deployed to the AKS cluster. And we are giving the deployment YAML to specify the technicalities [crosstalk 00:37:24] and pod specification. But we are essentially telling it, go to that image that we just built three steps ago. Right?
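The deploy step could be sketched like this with the azure/k8s-deploy action; the manifest path and image reference are placeholders:

```yaml
- name: Deploy to AKS
  uses: azure/k8s-deploy@v1
  with:
    manifests: |
      deployment.yaml
    # Point the deployment at the exact image we just pushed to Artifactory
    images: |
      myinstance.jfrog.io/docker-virtual/workshop-app:${{ github.run_number }}
```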
From Artifactory. And do that. And again, can you say what those two last JFrog command lines are?
Of course. So we have jfrog rt again, JFrog Artifactory, bce, which is build-collect-env. It's collecting all the environment variables, like if you want to see the host machine that ran it on GitHub and other information about the runner. And the build publish, jfrog rt bp, which is build-publish. That provides you the build information and creates a new artifact called the build info. And we have the full JSON file with all of the information about it, and we'll see how to debug it and what we can learn from this publish.
It has also a lot of parameters about the host machine. Right?
We've seen that before about the build host. It's provided to you by GitHub, and I don't think you have visibility into it otherwise. Incredible.
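Those two last commands look roughly like this; the build name is a placeholder:

```yaml
- name: Publish build info
  run: |
    # bce (build-collect-env) collects the runner's environment variables
    # into the build info for this build name and number
    jfrog rt bce workshop-build ${{ github.run_number }}
    # bp (build-publish) publishes the build info JSON to Artifactory
    jfrog rt bp workshop-build ${{ github.run_number }}
```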
It's really, really cool, in case you need to debug stuff. And look at this. We have a green build here.
Let’s click on the build really quick to see what we have here.
I’m so excited. Wow.
Yeah, that's amazing. So, as you said before, we have all the steps. Right? It's pretty easy. And you can see that it ran super quick. Like it took two minutes, because we already have it and-
So we are being honest with you guys, the first time usually takes longer.
So bear with us. But usually from then onwards, we also have some caching mechanism, so it gets faster and faster. But you are also able to check it out. Like it never works the first time.
It never works.
Don’t expect it to. But you are able to see where the problem… So one time, we were able to catch that the problem was connecting to the Artifactory because we weren’t using the right credentials.
Or later it was about connecting to the AKS cluster. But you are able to identify it really easily and really fast. And I’m just curious, if you are able to see here, here you can really tell that it’s collecting environment variables from the actual workflow. Right?
That’s very, very cool.
Now I really want to see like what is collected and what you can see in Artifactory for this run.
All right. So let's take it from here, because it's very cool. You can see that we also published the build in JFrog, so we can just click on it. See what we have here. So we're redirected to the build.
Yes. We have here the published module. We have the AKS latest. So we can see all of the layers that we're using here. We can see the environment variables that we mentioned before. We have all of the information that we have as part of this build. We saw the GitHub information here.
It’s still Batel [Ratnish 00:40:17].
It’s still Batel. Yeah. It’s still me. Oh no. So.
But I also want to see if there were any vulnerabilities or any problems.
Awesome. So let's go also to the package to see if there are any vulnerabilities. So you can see that it's scanned by Xray over here and we have different versions. [crosstalk 00:40:34] Yeah. So it's critical. Oh no, that's pretty bad. Let's see why it's so critical. You see here the Xray severity.
That is so important, that it gives you the severity. Because as you can see, there are 174 violations here and it’s a very small piece of code. Let’s be honest.
Yeah, it’s not that big.
I'm just saying that it's so good that you prioritized it. Because I assume the critical ones would be maybe 10 or 20 out of those, and it's much easier to resolve, and it makes sense. You know, instead of being overwhelmed by 174.
Yeah. It's really, really hard to debug everything. So let's just try some of them. So here, let's take musl, for example. We have a fixed version, which is really, really cool. Think about-
Batel is going to the fix right away, but first tell them about the problem.
All right. [crosstalk 00:41:22] Let me just choose one of the problems here. So the problem here is in libc: it has a floating point stack issue, and we have some issues that are related to some math functions and-
It says here that a specific version of this library has… it doesn't even matter what, but it says that through 1.1.23, a specific minor version, we know that there is this existing issue, right?
So obviously we need to change it, either to update it or actually to roll back to an older version, but to change it to some other version. But now we'll have to look and find which version actually works.
Yes. So let's see how to debug it. And it's pretty easy with Xray. So you see, you can have everything in front of your eyes. You have the fixed version, like I said before, and I know that it's so quickly-
[crosstalk 00:42:17] So it tells you right away.
It tells me right away. I don’t need to think too much. I just see the fixed version. I can change it in the code. If I’m adding the plugin, for example, for the VS code, I’m going to get the fixed version immediately. Whenever I’m just creating it and writing my code-
The minute I would do reference Musl.libc version 1.1.23, it would be like, no, no. After [crosstalk 00:42:40]
Yeah, exactly. You'll be able to see it right there on your left side and learn from it.
And we can also see the blast radius, which is amazing. You can see here where this actually impacts. So if we had several components referring to this library, we're going to see more and more of them here, just so we know.
So we'll know to update it. As we can see, this one is 1.1.20-r3 and we'll have to change it to the r5 to be okay.
Yes, exactly. So you can see here, we uploaded the image, right? The AKS image. Then we can see that it's under one of the layers. This is one of them. We can check the checksum and see the exact layer here. And here is the file that's actually vulnerable and having the issues, with version 1.1.20. And if it's impacting different paths, I can see all the paths and change them together, so I don't need to do the same work again and again. I just go through the impact paths and change them. And remember, if it's critical and I have the references, if I want to learn more about it, like you wanted, I can click on the reference and go into the CVSS, or Openwall in that case, and get all of the information that I need about the security issue.
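To make the fix concrete: if the image is based on Alpine (musl is Alpine's libc), the remediation could be as simple as pinning musl to the fixed package version in the Dockerfile. This is only a sketch; the exact base image and package version string depend on your setup and the Alpine release's repositories:

```dockerfile
# Hypothetical example: pin musl to the Xray-reported fixed version,
# assuming 1.1.20-r5 is available in this release's package repositories
FROM alpine:3.9
RUN apk add --no-cache musl=1.1.20-r5
```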
Yeah. And not only that, we're also scanning for open source licenses. As we said before, if we are using GPL, MIT, or are just not allowed to use Apache licenses, for example, I can easily block it through here. And again, I can configure an automatic action in case I find something: fail the build, stop the download, or anything I want, to prevent my developers from using this specific package, for example.
Exactly. And I think that what is amazing about this… Because we often just want to work, and we just want to use whatever packages we want. Obviously we don't want to do anything which is against the company policy, but we just don't want to have to check it. And this way it's very easy. It says, okay, this one is banned, but it refers you. Sometimes it could even automatically, behind the scenes, take your dependency, which was referring to the non-licensed version, and redirect it essentially to a dependency with the licensed version. Right? Because again, it's not about blocking the developers from using whatever they want. It's just about making sure that the company's not exposed. Companies grow, and when they become a big enterprise, they can be exposed to lawsuits and to problems if they use an unlicensed open source version. So in that case, you are just able to trade it for the one that you are licensed for, and the developer wouldn't even know.
It's super easy. It's amazing. And let's talk a bit about the Docker layers. I love this feature too. So let's say that we both upload a different version of our amazing AKS image here. I can see the exact layers that are running. So whenever I need to debug something, or it's running somewhere but having issues on another machine, again, it's very, very easy. I can see the exact command if I'm changing something. Or if I just, I don't know, remove the copy here or change an environment variable. Right? You can see here the environment variable: we're using NGINX version 1.15. I can easily see the version and debug. Another cool feature here is the Set Me Up, for whenever you want to run it locally, for your testing-
Reproduce whatever happened on the host, build host. And you have no idea what happened.
Yeah, another team or someone that just wants to reproduce it, or even just test it. You know, we released a new version. We fixed it. We think it's amazing. But someone needs to test it, so we can easily click on the copy here and download the version. And the last thing that I want to show you about traceability is the artifact information here. So again, we can filter by specific package types. Let's stay in Docker for now, because we love Docker. Our workshop is in Docker, and here is our Azure workshop. Let's take the local one. So Azure workshop, where is… Can you see that? Here. Okay. So here are my images, right? And I have the catalog here, and here is the latest. And you can see that I have the layers, like I said before, exactly what we saw on the packages.
And this is the manifest.json that also has the properties. So you don't need to do anything. It will pick up the information automatically and get the Docker label in that case, and the build number that we ran on our GitHub Action. We can see the SHA. We have a lot of information and it's super, super important. When I was in support, I saw so many cases where things were working perfectly on one machine and having issues on the second one. And every time, whenever you are building your software, sometimes something will fail. For sure.
Expect it. Yeah. But I think also the beauty of it is that it gives you visibility again into some of those processes which are running in the cloud. It's a managed service, so you don't have access to it.
So when you have your own build host machine, you can see exactly how it's configured and what it used. But often, people are really happy that they get the GitHub build machines, free of charge also. But then sometimes they're like, oh, maybe it failed due to some dependency or some parameter or some Docker version, whatever it's running, and now they can actually see what actually ran on that build host machine. And I think that it gives you visibility into this component, which is managed. Which is incredible.
Thank you. Yeah. So here we have all of the information. Again, we can go to the Xray information and we also have the large JSON file with all the information that we saw here before. So from there we can-
Yeah, that’s awesome.
So I think, going back just to the YAML, we know that we built and pushed the container image into Artifactory and then we deployed it to the cluster. So let's check our cluster on Azure and see if it's healthy and up and running.
Okay, just a second [crosstalk 00:48:59].
Let's make sure that it ran. Yeah, exactly. So it's deployed, and here's the deployment that we created. You see the namespace. Okay. I think we're good.
Everything looks kind of fine. And yes, I'm sure that also, by the way, as part of that Kubernetes deploy action, there is a ping to the health-
[IN S 00:49:18], an endpoint to make sure that the service is up and running. But let's just see that everything looks good. So here we're in the portal, and you have a dashboard of all your… these would be your top-used Azure services. And we'll go into Kubernetes, and we can see here the JFrogAKS, which happens to be the cluster that we are using. So again, this is something that you can do yourself as part of the workshop, and it's so easy, but here we already preconfigured it. Essentially, when you use the managed Kubernetes service, you get a fully managed, fully patched AKS Kubernetes environment.
So for the master, you get a high availability master, which is preconfigured as three to five instances, I think. And then you have here only a single node pool, because I wanted it to be as minimal as possible. So I have here one node pool running small node sizes. These are machines with two cores and four GB of RAM, if I'm not mistaken. And it scales from one to three. You choose your network policy; we use Azure CNI. You can also use kubenet. Azure CNI just gives every pod an actual IP from your VNet.
That’s very cool.
It is very cool, but you also need to plan for it, because it means you will be using many, many, many more IP addresses. So you are more likely to get address exhaustion. Though obviously, it's much easier to reason about: it's a fixed, real IP address in your virtual network, not one that the bridge gives every pod. So it has advantages, but it's a consideration to be taken. And now let's check it out. So let's look at the monitor.
Just to say before I see it, you updated Kubernetes version. That’s very cool.
Finally. Yeah, so one of those things again… A lot of people are asking me why we are updating it so fast. Obviously, we are just following the actual Kubernetes version, but I'm getting those emails about having to update it, which can be difficult. So this one is the best, the newest version possible. But I think it gets easier to do those updates. When you have a good setup, it becomes something that is done much more easily as time goes by. And we also had AAD enabled here, and that just means you can also log in, and this is part of the credentials that we passed there. We didn't really invest time in this, but you are also able to authenticate to the actual Kubernetes cluster using your AD identity. So even as a developer, you can give someone permissions to use that, but also for a service principal, which can have other permissions on other services. So really cool.
Oh, that’s very cool. So whenever we want to connect, we can just click here and take the credentials from the site. Right? And have that information.
Yeah, right. It's in the CLI, your CLI, anywhere. If you have a Mac, you can also do it on your laptop.
That’s super cool. And then let’s talk a bit about monitoring. I love all of these graphs here.
So we didn't break anything. That's good news. We can still see that the CPU average is still very low, but it's going. It's like, okay, it's stable. And we can see also the networking. So requests are still going back and forth. So we are in good shape. Right?
Yeah. That looks great. [crosstalk 00:53:03] automatically, right? Whenever we [crosstalk 00:53:03].
Exactly. So we can see here the node count. Now it shows as two, but when I configured it, it was one to five. So with the autoscaler, essentially when the CPU or the memory utilization goes above a certain threshold, it will just spin up a new node for us. And this is a fully managed cluster after all. There are advantages to that.
That’s amazing. That’s so amazing.
So I’m just so happy. So literally just with that YAML we were able to take the code and essentially on every push of it… So we reran the CI/CD, the pipeline.
The workflow, we triggered it manually. But on every push of a new version to the master, what would happen, that would take the new code that we just maybe changed something in the service. Right?
In the service.
One of the services running in our Kubernetes cluster. That would actually build it, docker build, push it into Artifactory, get it scanned with all of its parameters. And then also automatically, in the next step, we will go back to our Artifactory, grab the published image from there and deploy it to our Kubernetes cluster. And all of that was-
Yeah, it took us, I think, 43 lines. Something like that? Super easy. [crosstalk 00:54:21] automatically. It's amazing.
And also the fact that it could reference the freshly published image in Artifactory. I love it.
So easy. I love it. Yeah. All right. So I think we’re done here for today.
Yeah. And you guys, also go check it out. There is a link here, which Batel worked really hard on, to the actual workshop. You can use the free license from JFrog and also the Azure free subscription. Right?
And just try and test it out.
Yeah. So whenever you’re testing it, you will get the full… You can run it locally. So you’ll have all of the information, all of the instruction. You can see here what we discussed today. You’ll have some prerequisites and some information about what you would like to run, and we hope you’re going to like it. And I think it’s-
We’re sure you’re going to like it. Come on.
We are sure. Sorry.
We enjoy doing it ourselves.
We enjoy it a lot.
So, if you struggle with anything or encounter any problems… we trust you guys, we're sure you're going to be okay. But just in case, we are here. Hit us even now with some questions or send them our way later. We are available.
Yep. And thank you so much for today.
Thank you. I enjoyed it so much, Batel. Thank you.
I know. Thank you so, so much, Shiri. Have a great day.