Go Cloud Native with JFrog Artifactory – 15-minute overview

See how Artifactory helps you bring together your cloud-native tools and processes in a single system, so you can serve production-ready builds and their necessary components to dynamic environments from a trusted, always-available source.

Learn how Artifactory:

  • Acts as your Kubernetes registry
  • Hosts and manages your IaC/orchestration files
  • Allows you to build and deploy anywhere
  • Keeps your software supply chain (SSC) secure by scanning your containers and packages for vulnerabilities
  • Creates the foundation for predictable, frequent, high-impact releases with minimal effort

Learn more here!

Video transcript

Speaker 1:
More companies these days are exploring the cloud. We see companies of every size moving more and more to the cloud for many different reasons. We actually have other talks on this, just so you’re aware, if you want to go to one of our other pages on YouTube. But one of the key concepts here is, of course, reducing costs. The other is operational: how do you run more competitively in terms of speed and efficiency? And then there’s the ability to reach the market faster by creating smaller services, releasing more frequently, and still doing all the things you do.
And cloud-native is usually the approach most companies are heading towards these days. We see more and more companies retreating from the idea of dedicated data centers and moving to public clouds, taking advantage of the services that are out there these days such as AKS, EKS, or GKE. There are so many different providers out there for hosting, OpenShift included.
But the ideas here are simple. Cloud-native is the approach people are using now and have been using for years, and more and more legacy companies are heading down this path because of its effectiveness, things like cost and efficiency. First of all, what is cloud-native? Why is this such a big deal? And why are so many people asking about it these days and heading down this path? Well, just so you know, we are actually governing board members of the Cloud Native Computing Foundation (CNCF), so we have a real say in this. If we’re going to go with the actual definition, let’s do that: cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
Yes, that’s a very broad definition, but really what it means is breaking down the things that you do into easier, consumable, and manageable pieces that allow you to scale more rapidly, grow more rapidly, and ship more features, quicker. There are tons of reasons why it’s out there, reduction of TCO, you name it. We’re going to talk about the technology side of this and why Artifactory, and in fact the entire JFrog platform, is one of the premier choices for building and deploying your cloud-native approach. We’ll concentrate on things like containers, right? This is what most people think of when they think about cloud-native, but it’s much more than that, it’s a little more complex. Because you’ve got to remember Docker, of course: the containers that you host your application in allow you to control the execution of the things you create in an easily manageable environment.
But, of course, you’re going to need to scale. There was Docker Swarm in the early days, but most people go with Kubernetes. And one of the key factors behind Kubernetes is, of course, the instruction set to orchestrate all these containers, which is Helm. Let’s take a step back and look at Docker, right? When we talk about Docker, mentally you think, “Okay, it’s a place I host my applications,” and that’s true, because that’s what a container is. A container is a way for you to do something in a controlled environment, its own little ecosystem, but it’s made up of different things. Of course, there’s the application you’re going to host, right? This is the thing you’ve built to execute inside of the Docker container itself.
But, of course, that’s going to rely on a runtime. And the runtime is all the execution bits that you might need. Are you running an npm application? A Java app? .NET? Whatever you’re running, you’re going to need a runtime to support this application. And you’re also going to have an OS behind that, right? Is it going to be Windows-based? Debian-based? Something else? The thing is, there are a lot of moving parts and a lot of complexity here, and that’s one of the key factors of what we offer as a solution. We have many different talks on how to build applications using Artifactory and the cloud-native approach, but today I’m going to focus on some of the important details on why this matters.
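The layering the speaker describes, OS, runtime, and application, can be sketched as a minimal Dockerfile. The base image, file names, and commands here are illustrative assumptions, not taken from the video:

```dockerfile
# OS + runtime layer: a Debian-based Node.js base image (illustrative)
FROM node:18-slim

# Application layer: the code you built
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# The container controls how your application executes
CMD ["node", "server.js"]
```

Each instruction produces an image layer, which is exactly why a registry that only shows opaque layers (discussed later in the talk) makes debugging hard.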
And the thing is, we start off with a container, but the container is just the first step. Because as I stated before, say you’re doing a web service and you’re hosting it in one of the many Kubernetes providers for scalability: you’re going to have Docker images that perform specific tasks. It might be your web front end, it might be security, it might be a back end, data processing, an API service. But when you deploy these, you need something like Helm, right? This is the instruction set for how you’re going to deploy these containers, how they’re going to scale, the resiliency models, and things like that. And this allows you to have high productivity and to replace each of these individual components without disrupting the rest, if you design it properly.
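Those deployment knobs, image, scale, resiliency, live in a Helm chart’s values file. A minimal sketch (the registry and chart names are hypothetical) that pins an exact image tag might look like:

```yaml
# values.yaml (hypothetical) -- pin an exact, known image tag
image:
  repository: myregistry.example.com/webapp
  tag: "114"        # a traceable version, never "latest"
replicaCount: 3     # scaling and resiliency knobs live here too
```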
But the other thing, too, is that when we look at what we offer at JFrog as our platform, we are an end-to-end DevSecOps tool, and at its heart is Artifactory. Using Artifactory, you can maintain and manage the third-party transitive dependencies that you use to build your software. This could be the base-level container images, the libraries you use for your software, the actual OS components. All the pieces that make up that container can be maintained and monitored inside of Artifactory, those third-party dependencies and sources that make up 85 to 90% of what you ship, and that includes those containers.
But on top of that, it’s also the place to store the applications that you build, the actual container images, the actual Helm charts. And you also rely on things like infrastructure as code. Maybe you’re using Puppet, Chef, or Terraform; you can store those too, because Artifactory supports over 30 package types natively, out of the box.
And then there’s the security side. That’s another thing you should always, always, always be thinking of, and that’s our JFrog Xray product. We have plenty of talks on this. We do everything from shift left to shift right. We do deep-level container security. We provide security at the developer level, through the CI process, all the way down to your CD process. In the future, we’ll actually monitor deployments of the things that you’re running in Kubernetes.
To the right of that, we have Distribution. This is a way to deploy your cloud-native applications into a single cloud provider, or multiple, or a hybrid; we don’t care. Our approach is simple: we don’t care where you do this, we provide the same toolset across the board. So if you’re exploring different cloud providers, or multiple ones, we can support you. The Distribution component allows you to publish those artifacts in things called Release Bundles, which are digitally signed, immutable releases that contain, say, your Helm chart and your Docker images, and our Xray product can even scan them one last time before you put them into production. And when you put them into Distribution and publish them, you publish to things called Edge Nodes. Edge Nodes are lightweight, immutable versions of Artifactory that, say, your Kubernetes control plane can talk to, because those Edge Nodes also act as a native Docker registry to pull from.
The thing is that we have a comprehensive, end-to-end cloud solution that allows you to do that. You can use your own CI tooling, because we are universal, or you can use our Pipelines product, which is a CI and CD orchestration tool. Pipelines itself is actually a cloud-native application that uses Kubernetes to do the runtime executions of the builds you produce. Now, understanding that we have all these pieces here, we have more videos with deep dives into each one of these components; this is just a short video to show you what we offer.
So let’s just take a look at some of the benefits that you get by using Artifactory, Xray, and the rest of our products. First of all, you should understand, our solution is global. As you can see here, I’ve got multiple versions of Artifactory running around the globe, because we provide developer consistency no matter where you are. So developers in India can use the same set of binaries as developers in Prague and developers in, say, Silicon Valley. I have Edge Nodes here: one in AWS, one in GCP, and one in Azure. I can deploy my web service to wherever it needs to be, regardless of the cloud provider I’ve chosen. I can even host this in my own data center if I wanted to.
But the thing we’re going to concentrate on today is, of course, the containers, right? The containers and the Helm charts. So let’s look at an example of what you get by utilizing Artifactory and why it matters. As I mentioned before, you can emulate your software development life cycle. Just so you know, the application I’ve chosen in this case is being built using Jenkins. It’s got a Gradle backend and a Node front end, and I’m creating a base-level Docker image that I’m storing in Artifactory. I’m combining them all together into a Docker image that hosts my npm and Gradle apps. And then I’m going to use the metadata that’s inside of Artifactory, because one of the key factors of Artifactory is, number one, the way we store the binaries.
But number two is its metadata approach, right? We actually store the binaries by their checksum (SHA) and reference them via metadata. There are whole stories based on that; it’s very in-depth. But that metadata is the key factor here too: you can use that information to create laser-focused deployments of your cloud-native application. You should never use the tag latest when pulling a Docker image; that’s a terrible idea. It should always be the version that you know is going to be hosted as your application, right? You should always know the actual image that you are releasing. It makes remediation that much easier. Using that information to create the Helm chart allows me to create very focused deployments of what I’m doing, and I don’t have to question which application I’m actually using.
So let’s take a look at this step here. This step happens to be the combination step. And just to show you: in the DSL that I’m using to build this application, I’m using the Artifactory download service, which is part of, in this case, our Artifactory plugin for Jenkins. I’ve created a function that uses AQL, the Artifactory Query Language, to query Artifactory and find me the latest version of a release that I want to host inside of this Docker image. This means I know exactly what version I’m going to host in there, and I can query it based on metadata stored in Artifactory.
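An AQL query of the shape described, find the most recent artifact matching a metadata property, might look like the following sketch. The repository name and property key are assumptions for illustration:

```
items.find({
  "repo": "docker-prod-local",
  "@build.status": {"$eq": "released"}
}).include("name", "path", "modified")
  .sort({"$desc": ["modified"]})
  .limit(1)
```

Sorting descending on `modified` and limiting to one result returns exactly the newest matching artifact, so the build never has to guess at “latest.”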
So when I look at this, number one, I mentioned that we can emulate the SDLC. There’s a value associated with all builds called status. Status is an arbitrary value, and using our promotion API, we can promote a binary from dev to QA to staging to production. You can do the same thing with Docker images. I can tell you that, first of all, this has been released; I know it’s been released. I also know it contains critical issues. But let’s look at this for a second, because debugging cloud-native applications can be difficult. In most cases, when you have a Docker registry, you get this: just a bunch of obfuscated image layers. You do a docker pull, a docker run, and you hope it works, right, because you don’t know the contents.
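The build promotion mentioned above is exposed as a REST endpoint in Artifactory; a sketch of the call follows, with the status value and repository names as illustrative assumptions:

```
POST /api/build/promote/{buildName}/{buildNumber}
{
  "status": "staged",
  "sourceRepo": "docker-dev-local",
  "targetRepo": "docker-staging-local"
}
```

The promotion both records the new status on the build and moves (or copies) its artifacts to the target repository, which is what lets a single status property model dev → QA → staging → production.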
Well, in here, using our best practices, I can actually show you that inside of this Docker image I’m running this version of my Node front end and this version of my Java backend. In addition, if you are running into problems with, say, your cloud-native deployments, how do you know what version of an application has changed between Docker tags, right? Tag 114, maybe the previous version was 93. Well, in here I can go ahead and do a diff using Artifactory to tell me, “Hey” … Actually, I’m going to use 92 in this case. What changed between number 114 and number 92, because something seems different? Well, it looks like the Node front end stayed the same. We haven’t changed the UI, but the Java backend has changed. In seconds, I had remediation between two versions of an application hosted in a cloud-native approach.
But I also understand all the pieces behind it, all the environmental and system information. We also provide deep-level scanning of the binaries that are inside. Now, this is a terrible container, I’ve got to tell you that. This one has over 2,000 violations; it’s one of my test ones. But I’m going to focus on one issue. The thing you have to know is that when you are utilizing Artifactory, we are finding every level of detail of every issue inside the container. We do deep-level scanning of every piece. So if I show you, say, a critical issue that I found right here … By the way, just so you know, JFrog is a CVE Numbering Authority, a CNA, so we produce CVEs.
But if you look here, you can see that we have actually found a critical issue. And in this case, by the way, our research team says it’s actually medium, because we also provide contextual analysis. But like I said, with Docker containers you need to understand everything on this side. So in this case, the issue I’m running into happens to be a jar in a jar of a layer of an image of a build. By the way, this is FasterXML Jackson, an XML parser used with Spring Boot. We’re detecting it inside the jar of my application. We provide you with all the information. In this case, this is a terrible CVE, but our research team has gone in and provided more research data so you can do better remediation and get a quicker response.
Now, with our Xray product, and you should go see the videos on that, you can actually detect this before it even gets in. Just be aware: if you detect this in production, it’s 100 times more expensive than if your developers had fixed it in the first place, at the developer level. In addition, we also show you all the licensing files of everything: the OS, the runtime, and the application. We also provide information on things like operational risk, right? How old are the binaries I’m using to build my application? We also help you understand all the components inside of your Docker apps. So if you look here, I can expose to you all the layers and all the binaries that are inside those Docker image layers. I can also create reports if I need to, such as violation, licensing, security, and operational risk, as PDF, CSV, or JSON.
You might be in a regulated industry. We can provide you with software bills of materials (SBOMs). You hear a lot about that these days, right? We have plenty of talks on this. The idea is putting together a list of ingredients of what makes up the binary that you produce, in this case a Docker image. And we support the SPDX and CycloneDX formats. But the other thing, too, is that if you look here, we can hook it up to your Jira and show you Jira issues. You can also produce Jira tickets for quicker remediation. You can do full diffs between images, right, so you can look at the artifacts, the dependencies, and the environmental system information and see if things have changed. You can also follow the entire release cycle by looking here. But that’s great. That’s a lot of information I just threw at you.
But let’s do this, though. Debugging cloud-native applications is hard. So we went from the Docker image … Say you want to find out something’s wrong here. One of the best parts of the way we capture things in Artifactory, and a lot of people dig this, is the fact that I can go in here and trace it from the Docker image down to the tar.gz that I’m actually hosting.
This happens to be an npm app, right? You can see it’s a Node app. I can see all the information about it: when it was produced, how long ago, how many downloads. I can see all the information on what its dependencies are. I can see what permissions it has. It has its own set of Xray data that I can look at to see if there are any security issues I should be worried about. I can add more metadata to make it more applicable; I can say, “Only pull things with a specific property,” and I can add as much as I want here. I can follow any binary at any time and say, “Hey, I want to be notified if anybody utilizes this binary,” so I can say, “Hey, don’t use this one, go use another one.”
But here’s the key factor when you’re building these cloud-native applications and you run into something. Say you have an issue, a governance problem, a security exposure, and suddenly you’re worried: how far back does this go? How long would that take you to find? Well, in here I can say, “First of all, here’s the build that actually produced this tar.gz, and here’s every single Docker image that’s utilized it.” I can tell you exactly how far down the rabbit hole it goes. And I can go right from here and follow this build back to its creation, show you that here’s that tar.gz, and find out all of its own information.
But let’s talk about the other side of this. As I said before, Helm charts. So when we look at Helm charts here, I have another DSL that I’m using; I know it’s a little messy looking, by the way. But the thing is, if you’re familiar with shell and you’re doing things dynamically, you know that sed is a way for you to rewrite files, right? If you’re not used to Helm and Kubernetes, please go do an in-depth look. You have your chart and then you have values, right? Values are the Mad Libs version of Kubernetes; you put these values in.
Well, in this case, I want the values for my charts, which I have here, and I’m going to use the sed command to do it. I want to query and find the latest versions of the application I’m going to be utilizing inside of this Helm chart to deploy my Kubernetes service. And once again, we go to our old friend AQL. We’re going to query, in this case, my Docker registry, my production one, and get me the latest one that’s been approved, based on metadata. I want to make sure I have the latest one.
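The sed step described above can be sketched in shell. The placeholder token, file contents, and tag are made up for illustration, and in a real pipeline the tag would come back from the AQL query rather than being hard-coded:

```shell
# Write a values file with a placeholder tag (normally it lives in the chart)
cat > values.yaml <<'EOF'
image:
  repository: myregistry.example.com/webapp
  tag: "__IMAGE_TAG__"
EOF

# In a real pipeline, TAG would be resolved via an AQL query against Artifactory
TAG="114"

# Rewrite the placeholder in place, exactly as the speaker describes
sed -i "s/__IMAGE_TAG__/${TAG}/" values.yaml

cat values.yaml
```

The chart that gets deployed then names one exact, traceable image tag instead of relying on whatever “latest” happens to be.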
And I even have that ability inside of Jenkins, to dynamically create that. So let’s take a look, right? I also stored the Helm chart here. And if we go in, you can see it’s actually got some critical things going on. From the Helm chart I can go in, and you can see here’s the tar.gz that I produced. Now, in this case, there’s only one artifact, right? There are some transitive dependencies that we have, and I’ll show you: you can see here that there’s a manifest.json that came from my Docker app. But let’s go in and look at the Helm chart. When I built this and got that information, I can go in here and say, “Well, here’s the Helm chart that I’m going to deploy.”
If you look at the chart’s values file, I can show you right from here, view the source, that the version of the actual Docker app that I’m deploying is right here. I can show you that I’ve pulled these sources dynamically, so I know exactly what tag I’m going to use when I deploy this application. And the thing is, by doing this, you know exactly what you’re doing and what you’re deploying. You take the stress off yourself, so everything else can focus on innovation, not implementation. It also allows you to automate this, so you can deploy faster and more effectively.
If you want more information about the rest of the platform and deeper dives into it, please contact us. We also have our JFrog YouTube channel, and we have plenty of material on our website. But the idea here is faster, more native deployments, even down to things like utilizing our JFrog Pipelines product, which I can show you as part of this too. Like I said, we have integrations into many, many other tools, including things such as Kubernetes; I think I actually might have some here. Like I said, you can go ahead and utilize our pieces for that. Well, everybody, all I wanted to say is thank you very much. Have a wonderful day, be safe, be well, and come to us to discuss more about cloud native.

Trusted Releases Built For Speed