DevOps Plumbing: Red Hat OpenShift CI/CD Pipelines with Artifactory and Xray

Phillip Lamb
AppDev Solutions Architect, Global Partners and Alliances

In this session, Jeff Fry of JFrog and Phillip Lamb of Red Hat will demonstrate the ease of supporting DevOps with a fully fledged pipeline in the cloud: source control, CI server, artifact repository, security vulnerability and license compliance scanning, Docker registry, Helm repository… all the way to runtime on OpenShift, plus tracing and monitoring tools.

We mean EVERYTHING! We’ll use K.I.S.S. principles (keep it simple) applied to a bunch of SaaS tools to show how quickly you can pull it all together.

Video Transcript

Good morning, good afternoon, or good evening, depending on where you are. My name is Phil Lamb, and I am the DevOps senior solutions architect for Red Hat’s global partners and alliances group. I’m based out of the Dallas Fort Worth area here in Texas. I came into this role after more than 15 years as a professional developer, and I’m passionate about DevOps, agile, automation, basically everything that enables me to be as lazy as possible while still getting good code shipped.

 I’m here with you today to talk about some of the basics of producing software using DevOps, and how, with Red Hat’s OpenShift Pipelines and JFrog Artifactory, you too can fix the leaks in your development pipeline.

 So let’s get started.

 I’d like to start with a description of the modern software development cycle, how we produce software today.

 There are three main stages.

 The first step is sourcing.

 This is where most of our time is spent as developers: we search for and utilize reusable components so we can avoid writing everything from scratch.

 These can be frameworks, toolsets, really anything available through the open source community.

 We find them in different registries: npm, Docker Hub, Maven Central, etc.

 And then we reuse them.

 But those pieces of our code weren’t written by us.

 So it brings concerns like: how do we manage them correctly in terms of versions and metadata? What was used? How do we know it’s good?

 And obviously, there’s third party component governance and compliance.

 What does our organization allow us to use?

 Are there potential vulnerabilities with some of these packages?

 That’s one of the favorite vectors these days for cyber attacks on organizations.

 Next, we have development.

 A few decades ago, this would have been the entirety of our cycle diagram, back when people wrote most, if not all of their code from scratch.

 But more and more, we’re writing less and less code.

 Most of what we’re writing these days is integration between separate components, what Tyler Jewell likes to refer to as stitch ops.

 We still know how to write code, how to build code, and how to test code.

 It’s still the heart of software production.

 But the amount of code we’re writing is shrinking.

 The third step, which is becoming increasingly important, is distribution.

 This is what you’re probably thinking about when you hear the terms continuous delivery and continuous deployment.

 But that’s not the only distribution target.

 How about distributing what you’ve written to other teams, or as downloadable software, and then, of course, the relative newcomer to dev: edge and IoT distribution.

 That’s the big picture.

 Let’s see how that relates to DevOps.

 First, let’s establish some concepts about DevOps by way of my marriage.

 I have found, over the years that I’ve been married to my wife, that collaboration and communication must exist in order for two people to live together harmoniously.

 The same goes for DevOps.

 Devs and operations folks have traditionally been at loggerheads due to perceived differences in who to blame, who to praise, or who gets the lion’s share of the budget.

 DevOps is all about blowing up those negative attitudes and deploying tooling that enables both teams to work together with relatively little conflict, helping to ensure the collaboration and communication that is so important to a highly functioning team.

 Really, DevOps is not one tool, but rather a culture.

 The best indicator of a functioning DevOps process is very simple to measure.

 It should bring together Dev and Ops so that they can work together and move through the software dev lifecycle as quickly as possible, with the highest quality at the end.

 Whether you’re new to DevOps practices or have been implementing them for years, you’ve probably heard of CI/CD: continuous integration and continuous delivery.

 It’s one of the prominent practices in the DevOps movement, and focuses on frequently delivering applications to customers by introducing automation into the various stages of application development.

 In practice, CI/CD introduces ongoing automation and continuous monitoring throughout the lifecycle of apps, from the integration and testing phases to delivery and deployment.

 Taken together, these connected practices are often referred to as a CI/CD pipeline, and they are supported by development and operations teams working together in an agile way with either a DevOps or Site Reliability Engineering (SRE) approach.

 So now for a couple of definitions.

 Continuous Integration is an automation process for developers.

 Successful CI means new code changes to an app are regularly built, tested, and merged to a shared repository.

 It’s a solution to the problem of having too many branches of an app in development at once that invariably conflict with each other.

 Continuous delivery refers to automating the release of changes to staging and pre-production environments, which can then, with the approval of operations teams or release managers, be deployed to production.

 It’s an answer to the problem of poor visibility and communication between dev and business teams, and it automates the manual steps that slow down application delivery.

 The purpose of continuous delivery is to ensure that it takes minimal effort to deploy new code.

 Continuous deployment, which is the other possible CD, is similar to continuous delivery.

 However, changes are deployed into production automatically without manual intervention.

 So that sounds great, but you’re probably still seeing a number of tools interjected throughout those stages.

 So let’s look at how OpenShift and JFrog are working to minimize that.

 First, let’s talk OpenShift.

 OpenShift is Kubernetes, but built with the security, stability, and support that every growing enterprise needs.

 It is a tool that many use in their application development and application builds.

 What we’ve done with OpenShift, especially with the newest version, OpenShift 4, is build out this massive Operator Framework.

 The Operator Framework effectively encapsulates engineering knowledge in an easily deployable, reliable, and repeatable final product.

 What operators allow us to do is ensure that we’re highly integrated with partners such as JFrog, as well as with other processes and open source projects.

 So that you can start to move through a CI/CD process all within OpenShift.

 OpenShift might be the base layer, the container technology you’re going to leverage as you move through your application development.

 But we’ve done everything we can with this Operator Framework to make sure we’re highly integrated with the other tools you may use.

 Let’s start with three high level ones.

 Although, of course, there are many more, and JFrog is one of them, which we’ll review here in just a few minutes.

 But let’s start with OpenShift Builds: we’re talking about a Kubernetes-native way to build container images on OpenShift such that those images are then portable across any Kubernetes distribution.

 So it ensures that you have that portability, and that you can extend your build strategies to other Kubernetes builds, or maybe your own custom builds, as you move forward.

 It also supports multiple build strategies.

 The next one is OpenShift Pipelines. It’s based on the open source project Tekton (that’s Tekton, with a k).

 It’s a Kubernetes-native CI/CD pipeline that we’ve integrated into the product so that you can push your development through the pipeline and make sure you’re ready to go with OpenShift.

 Last is OpenShift GitOps. This was recently released, and what we’re doing with GitOps is giving you a declarative way to continuously build and deliver on the features that you have in play.

 So it is tightly integrated with other features such as OpenShift Pipelines, and it enables you to build with Git as your single source of truth and push through your pipelines for a faster way to get your end product to production.

 Now, for a bit more detailed look at OpenShift Pipelines.

 It’s packaged as an operator within OpenShift.

 So you can pop over to the OperatorHub in OpenShift, download it for free, and start using it.

 It’s a declarative CI/CD approach built upon the aforementioned open source project Tekton, which was built for Kubernetes.

 It’s a cloud native pipeline that takes advantage of Kubernetes execution, operational models, and concepts, and it allows you to scale on demand as you have multiple pipelines running, with each of those pipelines individually isolated within containers.

 This helps tremendously with repeatability.

 And it also gives you some assurance that what you’re building won’t be affected by other builds.

 I’m sure you are all familiar with the “It works on my machine” problem in testing.

 This helps ensure that you don’t run into “It works on my build server”, because of assumptions around what software and tools were available and configured.

 It also has security built in, with Kubernetes RBAC (role-based access control) and other models that make sure you’re working consistently across pipelines and workloads.

 It also gives you the flexibility to work on Kubernetes and support your exact requirements as you’re building out a pipeline for development.

 Let’s now bring everything together and talk about the entire software journey all the way through to production.

 So everything starts with the developer.

 And the first step is sourcing.

 Sourcing is perusing the internet, finding the dependencies the developer wants to use, and then declaring them in whatever build tools or dependency managers that developer uses.

 It’s then declared as a dependency in, for example, a Go source file, or added as a FROM directive in a Dockerfile.
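To make that concrete, a dependency declaration might look something like this. The base image and package setup here are purely illustrative, not the demo’s actual code:

```dockerfile
# Hypothetical Dockerfile: the base image is pulled from a registry
# (proxied through Artifactory in the setup described in this talk).
FROM registry.access.redhat.com/ubi8/nodejs-16

WORKDIR /app

# Application dependencies are declared in package.json and resolved
# at build time from whatever npm registry is configured.
COPY package*.json ./
RUN npm install

COPY . .
CMD ["npm", "start"]
```

Every FROM line and every entry in package.json is a sourced dependency that the build has to resolve from somewhere, which is where a private registry comes in.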

 And once they try to build locally, the first thing that happens is that the build tries to resolve those dependencies.

 Organizations can set up their own private repository or registry managers from which they get all their sources.

 And in the case of JFrog Artifactory, it knows how to reach the remote registries, repositories, or sources of the dependencies, bring them in, and cache them once they’re verified as secure and compliant by JFrog Xray. Xray scans and analyzes everything that goes into Artifactory, using information sources like JFrog’s internal vulnerability and license database, as well as databases from across the internet, including proprietary ones like VulnDB.

 Once the dev team writes all the dependencies and all the integration glue around them, the code is ready to be checked in to source control.

 The commit is the next step.

 And this is where the CI server kicks in.

 In the example I’ll use today, we’re going to be using OpenShift Pipelines, and the pipelines are going to run the exact same build the developer ran locally. There’s one addition to the pipeline that we’re going to use, and that is the JFrog CLI.

 The CLI helps when we need to integrate with a CI server that does not currently have out-of-the-box native integration with JFrog.

 The next step is resolving all the dependencies from JFrog Artifactory; they’ll all be successfully resolved because they’re already cached by Artifactory.

 And then, after the build is successful, the CI server deploys what it built. This includes the module, but also all the metadata about how the artifact was created.

 This is where the JFrog CLI comes into play. It first collects all the info about the build: which dependencies were used, which environment variables were active, which artifacts were produced, etc. It then deploys that information with the artifact into Artifactory as metadata that we can rely on when we make decisions about promotion.
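As a rough sketch, collecting and publishing that build-info with the JFrog CLI looks something like this. This uses the older `jfrog rt` command syntax, and the build name and number are placeholders:

```shell
# Collect the active environment variables into the build-info
jfrog rt build-collect-env my-npm-build 42

# Attach the VCS revision and git details from the local checkout
jfrog rt build-add-git my-npm-build 42

# Publish the accumulated build-info to Artifactory alongside the artifacts
jfrog rt build-publish my-npm-build 42
```

Once published, that build-info becomes the metadata record Artifactory shows for the build, which is what promotion decisions are based on.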

 And this is where the promotion process starts.

 The promotion process takes artifacts, tests them, and eventually moves them through registries or repositories in Artifactory, from one repo to another.

 This is done by testing, then contributing more and more metadata, and then deciding based on the metadata, whether the new build should be promoted or not.
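A promotion step along these lines might be scripted with the JFrog CLI like so; the repository and build names are hypothetical, and exact flags vary by CLI version:

```shell
# Promote build 42 from its staging repo to a release repo once tests pass.
# --status attaches a metadata marker to the build-info record.
jfrog rt build-promote --status=released my-npm-build 42 npm-release-local
```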

 At the end of the day, our goal really is to have our code make it into production.

 Different use cases have different distributions.

 For example, JFrog Distribution pushes to JFrog Edge nodes, which are distribution targets for smaller edge devices.

 Or, as in today’s example, we are going to deploy to a production cluster with a container runtime, which in our example is OpenShift.

 That’s the big picture for you.

 In the previous slide, I mentioned JFrog’s ability to scan for vulnerabilities.

 Well, today Red Hat’s customers are escalating issues regarding discrepancies in the vulnerability risks reported for Red Hat containers and packages by the customers’ vulnerability scanning tools.

 For example, a customer builds a container with a base image of RHEL 7; they notice that RHEL 7 has a Health Index of A.

 They then use Xray to scan their image, and the scanning tool indicates the image has, for example, critical or high vulnerabilities.

 Panic ensues, and Red Hat support gets another ticket.

 To help solve these challenges, our security segment has created a vulnerability scanner certification. It ensures that our partner scanning tools consume the Red Hat security OVAL v2 data feed, correctly identify files installed by RPMs, and determine which product installed each RPM in order to determine the correct severity, state, and fix, since a CVE can affect different products in different ways.

 And finally, it ensures they display Red Hat data in their UI and scan reports, including Red Hat’s four-point impact scale, as well as Red Hat security URLs.

 We’ve worked with JFrog on this and they are now one of the first of our partners to receive this vulnerability scanning certification.

 In the interest of time, we needed to abbreviate our usual demo, so you won’t be seeing anything live today.

 But if you’d like to see something live, please reach out to JFrog and request a demo.

 So let’s talk about what our example project is going to be doing.

 We’re going to create an npm application. We’ll do an npm install, which sources everything from Artifactory, and then an npm publish, which packages the app for deployment.

 Next, we’ll do a Docker build and push to a registry, which in this case is on Artifactory.

 Then, with the build info, we’ll scan it with Xray, and eventually deploy it on OpenShift.

 So here’s what the pipeline itself looks like.

 You can see how like any proper build pipeline, it’s built on stages, which we’ve just discussed.

 We have a git clone, and then we configure the JFrog CLI (rt config), then configure npm to get dependencies from Artifactory.

 And this is where we can guarantee they’re scanned and free of known vulnerabilities, etc.

 Next we npm install, then npm publish, which drops the published package into Artifactory.

 Finally, we have build publish, where we publish the build metadata to Artifactory as well.
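Put together, a Tekton pipeline along these lines might be sketched as follows. This is a simplified, hypothetical definition, not the demo’s actual YAML; the task names and workspace wiring are illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: npm-artifactory-pipeline   # illustrative name
spec:
  workspaces:
    - name: source                 # shared checkout between tasks
  tasks:
    - name: git-clone
      taskRef:
        name: git-clone            # catalog task that clones the repo
      workspaces:
        - name: output
          workspace: source
    - name: npm-install
      runAfter: [git-clone]
      taskRef:
        name: jfrog-npm-install    # hypothetical task wrapping the JFrog CLI
      workspaces:
        - name: source
          workspace: source
    - name: npm-publish
      runAfter: [npm-install]
      taskRef:
        name: jfrog-npm-publish    # hypothetical task
      workspaces:
        - name: source
          workspace: source
```

Each task runs in its own pod, which is what gives the isolation and repeatability discussed earlier.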

 We mentioned the JFrog CLI previously; everything related to your packages is managed through the JFrog CLI.

 It can be used with any tool, not just with OpenShift Pipelines, and it manages JFrog Artifactory, JFrog Xray, and other JFrog tools.

 It does a lot, but what it will do today in our example is wrap the build tool, allowing us to issue npm commands through the JFrog CLI.

 This is how we can guarantee that the JFrog CLI knows about everything going on with our build: it will collect all the necessary info, which artifacts were uploaded or downloaded, which dependencies were used, and then finally it’ll push them into Artifactory.
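As a sketch, wrapping npm with the JFrog CLI might look like this. This uses the older `jfrog rt` syntax; the server URL, repository names, and build coordinates are placeholders, and exact flags vary by CLI version:

```shell
# Point the CLI at an Artifactory instance (URL and credentials are placeholders)
jfrog rt config --url=https://artifactory.example.com/artifactory \
    --user=admin --interactive=false

# Run npm through the CLI so every resolved dependency is recorded
# in the build-info; "npm-remote" is an illustrative virtual/remote repo
jfrog rt npm-install npm-remote --build-name=my-npm-build --build-number=42

# Publish the packaged app to a local npm repo, again recording build-info
jfrog rt npm-publish npm-local --build-name=my-npm-build --build-number=42
```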

 Most of the usage of this tool is exactly that.

 You configure it with automation scripts for CI/CD, and then you get insight into what’s going on inside those sorts of closed boxes that run as a service. It effectively becomes your own personal spy, collecting valuable information for you and then deploying that information together with the artifacts, all in one convenient package.

 Let’s take a look at what the walkthrough environment looks like.

 In this example, our cluster is deployed onto GCP.

 We also have OpenShift Pipelines, which was deployed through the operator.

 And we also have an Artifactory installation, which was installed using the Artifactory operator.

 When we apply the pipeline, we’re going to build the NPM app, build a Docker image, and then deploy that Docker image into the same cluster.

 It’ll show up as a deployment and as a pod.

 And then, when we expose the deployment via a route, we can see the application running.

 So, for our walkthrough, here’s our Artifactory installation, running via these pods.

 Again, this was deployed via the OperatorHub.

 Once you install it, you’ll get a high-availability install.

 So we’ve got a primary and two secondaries, fronted with NGINX.

 Let’s look at our Artifactory installation running on our OpenShift cluster.

 We have some repos already set up for this example.

 There are some npm repos: some local repos, and some remote repos where Artifactory acts as a proxy to npmjs. There are local and remote Docker registries as well.

 Now for our repo, this is publicly available on GitHub.

 So feel free to hop over and see for yourself.

 Now, let’s take a look at the pipeline YAML.

 So this is the Openshift pipeline definition.

 The way it’s set up is that you have several reusable tasks, which you can put into your pipeline YAML file; these tasks are effectively the different steps we’ll use for the CI/CD pipeline.

 We have a git clone step which clones the aforementioned repo, that’s one task.

 Then, the next task is to configure the JFrog CLI via a Docker image.

 Then we configure NPM.

 Then npm install, which pulls down all the dependencies from Artifactory.

 Then npm publish, which will take the dependencies, package the application, and publish it to an npm repository in Artifactory.

 We’ll publish some build information after the steps have been completed.

 And then we do a build and deploy to deploy the application.

 This essentially uses Buildah to handle the actual Docker image build and push.
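With Buildah, that build-and-push step might look roughly like this; the registry host, repository path, and tag are placeholders:

```shell
# Build an image from the repo's Dockerfile ("bud" = build-using-dockerfile)
buildah bud -t artifactory.example.com/docker-local/my-npm-app:1.0 .

# Push the tagged image to the Docker registry hosted in Artifactory
buildah push artifactory.example.com/docker-local/my-npm-app:1.0
```

Because Buildah doesn’t need a Docker daemon, it fits well inside a containerized pipeline task.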

 And then we have the actual pipeline definition down here, where we reference all of the tasks that we laid out.

 In order to deploy the pipeline, we simply use the OpenShift command line: oc apply -f pipeline.yaml -n, followed by your namespace.

 You can think of oc as the OpenShift equivalent of kubectl.

 Once that deploys, if you click over to the Pipelines area of the OpenShift console, you’ll see the pipeline.

 And if we click tasks, you’ll see the individual tasks that comprise the pipeline.

 You can click into each one to get more information on it.

 If we click into our pipeline, it shows us a nice visual representation of the pipeline, with each of the steps.

 Git clone, then JFrog CLI rt config, npm config, install, publish, then build publish, push the image to Artifactory, and then deploy it.

 In order to execute it, we’ll need to run the command oc apply -f pipeline-run.yaml -n [namespace].

 So let’s look at the pipeline run YAML file here.

 This creates a pipeline run resource.

 Let’s look at what the code looks like.

 As you can see, it’s pretty simple.

 We’re taking the framework of declared steps we put together and basically just assigning some values.
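A minimal PipelineRun along those lines might look like this; the pipeline name and workspace sizing are illustrative, not the demo’s actual values:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: npm-artifactory-run-   # each run gets a unique suffix
spec:
  pipelineRef:
    name: npm-artifactory-pipeline     # hypothetical pipeline name
  workspaces:
    - name: source                     # bind the pipeline's workspace
      volumeClaimTemplate:             # to a fresh per-run volume
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
```

Applying this resource is what actually kicks off an execution of the pipeline.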

 Once we run our apply, it kicks off the pipeline.

 Once the pipeline gets started, we can monitor its progress.

 This includes the ability to see the logs for each of the steps, which definitely helps with debugging.

 Once we publish the application with npm publish, it gets pushed to Artifactory.

 If we click back over to Artifactory, we can inspect the build and see very granular detail about everything that’s in our application.

 Once the rest of the pipeline completes, our application will be available as a deployment in OpenShift, and all we need to do to get it in front of people is expose a route.
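Exposing that route might look something like this with oc; the deployment name, port, and namespace are placeholders:

```shell
# Create a service for the deployment (name and port are illustrative)
oc expose deployment my-npm-app --port=8080 -n my-namespace

# Create a route so the service is reachable from outside the cluster
oc expose service my-npm-app -n my-namespace

# The route's hostname shows where the app can be reached
oc get route my-npm-app -n my-namespace
```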

 So let’s summarize a bit.

 We hope this helped to visualize what it means to build out a DevOps pipeline.

 It’s a different way to think about it: not just culture, but an actual practice that we can implement.

 I hope you can see how using JFrog and OpenShift together helps you move through and automate that process, making it as seamless as possible.

 Thank you very much for taking the time to watch today.

 You can find the link to the repo in the notes.

 And if you’d like a live demonstration of what JFrog and Red Hat can do, get in touch with us and we’ll set something up.

 So on behalf of all of us at Red Hat, good luck. Stay safe and don’t forget, keep innovating.

 Thank you
