DEMO – Actions speak louder than words
In this hands-on session, we will take you on a journey through securing an artifact and the surrounding ecosystem, at rest and in transit, using JFrog and AWS services.
You will learn and see us implement software composition analysis (SCA), software bill of materials (SBOM), AWS PrivateLink, image security, container runtime security, infrastructure security, network security and more on Amazon Elastic Kubernetes Service (EKS).
Adopting Best Practices Across a Hybrid/Multi-Cloud Environment
Video Transcript
Hello. Hello, good morning, good evening, and I hope you all are staying safe. I’m Anuj Sharma and I’m a Senior Container Specialist Solutions Architect here at Amazon Web Services. I’ve been with AWS for over five and a half years, but I have overall industry experience in infrastructure and application development of over 16 years now. Right from the beginning of my career, I have worn multiple hats, ranging from software developer, architect, engineering manager, and product manager, in a variety of industries ranging from financial services to retail. Looking forward to interacting with you all towards the end of the session today.
Today, we will be showing you all three different hands-on demos. The first one will demonstrate how to establish PrivateLink connectivity between your VPC and your JFrog SaaS instance so that your artifacts don’t go over the public internet when pulled from JFrog SaaS. The second one will go over creating detective controls using audit logging on an EKS cluster. And the third one will demonstrate aspects of container runtime security management. We have an interesting lineup to show you all today. So let’s get started right away.
And welcome from our side as well. My name is Sven Ruppert. I’m a developer advocate at JFrog and I specialize in the topic of DevSecOps. My background is nearly 20 years of consulting in the JVM world, so Java-related projects, mostly legacy projects. If you want to know more about my history, check out my LinkedIn profile, but if you want to know what I’m doing right now, I can recommend checking out my YouTube channel. You will see a bunch of outdoor-style IT and security related videos around the JVM and its ecosystem. So feel free to check. And if you want to get in touch with me, choose LinkedIn, Twitter, YouTube, whatever social media, and feel free to contact me. But now we want to start.
Okay. What we are talking about today is the JFrog DevSecOps platform on AWS. And what is the JFrog platform? The JFrog platform is a platform for the whole DevSecOps environment. It means we have a place where we store binaries, Artifactory. We have a place where we scan everything for vulnerabilities, called Xray. We have Pipelines as a CI environment. We have connections to different authentication and authorization services. We have Distribution, which is the way to push binaries down to the edge. So this is, in general, the JFrog platform. And we have a free version of this, the free tier, and in the free tier you will find the two components we are focusing on today: Artifactory and Xray. Artifactory is a binary manager and Xray is the vulnerability scanner. And what exactly is Artifactory?
Okay, Artifactory. Artifactory is the heart of the binary lifecycle, because it’s the place where all binaries are stored, and you have it in different flavors: for your own stuff that you push through, for stuff you collect from outside, for stuff you share between different departments, and so on. So Artifactory is a manager for many dependency managers, and it’s able to understand not only the binary itself, but the whole metadata around it. And we have different kinds of repositories: local repositories, remote and virtual repositories, distributed repositories, and so on. But we will cover this in a few minutes when we talk more about Artifactory itself. In general, keep in mind that Artifactory is a place to store and manage binaries and is aware of all the metadata around a binary.
The other component I want to highlight today is Xray. Xray is a vulnerability scanner, and it’s tightly coupled to Artifactory because if we have this place where all the binaries are and we have all the metadata, then it’s a perfect place to start scanning against vulnerabilities. And this is done by Xray. Xray is part of the whole DevSecOps platform and Xray is available in the free tier as well. So if you want to start and check it out, register for the free tier, and then you can work with Artifactory and Xray, and I will show you all the tiny steps you need in the beginning to make your first scan, how to connect to the IDE, how to integrate with your repositories and so on.
Today, I want to use an open source project from Oracle that is available on GitHub, and this is the project Helidon. Helidon is a tiny, lightweight way to implement REST services, and it contains everything I want to have today as an example to show you all the different angles of development, security, DevSecOps and so on. It means we have a tiny microservice with core Java, and we will provide a tiny REST service. We will wrap it in Docker, we will scan it against vulnerabilities, and many other things, and Helidon is just an open source project we can use for free. So if you want to check out more about Helidon, go to the Helidon project site or to GitHub; you will find everything there.
Now, when we compile for the first time, we are creating binaries. We need a place to store these binaries, and this is inside Artifactory. The place I want to show you is the local repository. A local repository is a place inside Artifactory where you deploy your own stuff. And how to create it: you go to the administrative page, then to the repositories page, select local repositories, and create one. After you select the type you want to store, for example Maven or Docker or whatever, here in this case Maven, you have to give it a name that is unique within your Artifactory instance. You have to specify whether you want to scan everything with Xray, and this we definitely want to do, and you have to select the checkbox so that the content of this repository will be shown inside the web UI, that is, allow content browsing. That’s it. And then just click create.
If you’re coding, you’re not only creating everything by yourself, you’re working with dependencies, because you want to reuse existing stuff, open source libraries and so on. And for this you need remote repositories. Remote repositories are the proxies that connect to the outside world, Docker Hub, or Maven Central, or whatever your remote repository is, with a corresponding cached representation inside Artifactory. So how to create remote repositories? It’s quite easy. You just go again to the administrative page, go to the repositories page, and select create remote repository. Select what kind of remote repository you want to create. You need the URL, and maybe authentication or authorization details like user and password, token, whatever. And then again, don’t forget these two checkboxes: one activates Xray so that all content will be scanned, and the other one makes the content show up inside the UI.
Nobody wants to deal with a bunch of different repositories inside a project. You want to have one repository where you get all your stuff out and push your stuff in. And for this, we have virtual repositories. Virtual repositories can aggregate local, remote, and other virtual repositories. And the neat thing here with Artifactory is that you can declare a default deployment repository. Then, if you push to the virtual repository, the artifact will be redirected to that local repository. So you have to deal with just one repository inside your project.
But how to create it? Go again to the administrative page, go to repositories, select the menu item create virtual repository. Here again, select the type you want to create, give it a name, and then select all repositories that should be part of this virtual repository. The cool thing is that we have role-based access control on top of this. So if different users are looking at the same virtual repository, each user will only see the content from the local and remote repositories they have access to, and this is very convenient.
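The same repositories can also be created programmatically instead of through the web UI. As a minimal sketch, assuming a local repository named `maven-local` and a remote one named `maven-remote` already created the same way, a virtual Maven repository could be described with a configuration payload like this (field names follow Artifactory's repository configuration JSON; repository names are illustrative):

```json
{
  "key": "maven-virtual",
  "rclass": "virtual",
  "packageType": "maven",
  "repositories": ["maven-local", "maven-remote"],
  "defaultDeploymentRepo": "maven-local",
  "description": "Single entry point for the project; deploys land in maven-local"
}
```

Such a payload would typically be sent with a PUT to Artifactory's repositories REST endpoint; the `defaultDeploymentRepo` field is what makes a push to the virtual repository land in the local one, as described above.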
So we created all the repositories, but how do we find, navigate, and maintain these repositories from the user perspective? For this, we have to go to the application page, so not the administrative one, but the regular user interface. There, we can go to Artifactory, and inside Artifactory we have the artifacts page where we can navigate through all the repositories. You will see all declared repositories that have the "show in UI" checkbox activated, and then you can navigate through them. The local repository we created just now is still empty. But if you click on the virtual repository, or on the remote repository, then you will see content.
The moment you declare a remote repository, you will see two different repositories in this UI. One is the remote repository itself; it’s a view on the remote content, not what’s stored locally. And then you have a corresponding repository with the same name plus a "-cache" suffix. Inside this one, you will find all the binaries you collected through this remote repository via your Artifactory instance. The virtual repository will give you the combination of both, so the maximum view: the whole virtual repository content and all included local and remote repositories. Okay.
So far, we know how to navigate through these repositories. We have repositories, we have this Helidon demo project, and now, how do we connect them? For this, we go to the web UI and select the virtual repository we want to use, exactly the one we created just now. There you will see a URL. You copy and paste this URL into your definition of dependencies. In our case, it’s a Maven project, so you use exactly this URL to declare your connection to this repository.
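In a Maven project, that copied URL ends up in the POM's repository declarations. A minimal sketch, where the host name and repository key are placeholders for whatever your instance shows in the "set me up" dialog:

```xml
<!-- pom.xml: route all dependency resolution through the virtual repository.
     Host and repository name below are illustrative placeholders. -->
<repositories>
  <repository>
    <id>artifactory</id>
    <url>https://mycompany.jfrog.io/artifactory/maven-virtual</url>
  </repository>
</repositories>
```

Alternatively, the same URL can be configured once as a `<mirror>` in `~/.m2/settings.xml` so that every project on the machine resolves through Artifactory.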
And then if you start building this with mvn clean verify, you will see that the dependencies are not fetched from some company repository or from Maven Central directly; they are collected via your declared virtual repository. What happens is that the request goes to the virtual repository, which sees that a remote repository is part of it. It goes to the remote repository, for example Maven Central, grabs the binary, stores it locally in Artifactory, starts the scanning and all the stuff we will cover in a few minutes, and delivers it to the environment where you’re building. That could be the CI environment or your desktop. With this, we are now building a project where all binaries go through this single Artifactory instance, and a single Artifactory instance is a perfect place to scan against known vulnerabilities, for example.
Talking about vulnerabilities, we want to find them as early as possible, and one of the earliest possible points in your production line is right inside your IDE. For this we provide the IDE plugin, and with the IDE plugin we can connect to Xray. How to install it depends a little bit on the IDE. I’m showing it with IntelliJ. You go to the plugin management tab, search for JFrog, and install the plugin. After you install the plugin, you have to add your credentials and the URL of your Artifactory and Xray instance to the plugin. This depends a little bit on the IDE you are using, but the mechanism is always the same: install the plugin, connect to your Artifactory and Xray instance, and then you have the information straight inside your IDE.
And now you as a developer want to work with this information, and how is it done? You have your IDE, and inside your IDE you have the connection to Artifactory and Xray. This means that every time you add or change dependencies in your dependency tree, for example in Maven, this information will be used to ask Xray if there is information about vulnerabilities inside this dependency tree, and for this we need the whole dependency tree. You can open the plugin, and you will see the list of main nodes of the top level of your dependency declarations. Then you can navigate through this step by step, down to the last child of every dependency.
What does it mean? You see not only the direct but the indirect dependencies as well, and for every dependency you see whether it is green, yellow or red. If it is not green, it has some vulnerabilities inside. If you click on it, you see the list of known vulnerabilities with CVEs and some summary information about how critical each one is, based on CVSS scores. I will show you later in the web UI that you can see all the detailed information about the CVSS values themselves, but inside the IDE you see the CVSS value, and then you can start excluding and navigating through.
What does that mean? With a right click, you can go through the declaration and see which dependency is bringing this vulnerability in. And you can use the right click to say exclude. Exclusion means that you exclude this dependency, and then you have to re-declare the dependency, whether it is a direct or an indirect one, with a version number of your choice with fewer vulnerabilities. This can be a higher version or a lower version; it really depends on the situation. But this is the functionality of the IDE plugin, and it gives you, straight inside the IDE, the possibility to react immediately when you add the first line of a dependency.
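In Maven terms, the exclude-and-redeclare step looks like this. A sketch only: the Helidon coordinates, the version numbers, and the `com.example:vulnerable-lib` artifact are all made-up placeholders standing in for whatever your dependency tree actually shows:

```xml
<!-- Exclude a vulnerable transitive dependency, then pin a fixed version.
     All coordinates and versions here are illustrative placeholders. -->
<dependency>
  <groupId>io.helidon.webserver</groupId>
  <artifactId>helidon-webserver</artifactId>
  <version>2.0.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.example</groupId>
      <artifactId>vulnerable-lib</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <!-- re-declare the excluded dependency with a version of your choice -->
  <groupId>com.example</groupId>
  <artifactId>vulnerable-lib</artifactId>
  <version>1.2.3</version>
</dependency>
```

This is exactly what the IDE plugin's right-click exclusion generates for you, minus the manual typing.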
Okay, we have created this JAR file, and now we wrap this in Docker, and the Helidon hello world project gives you the Dockerfile so you don’t have to create it by yourself. You will see that it is based on different stages: you build first inside the Docker environment, with all the dependencies and your stuff, and then the result is shared and provided inside the production Docker image that will be used later in the environment where you want to run this service. This means we have different stages and different dependencies. And with these Docker layers, we are adding a bunch of other layers, like the operating system, and inside these we have vulnerabilities as well.
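A multi-stage Dockerfile of the kind the Helidon quickstart ships looks roughly like this; treat it as a sketch, since the base image tags and the JAR name are assumptions rather than copied from the project:

```dockerfile
# Stage 1: build the JAR with Maven inside the Docker environment.
FROM maven:3.8-openjdk-11 AS build
WORKDIR /helidon
COPY pom.xml .
# pre-fetch dependencies so this layer is cached between builds
RUN mvn dependency:go-offline
COPY src src
RUN mvn package -DskipTests

# Stage 2: the production image keeps only the runtime artifacts,
# so build-time dependencies never reach production.
FROM openjdk:11-jre-slim
WORKDIR /helidon
COPY --from=build /helidon/target/helidon-quickstart-se.jar ./
COPY --from=build /helidon/target/libs ./libs
CMD ["java", "-jar", "helidon-quickstart-se.jar"]
EXPOSE 8080
```

Note that even the slim runtime stage still pulls in operating system layers, which is exactly why the image as a whole needs to be scanned, not just the JAR.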
Okay. Before we start building the Docker image, we do exactly what we did with Maven. There, we created a bunch of repositories, and now we are doing exactly the same for Docker. For this, we create a local Docker repository where we want to push our images. We create a remote Docker repository that connects to Docker Hub. And on top, we create a virtual repository aggregating the local Docker repository and the remote Docker repository, and this is what we provide later inside our project.
So how to create it? Go to the administrative page, go to repositories, select create local repository, select Docker, give it a name, check Xray on, and go. And the same with the remote repository: go to the administrative page, say create remote repository, select Docker again, give the URL of the remote repository, and maybe authorization credentials. And the good thing is, if you do this via Artifactory, you don’t have the pull rate limits that you have if you do it anonymously or with other tools, because we have a special agreement with Docker. And when you have done this one as well, create a virtual repository: again, create virtual repository, select Docker, give it a name, combine all the repositories, and so on. So you see, creating repositories is very, very easy in Artifactory.
We’ve created this single virtual Docker repository, including the local and the remote ones, and now we want to connect. For this we go in the regular user interface to the Artifactory page, then to the artifacts page, and select the Docker repository. You will see on the right side of the screen that we have this URL provided. Additionally, we have a login mechanism, and for this we can go to the "set me up" instructions, where we see whether we have to log in and how we can connect to this repository. Normally this means we go to the terminal and log in to this repository, and that’s it. If we then want to use it for our Docker images, we just have to adjust the FROM statement, and then we can start building Docker images via Artifactory, collecting all the content from Docker Hub and other resources through this single dedicated point, where later we can start scanning with Xray again, not only Maven but Docker as well. And this is always the same for all dependency managers.
Okay. Building this Docker image takes some time, so I snipped that part out, and now we are ready with the regular image. And how to start it? You will find instructions in this hello world example of Helidon. How to ramp it up is a docker run command, and then you can connect with your REST API. And yeah, that’s more or less everything. But now we have created a Docker image that grabbed all its dependencies through our Artifactory instance, and it’s time to prepare our deployment. Preparation for the deployment means that we have to push our Docker image to this Artifactory instance first, so that we can share it with other instances or mechanisms or production environments, whatever. This means we have to tag the Docker image and push the Docker image, and these are just the regular commands as always. We push the Docker image into the virtual Docker repository, and for the virtual repository we declared a default deployment repository before. So in the end, the Docker image is sent to the virtual repository and deployed into the local Docker repository.
Now we want to check everything, so the Maven stuff and the Docker stuff, against known vulnerabilities, and this is done with Xray. For this, we have to create policies with rules. Go to the administrative page, go down to the Xray menu section, and there we can create policies. A policy is a logical holder for different rules; you give it a logical name and then decide what you want to scan for. The dedicated action is inside a rule. A rule is a stateless, atomic action based on CVSS values.
You can declare dedicated CVSS value ranges, or use predefined ranges, and then you have to decide what you want to do: sending a mail, breaking a build, triggering a webhook and so on. So you have different interaction possibilities, and for every single action you create one rule. A bunch of rules is aggregated in a composition called a policy, and the policy has a logical name. Here we can decide whether we want to scan against security vulnerabilities or license issues, and it’s more or less the same behavior: you declare what should happen if something is found. And it has nothing to do with whether you’re scanning a Docker image, a Maven artifact or whatever; a policy is technology-neutral.
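As a sketch of what such a policy looks like as data, here is a security policy with a single rule that fails the build above a severity threshold. The field names follow Xray's v2 policies REST API as far as we know, but take the exact shape, the policy and rule names, and the chosen actions as illustrative assumptions:

```json
{
  "name": "sec-policy-high",
  "type": "security",
  "description": "Break the build on high-severity findings",
  "rules": [
    {
      "name": "block-high-severity",
      "priority": 1,
      "criteria": { "min_severity": "High" },
      "actions": {
        "fail_build": true,
        "notify_watch_recipients": true
      }
    }
  ]
}
```

Each additional rule (say, one that only sends a mail for medium severity) is another entry in the `rules` array, which is exactly the "bunch of rules aggregated in a policy" idea described above.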
Now we want to connect the actions we declared with the technology, or the repositories, we want to scan, and this is done via watches. Watches combine the actions we declared within policies with the amount of data we want to scan, and that is a repository. To create a watch, again go to the administrative page, go down to Xray, select create watch, give it a name, and then more or less select the combination of policies you want to activate and the repositories you want to scan. The watch is re-evaluated whenever we have an update in our security database, an update in our repositories, or when you trigger it manually. And then you can see what the watch presents you in terms of information about vulnerabilities across the whole stack.
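In data form, a watch is essentially a binding between resources and policies. A sketch following what we believe is Xray's v2 watches REST API shape; the watch name, repository name, and policy name are all illustrative placeholders:

```json
{
  "general_data": {
    "name": "docker-repo-watch",
    "description": "Scan the local Docker repository with the security policy",
    "active": true
  },
  "project_resources": {
    "resources": [
      { "type": "repository", "bin_mgr_id": "default", "name": "docker-local" }
    ]
  },
  "assigned_policies": [
    { "name": "sec-policy-high", "type": "security" }
  ]
}
```

Adding more repositories or more policies is just a matter of appending entries to the respective arrays.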
So we have created the watches, and we are now able to scan our created Docker image. Why do I want to scan the Docker image? Because it will show you that inside a watch, you scan the whole dependency tree across technology borders. Inside the Docker image we have JARs, we have operating system parts, and so on. What we need is the full impact path, so it makes sense to understand all vulnerabilities of all technology layers of this piece of binary that you’re pushing to production. For this, you need to understand the whole binary structure as well as the whole metadata structure. And this is why the Artifactory and Xray combination is so powerful: because we have access to the binaries, and we have access to the whole metadata, the whole dependency information, as well. This means we are not only scanning the binary itself, but analyzing its static context as well. This is done inside the watches.
How to look at watches, or how to see all this information? Go back to the user interface and go to the security-related page, the Xray part, and there you have these entries, and one entry is watches. There you will see all declared watches. If you click on a watch, you get the whole list of vulnerabilities with the CVSS value, with the summary, with the CVEs provided, and so on. You can click on the details and you will see the detail page. Here you will see additional information: references where you can read more, the CVE if one is provided, the CVSS value, who found it, what technology is involved, which repository contains this binary, and in which layer of the image you will find it. And if something is found, for example, inside a JAR, and this JAR is used in a web archive, and this web archive is used in a Docker layer, and so on, you will see exactly where it’s placed and in how many layers it’s wrapped. So these are watches.
Sometimes you just want to check what’s inside a repository, whether you have vulnerabilities and so on, and you want to export this. So you need an aggregated view, and this is called vulnerability reports. You will find it in the main user interface under the security section, in the menu item vulnerability reports. There you get a list of all existing reports, as they are stored there. And then you have this button where you can say, I want to create one now. For this vulnerability report, you have to give it a logical name. You select whether you want to scan against vulnerabilities or compliance issues; here we are selecting vulnerabilities. Then you choose what you want to scan, for example repositories or build information and so on. Here we are just scanning repositories, so we select the repository itself that I want to scan, and then it will take some time until it’s scanned.
And then you get a point-in-time view: what is part of the repository and what is the number of vulnerabilities in this repository. You can export this list as PDF or CSV or whatever, and you can analyze the information in the table you get. If you click on a dedicated icon, you see detailed information again, like the CVE and so on. If you click on the CVSS number, you get an overlay window, and there you will see all the detailed metrics that lead to this final CVSS number.
If you’re not so familiar with this, check out my YouTube channel; I have a dedicated video about CVSS itself, with the different metrics and what they mean. Then you can try to analyze whether a CVSS value affects your system, or how important it will be, and you can start working with environmental metrics on your side, but this is not part of this talk. So here we created this vulnerability report, and this is a single view of the state right now.
So I showed you the whole way, from creating the first binary up to creating a Docker image, how to sort things into repositories, and how to scan against known vulnerabilities inside the web UI as well as inside the IDE. This was a huge amount of information in a very, very short time. So if you want to check out how to do this practically by yourself, check out my workshops and webinars; you can register, for example, for the free workshop, and then we do it with the free tier, step by step, so that you can follow along and take your first practical steps. But now it’s time to do something with this Docker image. For this, I will hand over to Anuj, because he will take this Docker image and show you what you can do inside the AWS environment to use it, and what kind of protection he will add on top of all of this.
Thanks Sven for the wonderful walkthrough. All right, let’s start with AWS PrivateLink. Let’s quickly go over some of the business challenges that customers are seeing and the use cases where it is appropriate to implement PrivateLink technology. I promise to keep this theoretical portion short so that we can jump onto the demo right away. Security is top of mind for CIOs and CISOs, and they’re driving Zero Trust initiatives across their entire organizations to reduce business risk and insider threats.
Zero Trust initiatives have three key pillars: users, workload, and network. We are going to focus on the workload and network aspects in this session. Customers are looking to restrict workload access rather than enabling broader access to the entire virtual private cloud or subnet. They need private connectivity between VPCs. They don’t trust third-party application traffic and would like to have better visibility. Secondly, customers in the financial services, healthcare and government sectors need to adhere to regulatory measures and achieve compliance; preventing sensitive data, such as customer records, from traversing the internet helps you maintain compliance with regulations such as HIPAA, EU-US Privacy Shield and PCI. Thirdly, customers don’t want to run their cloud traffic over the public internet. They don’t want their network resources and services to be reachable over the internet, as it is one of the biggest attack vectors. And last but not least, the solution needs to be cost effective.
Now let’s talk about the AWS PrivateLink use cases that help us address these business challenges. There are two key use cases. Firstly, secure, restricted access to AWS services which are outside of a private network; these could be AWS services such as databases, SQS and Lambda. Secondly, secure, restricted access to customers’ own homegrown internal applications or third-party SaaS applications, such as the JFrog SaaS solution, which are outside of your private network. Today’s demo will focus on how to secure network access to JFrog SaaS using AWS PrivateLink, so that artifacts are pushed and pulled between your VPC and JFrog SaaS over the Amazon backbone network without traversing the public internet.
All right, first up is to procure a JFrog SaaS instance. You can sign up for a free trial of JFrog SaaS from AWS Marketplace. A point to note here is that AWS PrivateLink support is only available in the JFrog Enterprise tier with the security pack. For today’s demo, I have signed up via AWS Marketplace to procure JFrog SaaS in the Enterprise tier, which is valid for 30 days. Once you accept the contract, follow the instructions on the screen to complete the account setup. In the end, once your JFrog SaaS instance has been provisioned, you will get an email with the public URL for the SaaS instance, which you can use to log in.
You would then need to create an endpoint and link the JFrog-provided PrivateLink service with the VPC in your account. To do so, navigate to Endpoints under VPC and select "Find service by name". You will need to enter the JFrog service name value; these values are AWS Region specific, and you can look up the name of the service in the JFrog online documentation. You will need to select the VPC and the security group in the AWS account with which you want to establish PrivateLink connectivity with JFrog SaaS. If you have more than one VPC to connect, you would need to repeat this process. Note the VPC endpoint ID thus created. Your VPC endpoint will be in pending status until you complete the next steps.
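The same interface endpoint can be expressed as infrastructure as code. A CloudFormation sketch, where the service name, VPC, subnet, and security group IDs are all illustrative placeholders (the real service name is Region-specific and comes from the JFrog documentation):

```yaml
# CloudFormation sketch of the interface endpoint created in the console above.
Resources:
  JFrogEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      # Region-specific JFrog service name; placeholder value shown here
      ServiceName: com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0
      VpcId: vpc-0abc1234
      SubnetIds:
        - subnet-0abc1234
      SecurityGroupIds:
        - sg-0abc1234
      PrivateDnsEnabled: false
```

Repeating this resource per VPC covers the "more than one VPC to connect" case mentioned above.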
As a next step, log in to your MyJFrog account. Remember, MyJFrog login credentials are not the same as those you use to log in to your JFrog SaaS instance, and MyJFrog is meant only for administrators. Once you log in at MyJFrog, locate PrivateLinks under the actions dropdown to create a new PrivateLink. Enter the VPC endpoint ID that you received in the previous step and link this ID with your SaaS instance. Within a few minutes, you should see an email confirmation when PrivateLink connectivity is established.
At this point, if you come back to Endpoints in the AWS console, you’ll see the status of the AWS PrivateLink endpoint changing to available, and then you can copy and paste the DNS name. You can set up a CNAME pointing to the DNS name vended by AWS PrivateLink for convenience. Remember, this DNS name will be resolved only from within your VPC, bounded by the security groups. To complete the setup, I need to set up private DNS. For that I go to Route 53 and create a private hosted zone named pe.jfrog.io and associate it with the VPC which I created for the PrivateLink connectivity with the JFrog SaaS instance. A point to note here is that this private hosted zone name has to be pe.jfrog.io at present. Then create a CNAME pointing to the VPC endpoint DNS name.
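The private DNS setup can also be sketched in CloudFormation. Here the record name, the VPC ID, the Region, and the endpoint DNS target are illustrative assumptions; only the zone name follows the constraint mentioned above:

```yaml
# Private hosted zone plus the CNAME to the endpoint's DNS name.
Resources:
  JFrogPrivateZone:
    Type: AWS::Route53::HostedZone
    Properties:
      Name: pe.jfrog.io
      # associating a VPC makes this a private hosted zone
      VPCs:
        - VPCId: vpc-0abc1234
          VPCRegion: us-east-1
  JFrogCname:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref JFrogPrivateZone
      # record name matching your SaaS instance; placeholder shown
      Name: mycompany.pe.jfrog.io.
      Type: CNAME
      TTL: "300"
      ResourceRecords:
        # DNS name vended by the VPC endpoint; placeholder shown
        - vpce-0123-abcd.vpce-svc-0123456789abcdef0.us-east-1.vpce.amazonaws.com
```

Because the zone is private, this name only resolves from inside the associated VPC, matching the behavior described above.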
Now, let’s run a sample container in Amazon EKS from our JFrog SaaS Artifactory over PrivateLink. To do so, I created an EKS cluster, using eksctl, in the same VPC in which I created the PrivateLink endpoint. I also made sure that the node groups have the security groups that I used to create the PrivateLink with JFrog SaaS. To confirm that my PrivateLink connectivity is working fine, I just do a ping to the DNS name I created. I use AWS SSM to secure the SSH to the instance, and then do a ping to confirm the connectivity. Once you have confirmed the connectivity is fine, you can reach out to JFrog support to block traffic to the public internet endpoint so that all the traffic reliably stays within the Amazon network over PrivateLink.
As a next step, I want to deploy a pod running in the EKS cluster. I pushed a sample project to my JFrog SaaS instance so that I can pull it over PrivateLink. I modify the image section in my deployment manifest file and point it to the PrivateLink DNS. Once done, deploy using kubectl commands and inspect the results. You can see that I changed the endpoint to point to my PrivateLink endpoint here. Then you use kubectl commands to deploy the sample app to your Kubernetes cluster. If all worked well, you will see the pods in the running state, with the image getting pulled over the PrivateLink DNS that we set up.
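The relevant part of such a manifest is the image reference pointing at the PrivateLink DNS name. A sketch, where the registry host, repository path, tag, and pull-secret name are all illustrative placeholders:

```yaml
# Deployment sketch pulling the image over the PrivateLink DNS name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helidon-quickstart
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helidon-quickstart
  template:
    metadata:
      labels:
        app: helidon-quickstart
    spec:
      containers:
        - name: helidon-quickstart
          # registry host resolves privately inside the VPC only
          image: mycompany.pe.jfrog.io/docker-virtual/helidon-quickstart-se:1.0
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: jfrog-registry-cred
```

Applying this with kubectl and then describing the pod should show the image being pulled from the private registry host rather than a public endpoint.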
All right, that wraps up the demo around PrivateLink. In the next demo, we will cover the detective controls of auditing and logging in an EKS cluster, and then see how we can go about querying those audit logs using CloudWatch Logs Insights.
All right, moving on to the second section, which is detective controls using Kubernetes audit logging. Collecting and analyzing audit logs is useful for a variety of reasons. Logs can help with root cause analysis and attribution, that is, ascribing a change to a particular user. Once enough logs have been collected, they can be used to detect anomalous behavior too. On EKS, the audit logs are sent to Amazon CloudWatch Logs.
In order to enable audit logging on an EKS cluster, you navigate to the EKS cluster, go to the Logging section, choose Manage logging, enable Audit logging, and save the changes. A point to note: when you enable control plane logging, you will incur additional costs for ingesting and storing the logs in CloudWatch Logs. When the audit logging update is successful, you can go to Amazon CloudWatch Logs and look for the log group specific to the cluster name. If you examine a specific audit log entry, you’ll see that it has two main annotations. The first is the decision, authorization.k8s.io/decision, and the second is authorization.k8s.io/reason. These two explain whether the action that you were trying to take was allowed or disallowed, and give a specific reason where applicable.
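The same change can be made from the command line. This is a sketch; the cluster name and region are placeholders.

```shell
# Enable the "audit" control plane log type on an existing EKS cluster.
# Cluster name and region are placeholders.
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'
```

Once the update completes, the audit events appear in a CloudWatch log group named after the cluster, under kube-apiserver-audit log streams.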
If you have to query these logs, you can use CloudWatch Logs Insights and write a query something like this. Say you want to find the deployments created in the last 10 minutes or so: you can filter the logs on an objectRef resource of deployments and a verb of create, and that will give you the same output just by filtering the logs correctly. Similarly, if you have to identify deployments which were deleted — as an example, I deleted the deployment we created in the previous sample — you’ll see logs something like this, along with the reason. As I said, there are two main annotations, the authorization decision (allow) and the reason, and each entry carries a verb such as delete, create, get, or list.
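A Logs Insights query along these lines would surface recently created deployments. This is a sketch: the field names follow the Kubernetes audit event schema, and the log stream filter assumes the default EKS kube-apiserver-audit stream naming.

```
fields @timestamp, user.username, objectRef.namespace, objectRef.name
| filter @logStream like /kube-apiserver-audit/
| filter objectRef.resource = "deployments" and verb = "create"
| sort @timestamp desc
| limit 20
```

Swapping the verb to "delete" gives the corresponding query for deleted deployments.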
Some of the common use cases for analyzing logs with CloudWatch Logs Insights include listing updates to the aws-auth ConfigMap. You can use a query like this, which will give you all the updates to the aws-auth ConfigMap, if there are any; that is very useful for identifying any updates or patches to the ConfigMap for security purposes. You can also use a query like this, which will list create, update, or delete operations against a specific role or role binding, and similarly for cluster roles or cluster role bindings.
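A sketch of the aws-auth query, under the same schema assumptions as above:

```
fields @timestamp, user.username, verb
| filter @logStream like /kube-apiserver-audit/
| filter objectRef.resource = "configmaps"
    and objectRef.name = "aws-auth"
    and verb in ["update", "patch"]
| sort @timestamp desc
```

Changing objectRef.resource to "roles", "rolebindings", "clusterroles", or "clusterrolebindings" (and the verb list to ["create", "update", "delete"]) covers the RBAC cases mentioned above.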
If you have to look for unauthorized read access to a specific secret, or secrets in general — I don’t have an example here, but this is the query you can use — it looks for the get, list, and watch verbs, specifically where the response code was Unauthorized, and that will give you the results. Similarly, if you have to get a list of failed anonymous access attempts, you can run a query like this, which filters the logs for the username system:anonymous and a response code of either 401 or 403.
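A sketch of the failed-anonymous-access query, again assuming the standard audit event field names:

```
fields @timestamp, user.username, verb, requestURI, responseStatus.code
| filter @logStream like /kube-apiserver-audit/
| filter user.username = "system:anonymous"
    and responseStatus.code in [401, 403]
| sort @timestamp desc
```

For the secrets case, replace the username filter with objectRef.resource = "secrets" and verb in ["get", "list", "watch"], keeping the response-code condition.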
All right, coming to the final demo for today, which is container image and runtime security management. These are some of the things to think about. Not all attack vectors have mitigations in Kubernetes. Be sure the Docker API service isn’t publicly exposed; better yet, disable its network listener altogether. The image itself can be an attack vector: an image could contain malware. The CI/CD process could also be an attack vector: an attacker might be able to extract credentials from it and push a non-validated image that has bypassed scanning.
You can implement AWS PrivateLink, bounded by security groups, which will restrict pushing and pulling images outside of your organization. Secondly, Open Policy Agent (OPA) is an open-source, general-purpose policy engine that unifies policy enforcement across the stack. OPA provides a high-level declarative language that lets you specify policy as code, and simple APIs to offload policy decision making from your software. In today’s demo, we will take a look at how to implement OPA on an Amazon EKS cluster, and then walk through a scenario that restricts container images to an approved container registry hosted on the JFrog SaaS instance — for which we have already established PrivateLink connectivity — all using an OPA policy. With this introduction, let’s get onto the demo to deploy Open Policy Agent to the EKS cluster.
All right. The first step is to deploy OPA Gatekeeper. OPA Gatekeeper is a specialized project providing first-class integration between OPA and Kubernetes, and it is the go-to project for using OPA for Kubernetes admission control. Plain OPA and kube-mgmt are alternatives you can reach for if you want to use the management features of OPA such as status [inaudible 00:41:06]. We can deploy OPA Gatekeeper and wait for the deployment to complete.
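One common way to install Gatekeeper is to apply its published release manifest. This is a sketch, not the exact commands from the demo, and the version tag is an assumption — pin to a release you have vetted.

```shell
# Deploy OPA Gatekeeper from its published release manifest
# (the v3.14.0 tag is illustrative).
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.14.0/deploy/gatekeeper.yaml

# Wait until the admission controller is rolled out before creating policies.
kubectl -n gatekeeper-system rollout status deployment/gatekeeper-controller-manager
```

Gatekeeper registers itself as a validating admission webhook, which is what lets it reject pods before they are ever scheduled.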
All right. With the deployment complete, the next step is to deploy the constraint template, which has the Rego code for enforcing which repository URLs are allowed to be used from this cluster. So it has the ConstraintTemplate, and it has the necessary Rego code, including a template for the violation message that can be shown if there are any violations. We run kubectl apply -f on the constraint template and wait for the template to deploy — and it’s successfully created. Finally, let’s deploy the constraint, which has the repository URL, and test the policy enforcement. In this one, I’m allowlisting image pulls from the PrivateLink endpoint. Hence, if my Kubernetes manifest file specifies the image URL as this one, the pod should start; otherwise, I should get the message defined in the constraint template. So let’s deploy the constraint using kubectl apply -f constraint.yaml, and it should complete right away.
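The template and constraint pair looks roughly like the following sketch, modeled on the Gatekeeper policy library’s allowed-repos example. The template name, message wording, and the repository prefix (standing in for the PrivateLink DNS name) are illustrative, not the demo’s actual values.

```yaml
# ConstraintTemplate: Rego that rejects containers whose image does not
# start with one of the allowlisted repository prefixes.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
      validation:
        openAPIV3Schema:
          type: object
          properties:
            repos:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedrepos

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          satisfied := [ok | repo := input.parameters.repos[_]
                             ok := startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("container <%v> uses image <%v>; allowed repos are %v",
                         [container.name, container.image, input.parameters.repos])
        }
---
# Constraint: binds the template to Pods and supplies the allowlist,
# here the (placeholder) PrivateLink registry prefix.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: repo-is-jfrog-privatelink
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "myinstance.jfrog.io/"
```

Any pod whose container image does not start with the allowlisted prefix is rejected at admission time with the sprintf message above.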
All right, so now the next step is to test it out. I will first deploy the same app.yaml, which uses the allowlisted JFrog Artifactory private endpoint URL, with kubectl apply -f app.yaml. That should apply the entire application, and you can see that it’s created. I can check the status by running kubectl get all, just to see if everything started fine. Yes — the pods are running absolutely fine, which means the cluster is able to pull images properly.
The next step is to use an image that does not come from the allowlisted URL. For that, I’m going to use the BusyBox manifest published on GitHub as a sample, which uses the BusyBox container image from the public Docker registry. If I apply that, I get a message saying, in effect, “You are only allowed image repositories coming from this specific URL, and this violates that policy,” which is why it did not even let the pod start.
All right, with this we are at the end of the demos I had planned for today. Thank you all so much for listening in, and I hope you all find these demos useful. If there are any questions, please feel free to reach me on Twitter at @anujsharma5. Thanks again, and enjoy the awesome content lineup for today. Thank you again.