Take a chill pill with CloudFormation Registry and Kubernetes

Brian Terry
Sr. Developer Advocate, AWS

AWS CloudFormation is the most popular infrastructure as code service. It is the de facto standard for provisioning infrastructure on AWS. And yet customers are writing custom code to provision third-party software such as JFrog on AWS.

The AWS CloudFormation Public Registry enables developers to automate the provisioning of third-party software alongside native AWS services using CloudFormation templates.

In this session you will learn how to automate the deployment of an EKS cluster with the CDK and CloudFormation templates, using JFrog modules published in the CloudFormation Public Registry.

 

Adopting Best Practices Across a Hybrid/Multi-Cloud Environment

JFrog Artifactory for EKS Anywhere

 

Video Transcript

Hello and welcome. My name is Brian Terry and I’m a Senior Developer Advocate on the CloudFormation team. And I’m excited today to talk to you about Kubernetes and the CloudFormation registry. But first, let’s start with the agenda. The first thing I’m going to do is tell you a little bit about myself and why I’m qualified to give you this talk. Next, we’ll look at some of the challenges when creating and managing infrastructure as code. Then we’ll look at some options to make your life just a little bit easier when creating this complicated infrastructure. And finally, we’ll level up our reuse patterns and I’ll show you a very cool tool made by those folks over at JFrog.

All right, we’ve come to my least favorite part of our presentation and that’s actually talking about myself. So I’ve created a slide where you can see a little bit about the things I’ve done over the last 20 years. But recently I’ve been working with AWS. Specifically, I’ve been a developer advocate for a little over a year, but my total time at AWS has been about three and a half years. My favorite part about it is actually being able to travel the world and talk about infrastructure as code and really meet some nice people. So if you find yourself attending one of my talks live, please come up and talk to me, because I love to talk to people and I would love to meet anyone that’s attending my sessions.

All right, with that said let’s talk about the coolest AWS service in the world, well at least I think it’s the coolest service in the world, and that’s called AWS CloudFormation. So for those of you that have never used CloudFormation, let’s take a 10,000 foot view of what this service can do. Well basically you code your infrastructure in JSON or YAML, you upload it to an S3 bucket, and then you create what’s called a stack. And that stack represents resources that you’re going to manage in your AWS deployment. And then once you create a stack, you’re able to manage that stack as one complete unit. Although CloudFormation is cool out of the box and very powerful, there are some challenges with building very complex infrastructure using CloudFormation. Well basically some of these challenges are that builders spend a lot of time trying to figure out what the best practices are for creating these resources.
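Before we dig into those challenges, here is a minimal, hypothetical sketch of that code-it, upload-it, create-a-stack workflow for readers following along at home. The bucket and stack names are placeholders.

```bash
# Write a tiny one-resource CloudFormation template in YAML.
cat > bucket.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
EOF

# Upload the template to an S3 bucket you own (placeholder bucket name).
aws s3 cp bucket.yaml s3://my-template-bucket/bucket.yaml

# Create a stack from the uploaded template; CloudFormation now manages
# everything in the template as one unit.
aws cloudformation create-stack \
  --stack-name demo-bucket-stack \
  --template-url https://my-template-bucket.s3.amazonaws.com/bucket.yaml
```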

Like what are the best practices when defining services? How should you arrange your CloudFormation templates? All these are things that builders think about constantly when creating their templates. And then second, applications often share common architecture elements. For example, if you create a Lambda function that can read and write to a DynamoDB table, you normally reuse that within other CloudFormation templates over and over and over again. And then next, modeling infrastructure becomes repetitive. You’re always reinventing the wheel, defining and configuring these services, integrating with other services, adhering to best practices. You’re basically doing this over and over again when you’re creating your infrastructure. And then finally, it takes valuable time to build unique features into these templates. So for example, say you created your CloudFormation template with your Lambda function that writes to a DynamoDB table. And what if you had to adjust it just a little bit for a specific project? Well, it’ll take time to figure out how to create that unique template and to deploy it within your infrastructure.

So luckily those smart people on the CloudFormation team have looked at these challenges and come up with a feature to help you mitigate a lot of them, to help you quickly deploy very complex infrastructure in your AWS account and share it across your organization. And that new feature is called AWS CloudFormation modules. So let’s talk about what AWS CloudFormation modules are. What are CloudFormation modules and why should I use them? Well, modules are very cool. Specifically, they have three major pluses: they allow you to encapsulate common solutions, they are optimized for best practices, and you can reuse them across your organization. So for example, say you created an architecture that had a secure S3 bucket. Specifically, you had this CloudFormation snippet. Suppose that you spent an hour or a day defining this wonderful infrastructure but you wanted to reuse it within your organization.

Well, you would create a CloudFormation module. So essentially that module will look like this. As you can see in this CloudFormation template, we’re using a special kind of resource called AWS::S3::BucketPrivateAccess::MODULE, and that ending part has a special name called MODULE. So what this does is encapsulate the full best practice of this CloudFormation template and consolidate it into what almost looks like one small little resource called a module. So if you look at a module, basically they’re used in the resource section of a CloudFormation template, and they’re differentiated by having that fourth section that ends with MODULE. And templates that use modules can be YAML or they can be JSON. And underneath a module we have something called our resource schema. It’s not really important to know how to define a schema because luckily, when you create a module, this schema is auto-generated for you. So if you look at the left side, you can think of it as taking this CloudFormation template and creating a JSON schema that CloudFormation can use to define your module.
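For readers following along, here is a rough sketch of what a template that consumes such a module can look like. The module type name and the property shown are illustrative assumptions rather than the exact ones from the slide; a module’s actual properties come from its auto-generated schema.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SecurePrivateBucket:
    # Modules are used like any other resource; the fourth segment of the
    # type name is always MODULE. This type name is illustrative.
    Type: AWS::S3::BucketPrivateAccess::MODULE
    Properties:
      # Properties are defined by the module's auto-generated schema.
      # BucketName here is an assumed example parameter.
      BucketName: my-secure-demo-bucket
```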

So let’s take a look at the module platform. So when you create a module, you’re able to share this using a central tool called the CloudFormation registry. So let’s quickly take a look at that. So if we look on the left side, you’ll see that you can have publishers such as AWS services, internal service teams, the community at large create these modules or these best practices. And then once they create those, they actually submit those to the registry. And then on the other side of that, you have your consumers. These consumers can be developers in your organization or they can be anyone in the world that actually wants to use public modules. They can put these in templates, they can author those in the CDK. So they can use this in their development patterns to reuse this architecture over and over again.

So you may be asking yourself, who benefits from this? Does it benefit the publishers? Does it benefit the developers? Who does creating modules benefit? Well, it benefits everyone. So let’s quickly take a look at how. If we look at the left side, on the publisher side, they’re able to encapsulate and share these best practices across their organization. They’re able to distribute updates using the native version management built into modules, how cool is that? And on the other side of that, consumers can effortlessly adhere to best practices and guidelines. It abstracts away resource intricacies and allows for easier authoring of CloudFormation templates. Then they can leverage this reusable template code and reduce development time. And finally, they can build their infrastructure and applications from modular components. So the registry has two different ways to access it: you have a private registry and you have a public registry.

So a private registry contains modules, or reusable architecture, that live in your AWS account and stay in your AWS account. And anyone within the organization can use those in CloudFormation templates. But on the other side of this, we have the public registry, in which third-party publishers can create these best-practice architectures and they can be reused in your organization. So for example, AWS can create a module for EKS and you can actually use that in your CloudFormation templates over and over again. And luckily our friends at JFrog have created a module and published it to the public registry. We’ll talk a little bit about that as we go through this presentation and I’ll actually show you a cool demo of their new module.

Now within the CloudFormation console, you can actually search for modules, and you can see a number of different modules from a lot of third-party contributors. One of those happens to be JFrog. So let’s talk a little bit about JFrog’s new module. This new module is very cool. It allows you to deploy Artifactory as a fault-tolerant, reliable, highly scalable solution in less than 30 minutes. And this module contains all the resources you need to do this deployment and you can use it very simply within your CloudFormation templates. So if you want to follow along, visit this URL.

Now, let’s jump over to the demo. All right let’s jump into my favorite part of the presentation and that’s the demonstration. So for this demonstration, I’ll actually be showing you how to use the JFrog module to deploy Artifactory in an AWS account. Then we’ll create an EKS cluster and actually start playing around with it. So as a prerequisite, what you need is an Artifactory license. If you currently don’t have a license to Artifactory, that’s okay because you can just visit jfrog.com/start-free and obtain a 30-day free trial license. This process is quick and easy. All you need to do is just fill out a couple pieces of information and JFrog will send you a license. So the next thing you need is the CentOS 7 AMI.

And you can subscribe to that by going to aws.amazon.com/marketplace, searching for CentOS 7, and just hitting the subscribe button. So for this demonstration, and to use this module, you actually need three license keys: one for each replica and one for the master, because we’re going to be deploying a highly available version of Artifactory into our AWS account. The first thing we need to do is store these keys in a secure place. And for this we’re going to use AWS Secrets Manager. In Secrets Manager we’ll need to create a secret for each license.
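If you prefer the CLI to the console walkthrough that follows, storing the licenses in Secrets Manager can look roughly like this. The key names and values are placeholders, and the secret name matches the one used later in the demo.

```bash
# Store the three Artifactory license keys in AWS Secrets Manager as one
# key/value secret. Key names and license values are placeholders.
aws secretsmanager create-secret \
  --name Artifactory-licenses \
  --secret-string '{"License1":"<license-1>","License2":"<license-2>","License3":"<license-3>"}'
```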


All right. And we’ll give the secret a name that we can remember. So let’s do Artifactory-licenses and we’ll keep the defaults the same. And we’ll just hit next to create the secret. Now for this demonstration, what we’re going to do is generate a self-signed certificate, but I highly recommend you don’t do this in production and instead use a certificate from a trusted certificate authority. Now we’ve generated our self-signed certificate. So if we cat that certificate, you’ll see it’s been created. What we’ll do is copy this to our clipboard and then head over to AWS Secrets Manager. So we’re creating a new secret containing our certificate information. Okay. So this will be our first certificate. And we actually need to create two secrets, one for our private key and one for our public key.
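For reference, generating a self-signed certificate like the one used here typically looks something like the following. The hostname and file names are placeholders, and again, don’t use a self-signed certificate in production.

```bash
# Generate a self-signed certificate and private key for the demo hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo-art.key \
  -out demo-art.crt \
  -days 365 \
  -subj "/CN=demo-art.example.com"

# Inspect the certificate before copying it into Secrets Manager.
cat demo-art.crt
```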


And just like we did before, let’s give it a name that we can remember. So we’ll call it Artifactory-certs. We’ll keep the defaults the same and we’ll create our secret. Okay. So if you’re following along, your Secrets Manager dashboard should look something like this, with two secrets in it. All right. So the next thing we need to do is enable the JFrog modules. You can do this quickly by going to Public extensions inside your AWS CloudFormation dashboard and searching for Artifactory or JFrog. When performing your search, don’t forget to select the Modules radio button and the Third party radio button. I just want to call out one quick fact: in order to use the JFrog module, you need to enable all of the JFrog public extensions.

For this demonstration, I’ve already enabled these public resource types in my AWS account. Now, if you would like to enable these resource types in your account, you just click on the module and you can see things such as the schema, the input parameters, example templates, et cetera, and you just click on the Activate button. Once you click on the Activate button, you’re able to use these inside your AWS account. But again, you need to enable all the JFrog extensions. To save time, I’ve gone ahead and already pre-deployed an EKS cluster inside my AWS account using the official AWS Quick Start.
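As an aside, if you’d rather script the activation than click through the console, the public registry can also be searched and activated from the CLI. This is only a sketch; the type name and publisher ID below are placeholders you’d replace with the values returned by the search.

```bash
# List the third-party JFrog modules available in the public registry.
aws cloudformation list-types \
  --visibility PUBLIC \
  --type MODULE \
  --filters Category=THIRD_PARTY,TypeNamePrefix=JFrog

# Activate one extension; repeat for each JFrog extension you need.
# The type name and publisher ID are placeholders.
aws cloudformation activate-type \
  --type MODULE \
  --type-name "JFrog::Artifactory::Core::MODULE" \
  --publisher-id <jfrog-publisher-id>
```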

Now that we’ve got all our prerequisites set up, let’s hop back over to the JFrog blog site and launch the CloudFormation template containing the modules. So first we’ll click on the link that says install in a new VPC and it takes us to a GitHub repository. Inside that GitHub repository, you’ll see a CloudFormation template that uses these modules. So we’re just going to copy the GitHub URL and use wget to download the CloudFormation template. And once I do that, I’m going to open the CloudFormation template inside my editor.

In this demonstration I’ll be using PyCharm, but you can use any text editor, including Visual Studio Code or Vim or anything alike. Now, with my IDE open, you’ll see a couple of files inside my project directory. If you’re following along, you should have the CloudFormation template and a couple of certificate files; those are the ones we generated for our self-signed certificate. So this CloudFormation template is an excellent start for this demonstration, but I need to modify it just a little bit to make this demo a little quicker. First, we will hardcode values for the bastion and Xray properties to reduce the size of our deployment. While editing this template, I’m going to point out one specific thing. Notice that with the cfn-lint IDE integration we get warnings for things such as unused conditions and parameters. This helps you quickly diagnose errors in your CloudFormation template and allows you to fix them quickly. So I’m actually going to do that right now.
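The same checks also run from the command line if your editor doesn’t have the integration; here is a quick sketch, assuming the downloaded template is named artifactory-eks.template.yaml.

```bash
# Install cfn-lint and lint the downloaded template; the file name is a
# placeholder for whatever the template is called in your working directory.
pip install cfn-lint
cfn-lint artifactory-eks.template.yaml
```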

Now that we’ve edited our CloudFormation template, let’s go ahead and deploy this CloudFormation stack. So in the CloudFormation console I’m going to upload the CloudFormation template that I just edited. Next, I’m going to give it a name that I can remember. I’m going to set a password for my database. And notice that we’re leaving open access for the demo, but it’s recommended to set this to a trusted IP range. The next thing I’m going to do is hop over to the Secrets Manager console and copy down the names of the licenses and the certificates that we created earlier. I’m going to use those as input parameters inside the CloudFormation template.

Now we just continue filling out the input parameters for the CloudFormation template. The next thing we need to do is set the hostname. I’m using the same hostname that I used when generating my self-signed certificate. Next, I’ll create a master key for my Artifactory cluster. I’ll use OpenSSL for this, so I’ll just copy the command from the CloudFormation dashboard and run it in my terminal. And now I’ll copy the master key over. The next thing I need to do is select my availability zones. So I choose us-east-2a and us-east-2b to make it highly available. Finally, I’ll input a password for Xray, the Xray database, and I’ll click next. I’ll leave all the other parameters at their default values and click next. And finally, I’ll acknowledge that I’m using IAM resources inside my CloudFormation template. So I’ll select these three checkboxes and click Create stack.
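For reference, the master key step above boils down to generating a random hex string with OpenSSL, roughly like this. The key length is an assumption; follow whatever the parameter description in the console asks for.

```bash
# Generate a random hex string to use as the Artifactory master key.
openssl rand -hex 32
```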

All right, so this deployment should take about 20 minutes to complete. So while this CloudFormation template is deploying, let’s quickly take a look at a few things. You’ll notice that if we look at the stack we can see its creation progress, and notice that it’s creating a number of resources. Basically, it’s deploying the resources inside the module. And you can see there are a number of resources needed in order to make this a highly available deployment of Artifactory. So we’ll just let this sit here and bake a little bit, and then we’ll come back and take a look at the infrastructure that was created.

Okay, it looks like our infrastructure is just about deployed. So what we’ll do is look here in Outputs and search for the URL of the external load balancer. Notice that we have our load balancer URL here. Once we have this load balancer URL, you can actually put it in a browser and go to that URL. But for this demonstration we’re going to make it highly available and we’re going to make it easy to find. So what we’re going to do is create a DNS record for this inside Route 53. So let’s go ahead and do that.
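A CLI sketch of that Route 53 step might look like the following. The hosted zone ID, hostname, and load balancer DNS name are all placeholders for the values from your account and stack outputs.

```bash
# Point the demo hostname at the external load balancer created by the stack.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "demo-art.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "artifactory-lb-1234567890.us-east-2.elb.amazonaws.com"}]
      }
    }]
  }'
```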


All right. So everything’s set up and it’s deployed. We have our Route 53 DNS name. We have our infrastructure deployed by CloudFormation and modules. So let’s use the DNS name to log into our Artifactory deployment. Here we’ll use our default username and password and we’ll log into the Artifactory UI. Now while here, we need to do a couple things. First, we need to set a new password, so we’ll do that and make it a strong password. The next thing we need to do is set the base URL. To do that, we need to use the same hostname we used to generate our certificates. In this demonstration I used demo-art, and I’m going to give it the rest of the DNS name that we set in Route 53, and then I’ll hit next. Next, I’m going to create a Docker repository. So we’ll go here in the JFrog UI and create our Docker repository.

Now that our Docker repository has been created, let’s create a user. And for this user, again, we’ll set a strong password. Next, we’ll create a group and add our user to that group and click save. Finally, we’ll create permissions and attach those to the group and the user that we just created. Awesome. So now let’s just clone the hello-kubernetes repo. Then we’ll jump over to our command line and use docker login, providing it with the DNS name of our Docker repo and the username and password that we just created.
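That login step looks roughly like this; the registry hostname and user name are placeholders for the values created above.

```bash
# Log in to the Docker repository hosted in Artifactory; you'll be prompted
# for the password set in the Artifactory UI.
docker login demo-art.example.com -u demo-user
```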

Awesome. So now we’ve successfully logged into our Docker repository hosted inside JFrog Artifactory. So now we have our Artifactory infrastructure set up. Let’s jump over to the CloudFormation console and quickly take a look at something. I told you earlier that I deployed an AWS EKS cluster using the official AWS Quick Start. That quick start gave me a number of outputs. So I’m just going to find the EKS cluster name and use that to configure my kubectl command line. Okay, looks like we got an error here, and it looks like my permissions are maybe just a little off on this configuration file. So what I’ll do is run chmod and change the permissions to 600, and that should fix this error. So if you’re following along and you get this error, you can fix it by adjusting the file permissions just a little bit. Now we should be able to configure our kubectl command line authorization.

It looks like we got an authorization error, but we can quickly fix that by passing in a role. As you’ll notice in the documentation, we can just use the --role-arn flag combined with our call to pass in a role. Next we’ll see if we can connect to the cluster by using kubectl get nodes. And you can see that we can connect to our nodes, so great. So far we’ve set up the infrastructure and we are able to connect to our EKS cluster. Now we need to set the image pull secret for the deployment.
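Pulled together, those kubectl setup steps look roughly like this. The cluster name, role ARN, namespace, registry hostname, and credentials are all placeholders, and the namespace creation is an assumption to match the Helm install later in the demo.

```bash
# Point kubectl at the EKS cluster created by the quick start.
aws eks update-kubeconfig \
  --name my-eks-cluster \
  --role-arn arn:aws:iam::111111111111:role/my-eks-admin-role

# Adjust the kubeconfig permissions if kubectl complains about them.
chmod 600 ~/.kube/config

# Verify we can reach the cluster.
kubectl get nodes

# Namespace used later by the Helm install (assumed here).
kubectl create namespace hello-kubernetes

# Create the image pull secret the deployment will use to pull from the
# Artifactory Docker repository.
kubectl create secret docker-registry artifactory-creds \
  --docker-server=demo-art.example.com \
  --docker-username=demo-user \
  --docker-password='<password>' \
  -n hello-kubernetes
```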


Now we’re just verifying that we got the secret name correct. And it looks like it’s created and we’re ready to move on to the next step. So now let’s quickly jump back over to our terminal and build our image. To make this a little easier, we’re actually going to clone this repository. This repository has a Makefile for building the Docker image and also deploying the Docker image. And it also has a Helm chart that we can use to deploy onto our EKS cluster. So if you quickly clone the project and take a look at the files, you’ll see that there’s a Makefile here. We need to provide a couple of environment variables, such as the repository name, the URL, et cetera. Once we’ve done that, we can use this Makefile to build our Docker image.
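The exact Makefile targets and variable names come from the cloned repository, so treat this as a placeholder sketch of the build step rather than the repo’s real interface.

```bash
# Placeholder sketch: export the registry details the Makefile expects,
# then build the image. Variable and target names are assumptions.
export DOCKER_REGISTRY=demo-art.example.com
export DOCKER_REPO=docker-local
export IMAGE_TAG=1.0.0

make build
```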


Looks like our image has been built. Now we need to move on to the next step and push our image to our repository. Okay, so now let’s quickly take a look at our Helm chart. You can see that we have a deployment inside our Helm chart, so we can use this to deploy our sample hello-kubernetes application. What we’ll do is take a look at this deployment template. And as you can see, there are a number of values we can fill in, such as the container image, et cetera. So basically this helps us build a Helm chart we can deploy to our EKS cluster to test out this setup.

Now inside our deployment, we need to set the image pull secret. So we’ll go ahead and set that secret. Looks like it’s not configurable, so we need to add it to the deployment template. Now we just verify that the secret’s correct. And you can see that it is, and we’re ready to start our deployment. So now let’s go ahead and install our Helm chart. We’ll do helm install -n hello-kubernetes, and we’ll call the release hello-world. Looks like that was successful. Now we’ll check the running pods and you can see they’re still using the default image.
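That install and check look roughly like this; the release name, chart path, and namespace are placeholders matching the demo.

```bash
# Install the chart; ./hello-kubernetes is a placeholder for the chart
# directory in the cloned repository, and the namespace is assumed to exist.
helm install hello-world ./hello-kubernetes -n hello-kubernetes

# Check the running pods; at this point they still use the default image.
kubectl get pods -n hello-kubernetes
```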

So let’s update the value to point to Artifactory. In our deployment file, we’ll go ahead and point this Helm chart at our Artifactory deployment. So we’ll change the image here from the default image to the Artifactory repository and then update the Helm release. Let’s do that. We’ll use Vim to quickly edit this. Now let’s just upgrade our Helm deployment using the updated deployment file. Remember, we’re pointing at our Artifactory repository now. So when we upgrade this Helm deployment, all the pods should now be using the updated image. We’ll use kubectl with the command get pods to see if all the pods are using the latest image.
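After the edit, the upgrade and verification are roughly the following; names and namespace are placeholders matching the earlier install.

```bash
# Roll out the edited chart (now pointing at the Artifactory repository).
helm upgrade hello-world ./hello-kubernetes -n hello-kubernetes

# Confirm the pods restarted and the deployment references the new image.
kubectl get pods -n hello-kubernetes
kubectl get deployment -n hello-kubernetes -o wide
```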

And you can see that we are successfully using the new image that’s inside the JFrog Artifactory Docker repository. If we use get pods and get deployment, we can actually see in the deployment that we are indeed using that repository now. Now let’s open up our browser and quickly take a look at the deployment. If we use the URL from our load balancer, we’ll see that the deployment is successful and our Kubernetes deployment is now running.

All right, so let’s just recap what we did. So the first thing we did is we created a highly available and scalable JFrog Artifactory deployment on AWS using the official JFrog module. Then once we deployed that, we actually connected that to our EKS cluster and finally deployed that image using Helm all in less than 30 minutes. Thank you for spending the time to attend this presentation and please follow me on social media @ineedhelpbrian. So if you would like to learn more about JFrog’s new module or the CloudFormation registry, please see their new blog. So thank you once again and I hope you have an awesome, awesome day.

 
