Safdar Zaman
Technical Account Manager, AWS

In this talk, we will cover end-to-end secure DevOps practices on AWS across cross-region and cross-account pipelines.

We will also cover best practices around secure configuration, encryption, segregation, integration, and credentials management using AWS services with JFrog.

We will also present common DevOps security challenges and best practices around them.


The JFrog Platform – Powering Digital Champions

Video Transcript

Hello, and welcome to the session. In today’s session, we’ll be talking about DevSecOps on AWS with the JFrog Platform. My name is Safdar Zaman. I’m a technical account manager at AWS with over 20 years of experience in IT governance, DevOps, security, program management, cloud and digital transformations. In today’s agenda, we will be talking about DevSecOps governance, secure infrastructure automation, the CloudFormation public registry, monitoring and logging, software bill of materials, secure architecture, and some documentation references. DevSecOps governance is the combination of security processes, procedures, structures, rules, and tools, which work together to provide automated, repeatable, and secure DevOps.

When we talk about secure DevOps, policies and procedures are an important part of this practice. When we deploy our pipelines in AWS, we have service control policies (SCPs). Service control policies offer central control over the maximum available permissions for all the member accounts in our organization. For example, you can limit the regions where you want your DevOps pipelines to operate, or limit the instance types your developers can use in member accounts.
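As a minimal sketch of the region guardrail described above, the following Python dict represents an SCP that denies actions outside two illustrative regions; the region list and the exempted global services are assumptions for this example, not recommendations.

```python
import json

# Sketch of a service control policy (SCP) denying all actions outside
# an allow-list of regions. The regions chosen here are illustrative.
scp_region_restriction = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "NotAction": [
                # Global services that must stay reachable from any region
                "iam:*",
                "organizations:*",
                "sts:*",
                "cloudfront:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(scp_region_restriction, indent=2))
```

Attached to an organizational unit, a deny statement like this caps permissions for every member account regardless of their local IAM policies.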

We can also enforce governance mechanisms; for example, we can prevent anyone in a member account from switching off or disabling CloudWatch or CloudTrail. We can enforce tags with service control policies so that the tracking mechanism is uniform and standardized across the organization. The second important part is the roles, which we need for secure DevOps. In AWS we provide IAM roles. When we use a role in our pipelines, we do not have to distribute long-term credentials, such as usernames and passwords or access keys, to our EC2 instances. This is all taken care of by the role, which supplies temporary credentials that applications can use when they make calls to other AWS resources.
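The CloudTrail guardrail mentioned above can be sketched as another SCP statement; the action names below are the real CloudTrail API actions, while the policy as a whole is an illustrative fragment.

```python
# Sketch of an SCP statement preventing member-account principals from
# stopping, deleting, or modifying CloudTrail trails.
deny_cloudtrail_tampering = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectCloudTrail",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
            ],
            "Resource": "*",
        }
    ],
}
```

Because SCP deny statements cannot be overridden from inside a member account, even an account administrator cannot switch the trail off.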

We need to have a DevOps developer role, a DevSecOps architect role, and a DevOps engineer role. Our architecture has to be secure based on best practices to enable all of these guardrails. Our pipelines need to have quality, functional, and security testing, and monitoring enabled. When we talk about AWS CloudWatch, Config, and CloudTrail, these services provide us a very detailed and comprehensive view of our resources. We can create real-time or near-real-time alarms. We can create notifications through the SNS service, which can then be cascaded to other AWS services, such as SQS queues or Lambda functions. We have completely built-in services as part of AWS: CodePipeline, CodeCommit, CodeBuild, and CodeDeploy.
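The alarm-to-SNS flow described above can be sketched as the parameters for one CloudWatch alarm. The alarm name and SNS topic ARN are placeholders; the `AWS/CodeBuild` namespace and `FailedBuilds` metric are the documented CodeBuild metrics. With real values, this dict could be passed to boto3 as `cloudwatch.put_metric_alarm(**alarm_params)`.

```python
# Hypothetical CloudWatch alarm: notify an SNS topic when any CodeBuild
# build fails within a five-minute window.
alarm_params = {
    "AlarmName": "pipeline-failed-builds",   # illustrative name
    "Namespace": "AWS/CodeBuild",
    "MetricName": "FailedBuilds",
    "Statistic": "Sum",
    "Period": 300,                           # evaluate over 5 minutes
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # Placeholder topic ARN; the topic can fan out to SQS or Lambda.
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:devops-alerts"],
}
```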

We fully support JFrog Artifactory and JFrog Xray. JFrog Artifactory is an enterprise repository manager which is Kubernetes-ready and supports containers, Docker, and Helm charts. Xray is an important part of the JFrog platform. Xray works with JFrog Artifactory to perform universal analysis of binary software components at any stage of the application life cycle. AWS provides a Quick Start which deploys JFrog Artifactory and Xray on an Amazon Virtual Private Cloud with load balancers, S3, and NAT gateways in a highly available and resilient way.

DevSecOps infrastructure automation is an important part of the security of our DevOps. For configuration management, we have the AWS Config service, which provides a detailed view of the configuration of the AWS resources used in our DevOps pipelines. This service provides us a view of how the resources are related to one another, how they are configured, and what their historical changes are. AWS Config can evaluate the configuration and settings of the DevOps resources across accounts against internal best practices, industry guidelines, and regulations. We provide managed rules with the AWS Config service. These are built-in rules which can be used as-is, and they are all based on best practices.

A few managed rules I would like to highlight can play a very important role in the DevSecOps process. For example, required-tags: this rule checks if your resources have the specified tags. We have another managed rule, ec2-ebs-encryption-by-default: this rule checks that Amazon Elastic Block Store (EBS) encryption is enabled by default. In terms of CodeBuild, we have certain rules which can help us secure our DevOps pipelines. For example, codebuild-project-envvar-awscred-check: this rule checks whether the project contains environment variables like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Then we have the codebuild-project-source-repo-url-check managed rule: it checks if the GitHub or Bitbucket source repository URL contains either personal access tokens or a username and password. We can develop our own custom rules and add them to AWS Config. We can associate each custom rule with an AWS Lambda function, which contains the logic that evaluates your AWS resources and takes the appropriate actions. These rules can be triggered either on a schedule or on any change to your infrastructure. The AWS Config service simplifies audit and compliance by providing a detailed historical change record of the DevOps resources. You can have a complete record of what changed and when it was changed, and when you are using CloudTrail, it can give you further information on who changed it and other related metadata.
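The custom-rule idea above can be sketched as the evaluation logic inside such a Lambda function. The required tag keys here are an assumption for illustration; a real handler would additionally report the result back with the Config `PutEvaluations` API.

```python
# Minimal sketch of custom AWS Config rule logic: check that a resource
# carries a set of required tag keys. The keys below are illustrative.
REQUIRED_TAGS = {"CostCenter", "Owner"}

def evaluate_compliance(configuration_item):
    """Return COMPLIANT if all required tag keys are present on the resource."""
    tags = configuration_item.get("tags") or {}
    missing = REQUIRED_TAGS - set(tags)
    return "COMPLIANT" if not missing else "NON_COMPLIANT"
```

Config invokes the Lambda with a configuration item for each resource (on change or on schedule), so the same function evaluates the whole fleet uniformly.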

The second very important part of secure DevOps is infrastructure as code. DevOps infrastructure which is built by code is tracked, validated, and configured in a repeatable way. It also helps us make high-velocity changes with enforced compliance. We can use the AWS CloudFormation public registry to automate JFrog resource deployment and governance. In AWS we have the CloudFormation public registry, which contains secure and trusted CloudFormation modules published by JFrog and verified by AWS for deploying highly available, multi-AZ Artifactory and Xray modules, along with EC2 instances, databases, auto scaling groups, load balancers, S3 buckets, and others.
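Before a third-party extension from the public registry can be used in an account, it is activated; as a sketch, the dict below holds the parameters for CloudFormation's `ActivateType` API (via boto3, `boto3.client("cloudformation").activate_type(**activate_params)`). The ARN is a placeholder, not a real JFrog extension ARN.

```python
# Hypothetical parameters for activating a public-registry extension in
# your account; replace the placeholder ARN with the extension's real one.
activate_params = {
    "Type": "MODULE",
    "PublicTypeArn": "arn:aws:cloudformation:us-east-1::type/module/EXAMPLE",  # placeholder
}
```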

AWS CloudFormation integrates with AWS CloudTrail, which captures actions taken by a user, role, or AWS service in CloudFormation. CloudTrail captures all API calls for CloudFormation as events, including calls from the CloudFormation console and code calls to the CloudFormation APIs. CloudFormation integrates with AWS services like AWS Systems Manager Parameter Store and AWS Secrets Manager for secrets management. Monitoring and logging is a very important part of secure DevOps. Amazon CloudWatch is a monitoring and management service that provides data and infrastructure metrics from more than 70 AWS services.
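The Secrets Manager integration mentioned above works through CloudFormation "dynamic references", which are resolved at deploy time so the secret never appears in the template. This sketch builds such a reference string for a hypothetical secret named `prod/artifactory/db`.

```python
# Build a CloudFormation dynamic reference to a Secrets Manager secret.
# The secret name and JSON key below are illustrative placeholders.
def secretsmanager_ref(secret_id, json_key):
    """Return a {{resolve:secretsmanager:...}} reference string."""
    return f"{{{{resolve:secretsmanager:{secret_id}:SecretString:{json_key}}}}}"

# Could be used, for example, as an RDS MasterUserPassword property value.
db_password_ref = secretsmanager_ref("prod/artifactory/db", "password")
print(db_password_ref)
```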

It provides metrics at up to one-second granularity and retains metrics and log data for up to 15 months. CloudWatch Container Insights is another important functionality, which monitors and alerts on container metrics from Amazon ECS, Amazon EKS, AWS Fargate, and standalone Kubernetes. We can set alarms on metric thresholds and send notifications to take actions. CloudWatch Events provides us a near-real-time stream of system events and allows us to respond quickly to operational changes and take corrective actions, like invoking AWS Lambda functions or notifying an Amazon SNS topic. Container Insights is a very important part: we can set alarms on compute metrics, for example to trigger auto scaling policies, and it gives us the ability to stop, terminate, reboot, and recover any Amazon EC2 instance.
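As a concrete example of the event stream described above, the following dict is a CloudWatch Events (EventBridge) pattern that matches failed CodeBuild builds; `aws.codebuild` and the `CodeBuild Build State Change` detail-type are the documented event fields, and a rule with this pattern could target a Lambda function or SNS topic.

```python
# Event pattern matching CodeBuild builds that enter the FAILED state.
build_failed_pattern = {
    "source": ["aws.codebuild"],
    "detail-type": ["CodeBuild Build State Change"],
    "detail": {"build-status": ["FAILED"]},
}
```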

For Amazon ECS clusters, you can see compute metrics from your tasks and services for service auto scaling. AWS CloudTrail is an AWS service that helps you enable governance, compliance, operational auditing, and risk auditing of your AWS account. It captures user, role, or AWS service actions on the DevOps resources. It assists in compliance reports required by internal policies and external regulations. It has a capability for log file integrity validation to ensure that our log files are secure and have not been tampered with.

It works with CloudWatch Logs and alarms to monitor your trail logs, send notifications, and take other actions. In the case of CodeBuild, for example, calls to the CreateProject, StartBuild, and UpdateProject actions generate entries in the CloudTrail log files. In the case of CodeCommit, calls to ListRepositories, CreateRepository, and other repository actions generate entries in the CloudTrail log files. Git client calls to git pull and git push also generate entries in the CloudTrail log files. With CloudTrail, we can get information such as the source IP address of the request, who made the request, the time of the request, and other useful information.
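CloudTrail delivers its log files as JSON documents with a `Records` list; the fields named in the paragraph above can be read directly from each record. The sketch below parses one sample record whose values are made up for illustration.

```python
import json

# One fabricated CloudTrail record, shaped like a real log file entry.
sample_log = json.loads("""
{"Records": [{
  "eventTime": "2021-06-01T12:00:00Z",
  "eventSource": "codebuild.amazonaws.com",
  "eventName": "StartBuild",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "devops-engineer"}
}]}
""")

# Extract who made each call, from where, and when.
for record in sample_log["Records"]:
    identity = record["userIdentity"]
    who = identity.get("userName", identity["type"])
    print(f'{record["eventTime"]} {who} called {record["eventName"]} '
          f'from {record["sourceIPAddress"]}')
```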

So CloudWatch and CloudTrail work together, and they provide us real-time or near-real-time monitoring capability over our DevOps resources. Software bill of materials is a very important concept in secure DevOps: a software bill of materials contains the list of ingredients that make up the software. So the SBOM, the software bill of materials, has become a core best practice for secure DevOps, as well as an increasingly common regulatory requirement.

An SBOM contains information on the makeup of the software developed and used, including: the list of libraries and modules used in the software; complete information on the CI/CD tools used, along with the complete build, deployment, and approval history; information on the SDLC cycles, stages, environments, and settings used; information on free and open source modules or libraries used in the software; and information on security and vulnerability findings, patching, licensing, and dependencies. With JFrog Artifactory, JFrog Xray, and JFrog Distribution hosted on AWS, you can easily get all the granular data you need for an SBOM. Organizations which have this data can plan their patching and security vulnerability mitigations, and know the vulnerabilities, capabilities, weaknesses, and strengths of the open source software used in their software packages.
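As a toy illustration of the document described above, here is a minimal CycloneDX-style SBOM as a Python dict. The component names, versions, and license are made up for the example; a real SBOM generated from Artifactory and Xray data would carry much richer metadata (hashes, dependencies, vulnerability findings).

```python
# Minimal CycloneDX-shaped SBOM sketch with illustrative components.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {
            "type": "library",
            "name": "log4j-core",          # illustrative component
            "version": "2.17.1",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
        {
            "type": "library",
            "name": "jackson-databind",    # illustrative component
            "version": "2.13.2",
        },
    ],
}

# A simple audit question an SBOM answers: which versions of a library ship?
versions = [c["version"] for c in sbom["components"] if c["name"] == "log4j-core"]
```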

So software bill of materials is a very important part of the secure DevOps process, and it gives us confidence in how the software was made, with a complete record of the approvals, life cycles, software components, libraries, and third-party dependencies used in the applications. This is a secure architecture using JFrog Artifactory and JFrog Xray with AWS, with components that are deployed on AWS, for example CodeCommit, CodeBuild, and CodeDeploy. All of the components are compliant with industry standards, including ISO 27001 and others.

With S3, we have encryption capability with KMS, and in RDS we also have this capability, not only at rest but also in transit with SSL and TLS. We can enable multi-AZ deployments for higher availability. We can deploy containerized applications to Amazon EKS using JFrog Artifactory. Xray is a universal software composition analysis solution that natively integrates with Artifactory and is supported and hosted on AWS. Xray provides security capabilities, for example identifying security vulnerabilities and license violations, and deep recursive scanning, which scans artifacts, builds, and release bundles in your system.

For Java applications, it can also analyze all the JAR files. With continuous impact analysis, Xray recognizes how an issue in one component can impact other components. It has a vulnerability database, and we can run on-demand binary scans. S3 works with Artifactory and provides unlimited storage, scalability, durability, availability, encryption, and DR capability. With bucket policies we can enable cross-account access and encrypted connections over HTTPS/TLS. We can enforce encrypted connections for S3 using the aws:SecureTransport condition in our policies. The object locking capability in S3 helps us to provide write-once-read-many enforcement. We can implement Amazon Macie on S3 to find personally identifiable information and to find any data categories which are defined by GDPR and HIPAA standards.
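The aws:SecureTransport enforcement mentioned above can be sketched as an S3 bucket policy statement that denies any request not made over HTTPS; the bucket name is a placeholder.

```python
# Sketch of an S3 bucket policy denying non-TLS access to an
# illustrative artifact bucket.
enforce_tls_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-artifact-bucket",      # placeholder bucket
                "arn:aws:s3:::example-artifact-bucket/*",
            ],
            # Matches requests made over plain HTTP.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
```

Note that the policy must cover both the bucket ARN and the object ARN (`/*`) so bucket-level and object-level operations are both forced onto TLS.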

With Secrets Manager, we can protect and retain secrets like database credentials (for example RDS database credentials), API keys, and license keys. Secrets Manager not only stores the secrets securely, it also takes care of secret rotation. AWS Config provides us a detailed view of the configuration history of the AWS resources, and you can see how configurations and relationships change over time. At the end, I would like to share some of the documentation references related to security on CodeCommit, CodeBuild, and CodeDeploy, and for JFrog. Thank you so much for your time. Take care.


Release Fast Or Die