In every workplace, most work gets done at the most cluttered desks. Yet the business requires an orderly front office to run efficiently. It’s much the same with your DevOps pipeline environments: the rough-and-tumble process of innovating code must ultimately produce cleanly released applications.
Continuous integration means that developers perform many builds each day, but few of those builds will advance to production repositories. Working environments for dev and test produce a high proportion of disposable builds and metadata – clutter that doesn’t need to be retained long-term.
While dev/test environments are fast and messy, production repositories need to be clean and orderly. Therefore, it’s a very common practice to divide SDLC pipelines across JFrog Platform deployments for each of these purposes.
How It Works
This strategy of separate, yet linked environments is an ideal way to apply JFrog’s unique ability to help make your operations cloud nimble — running pipeline segments wherever and however you need them to be hosted.
Let’s look at how several of our customers take advantage of the way JFrog DevOps Platform deployments interoperate across clouds: developer teams work in their own dev/test environments and promote only likely release builds to a more rigorously maintained set of Artifactory repositories for production testing and delivery.
In addition to being isolated, a busy dev/test environment must also be speedy, to maintain maximum development velocity. That requires each team’s dev/test environment to be both highly available and physically close to the team it serves.
This can be fulfilled with JFrog Platform deployments in any hosting mode, self-hosted or SaaS. For maximum speed and security, many choose to maintain self-managed, high-availability JFrog Platform deployments in their onsite regional datacenters. Geographically distributed teams share artifacts, packages, and builds through Artifactory federated repositories.
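As a concrete sketch, a federated repository is declared through the Artifactory repository configuration REST API with `rclass` set to `federated` and a list of member instances. The repository key and member URLs below are placeholders:

```json
{
  "key": "dev-maven-local",
  "rclass": "federated",
  "packageType": "maven",
  "members": [
    { "url": "https://jfrog-us.example.com/artifactory/dev-maven-local", "enabled": true },
    { "url": "https://jfrog-eu.example.com/artifactory/dev-maven-local", "enabled": true }
  ]
}
```

Creating the repository with a matching member list on each participating instance (via `PUT /api/repositories/dev-maven-local`) gives bidirectional mirroring of artifacts and metadata between the sites.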
All files in these dev/test repositories are considered short-term artifacts, so they can be maintained with an aggressive clean-up strategy.
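For example, such a clean-up strategy might be expressed as a JFrog CLI file spec that selects everything in a dev/test repository older than a retention window; the repository name and 14-day window here are assumptions for illustration:

```json
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "dev-generic-local",
          "created": { "$before": "14d" }
        }
      }
    }
  ]
}
```

Run on a schedule with `jf rt del --spec cleanup-spec.json`, this deletes the matched short-term artifacts; the AQL criteria can be tightened to spare artifacts with certain properties (for example, builds still under test).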
Developer builds that have passed initial tests in dev/test can then be pushed (promoted) to repositories in the separate production environment for further validation.
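One common way to do this is the Artifactory Build Promotion API (or the equivalent `jf rt build-promote` CLI command). The request body below is a sketch; the repository names and status label are illustrative:

```json
{
  "status": "release-candidate",
  "sourceRepo": "dev-maven-local",
  "targetRepo": "prod-maven-local",
  "copy": true,
  "dependencies": false,
  "comment": "Passed dev/test gate"
}
```

POSTed to `/api/build/promote/<build-name>/<build-number>`, this copies the build’s artifacts into the production-side repository while recording the promotion status in the build info.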
In this example, the production environment is a JFrog SaaS account hosted in the company’s choice of cloud provider. From this common JFrog account, releases can be pulled for deployment to Kubernetes clusters, or bundled and distributed to other consumption points. The repositories in this JFrog deployment are for long-term artifacts that will be preserved in-place and/or archived through Artifactory cold artifact storage.
Cloud Nimble Variation
In a cloud nimble environment, the roles of cloud and on-prem can be easily reversed in a functionally equivalent architecture. In this example (shown without security points), the dev/test deployments are SaaS accounts federated across different cloud regions (even in different cloud service providers, if desired), while a common production deployment is self-hosted on-premises.
With a software supply chain spread across Artifactory repositories in multiple deployments, how will you maintain a consistent security posture across them all? JFrog Xray’s deep integration with Artifactory helps provide continuous vigilance against security vulnerabilities and license policy violations, with consistency across all domains. End-to-end protection is achieved through attention at key points in the software development lifecycle (SDLC).
In Development – Shifting security vigilance left, to the time when code is created, can save the costs of later remediation. With Xray as part of each dev/test region’s deployment, developers have geolocal access to JFrog’s enhanced database of CVEs, as well as the organization’s private database of custom vulnerability issues for first- or third-party packages.
With a connection to Xray, JFrog IDE integrations can alert a developer within their editor when an open source package referenced by the source code is recognized by Xray as vulnerable. Developers can also use Xray to scan local files on demand for references to vulnerable dependencies. With these and other solutions, developers can make risk-based decisions about what components they use.
In the SDLC Pipeline – Federating repositories shares binaries with rich metadata (build info) across all sites, and enables full traceability, from any domain, of every shared component’s provenance and usage – including those of third-party binaries revealed by Xray scans. This bidirectional synchronization helps these multi-cloud deployments act as a single, comprehensive software supply chain that’s accessible and governable across all sites. Shared metadata with every binary enables production of a software bill of materials (SBOM) at any point in the pipeline, from any physical site.
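As a sketch of what such an SBOM request might look like, Xray’s Export Component Details REST endpoint can emit SPDX or CycloneDX documents for a component; the component name here is a placeholder, and exact field support varies by Xray version:

```json
{
  "component_name": "my-app:1.0.0",
  "package_type": "docker",
  "spdx": true,
  "spdx_format": "json",
  "cyclonedx": true,
  "cyclonedx_format": "json"
}
```

The response bundles the requested SBOM formats along with any selected vulnerability and license detail for the component.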
These traceability and SBOM capabilities are what empower security and QA teams to monitor for and remediate vulnerabilities as applications progress through the multi-site SDLC. They can set up Xray policies and Xray watches to flag vulnerable packages used in dev/test builds, and ensure they are evaluated and mitigated before advancing to production.
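A policy of that kind might be defined through the Xray REST API roughly as follows; the policy name, severity threshold, and actions are illustrative choices, and a watch would then attach the policy to the specific dev/test repositories or builds it should monitor:

```json
{
  "name": "dev-test-security-gate",
  "type": "security",
  "rules": [
    {
      "name": "fail-on-high-severity",
      "priority": 1,
      "criteria": { "min_severity": "High" },
      "actions": {
        "fail_build": true,
        "block_download": { "active": true, "unscanned": false }
      }
    }
  ]
}
```

POSTed to `/api/v2/policies`, this causes builds containing high-severity (or worse) vulnerabilities to fail their Xray scan step, so they are evaluated before any promotion to production.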
In Production – Each dev/test application promoted to repositories in the production environment includes its metadata, enabling production of a standard-format SBOM through Xray. An SBOM is an increasingly important deliverable, as a growing number of companies adopt the U.S. government standard and require one from their software vendors.
By the time the production environment receives an application, the Xray-enabled security vigilance in dev/test environments is likely to have already cleared it of unacceptable vulnerability and license compliance risks. Xray scanning of production repositories provides a final round of oversight, alerts SecOps administrators to newly discovered (“zero-day”) vulnerabilities in previously validated releases, and helps assure a swift remediation response.
Multi-Cloud Development to Delivery
A multi-cloud or hybrid segmentation of JFrog Platform deployments can offer several important benefits:
- Provides development teams a fast, local working environment.
- Enables maximum development velocity.
- Supports frequent developer builds.
- Permits an aggressive clean-up policy in dev/test environments.
- Limits the production environment to builds that are strong candidates for release.
- Helps keep an orderly and manageable production environment.
- Limits archiving to production-quality builds and metadata.
When you’re cloud nimble, you can host either your dev/test or your production systems wherever suits you best – in a JFrog-managed cloud account, or in a self-managed system in the cloud or on-prem – and still interoperate between JFrog Platform deployments as a single, unbroken software supply chain.
Need to see for yourself? Schedule a demo and we’ll show you how.