State of DevOps Cloud Solutions 2020

Every company claims to have a cloud/hybrid DevOps platform that enables and empowers developers working remotely. Time to bring order to the chaos.*

In today’s digital age, software drives business innovation. The maturity, speed, quality, and security of your software delivery processes have become key differentiators in this software-driven economy. The COVID-19 reality has pushed the market toward faster adoption of remote access to IT for developers. Software vendors are therefore in a race to enable and expand cloud DevOps solutions. Increasingly, teams seek to adopt end-to-end DevOps platforms or tool bundles that decrease their reliance on multiple vendors and their ownership of tooling infrastructure. But what should an enterprise demand from cloud DevOps tooling, and what key differentiators should be considered?

Here, you’ll get a high-level overview of the cloud DevOps platform solutions available today, and what you should expect from vendors in your cloud DevOps journey.

*This information is based on research performed by JFrog and reflects the solutions available in the market as of April 2020 to the best of our knowledge.

6 Things to Consider for Enterprise Cloud DevOps Transformation

While JFrog has cloud offerings (on each of the public cloud providers and in over 20 regions worldwide) as a solution to these needs, here are 6 things you should consider when engaging with any individual vendor for a DevOps digital transformation to the cloud.

 —- End-to-End —-

Point solutions can do a fine job at one or a few specific things. But an end-to-end DevOps platform in the cloud should reach all the way to the runtime, with a single strategy and a single point of support. Integrating legacy DevOps systems on your own, including for CI/CD, tends to produce spaghetti solutions. Keep it simple and speedy instead. Importantly, end-to-end also means security baked in for DevSecOps functionality; without it, a platform is likely a non-starter for you.

Developers today are looking for an E2E solution and an “all-in-one” user experience; however, this doesn’t mean they will compromise on a “best of breed” approach. DevOps platform providers should therefore offer a Class A tool stack as part of their platform, along with strong ecosystem integrations and plugins that make developers’ lives easier and respect the freedom of choice they want.

An E2E platform also requires a vendor commitment to a true “one-browser solution,” not simply bundled tools that happen to be integrated. This ensures users get a full experience from a single UI that connects all services.

 —- Universal Package Management —-

All of the metadata and dependencies across your myriad technologies must be supported (Docker, npm, Maven, PyPI, Go, NuGet, Conan, etc., as well as the 20+ more types you may find in your portfolios). Point solutions for a single or limited set of technology types will only frustrate your development teams and force your organization to adopt multiple solutions and repositories. Large enterprises have not only myriad technologies but also a long legacy of deployed, mission-critical applications that must be supported at scale with local, remote, and virtual repositories.
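To make the repository types concrete, here is a minimal Python sketch of how local, remote, and virtual repositories might be provisioned over a REST API shaped like Artifactory’s repository endpoint. The instance URL and credentials are placeholders, and the exact payload fields should be verified against your vendor’s documentation.

```python
import requests

BASE = "https://mycompany.jfrog.io/artifactory"   # placeholder instance URL
AUTH = ("user", "api-key")                        # placeholder credentials

def create_repo(key: str, config: dict) -> None:
    """Provision a repository with a PUT to api/repositories/<key>."""
    resp = requests.put(f"{BASE}/api/repositories/{key}", json=config, auth=AUTH)
    resp.raise_for_status()

# Local: holds the packages your own teams publish.
create_repo("npm-local", {"rclass": "local", "packageType": "npm"})

# Remote: proxies and caches an upstream registry for fast, reliable resolution.
create_repo("npm-remote", {"rclass": "remote", "packageType": "npm",
                           "url": "https://registry.npmjs.org"})

# Virtual: aggregates local + remote behind one URL, so developers
# configure a single endpoint for every npm package they need.
create_repo("npm", {"rclass": "virtual", "packageType": "npm",
                    "repositories": ["npm-local", "npm-remote"]})
```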

Modern DevOps teams support the entire organization and enable easy onboarding of more and more entities, whether added organically or through acquisition, to work with a single source of truth for all binary types.

 —- Fully Hybrid: 100% Same Here, and There —-

This isn’t about having a cloud solution or not. Many companies that offer cloud solutions don’t have corresponding on-prem/self-hosted options, or vice versa. Others have completely separate solutions with different features and methods that don’t talk to each other, requiring you to learn a new product, user experience, and user interface. As you transition to a cloud environment, cloud and on-prem need to function the same way 100% of the time to ensure a smooth transition (for example, a cloud migration requires the same tools and functions in both places while you keep the business running). This is more complex than it sounds, requiring the same code base, same QA processes, same architecture, and more across environments. You should also look for providers that support multiple regions and clouds for a truly hybrid, accessible solution.

 —- Multi-cloud —-

Most of JFrog’s enterprise customers have a very clear policy when it comes to a single provider: they will not risk hosting their DevOps and remote IT workspaces on only one cloud. While you might think one cloud is enough, you should select a vendor that provides services across and between all major clouds. Keep your options open and your peace of mind intact by avoiding vendor lock-in and gaining maximum resilience. (This also supports migration strategies between clouds, if required.) This “DevOps democracy” approach gives your organization an elastic workspace that can be expanded and scaled to achieve true DR setups.

 —- Security —-

DevSecOps isn’t just a buzzword. It’s a requirement. Security as an integrated part of the pipeline, supporting all of your package types, is now a line item for many companies. Shifting left applies to the entire DevOps organization, so you should consider tools that make it simple. For example, cloud DevSecOps tools should make it possible to block artifact downloads (or break builds) when vulnerabilities are found, which requires tight integration all the way into the repository. Security policies should be easy to define and manage across your repositories. And any cloud security solution should let you easily identify the impact of a vulnerability across the entire DevOps pipeline.
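As an illustration of “block or break,” here is a hedged Python sketch that creates a security policy through a REST endpoint modeled on JFrog Xray’s policies API. The URL, credentials, and field names are assumptions meant to show the shape of such a policy, not a verbatim recipe.

```python
import requests

XRAY = "https://mycompany.jfrog.io/xray"   # placeholder instance URL
AUTH = ("user", "api-key")                 # placeholder credentials

# One policy, one rule: any artifact carrying a High (or worse) vulnerability
# is blocked from download, and any build that produces one fails.
policy = {
    "name": "block-high-severity",
    "type": "security",
    "rules": [{
        "name": "high-and-above",
        "priority": 1,
        "criteria": {"min_severity": "high"},
        "actions": {
            "block_download": {"active": True, "unscanned": False},
            "fail_build": True,
        },
    }],
}

resp = requests.post(f"{XRAY}/api/v2/policies", json=policy, auth=AUTH)
resp.raise_for_status()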

One often-overlooked category in DevSecOps is open source license compliance. Packages may not only contain vulnerabilities; they may also carry unknown licensing issues. Solutions should provide scanning and remediation for both types of policies, as sketched below.
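Under the same assumed policy schema as the security example above, a license rule might look like the following; the banned_licenses criterion is purely illustrative.

```python
# Assuming the same (hypothetical) policy schema as the security sketch,
# a license policy could fail builds that pull in unapproved copyleft code.
license_policy = {
    "name": "no-unapproved-copyleft",
    "type": "license",
    "rules": [{
        "name": "banned-licenses",
        "priority": 1,
        # Illustrative criterion: block specific license identifiers.
        "criteria": {"banned_licenses": ["GPL-3.0", "AGPL-3.0"]},
        "actions": {"fail_build": True},
    }],
}
```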

Be sure any choice you make in this area satisfies all of these needs and gives developers full visibility, even inside their IDEs, to achieve maximum efficiency and peace of mind.

The world of containers brings its own challenge. Your DevSecOps tools should be able to “open” any container, scan several layers deep, and examine every package for dependencies that include vulnerabilities. This can only be achieved through a strong integration with your binary repository and artifact management tool.
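What “opening” a container means in practice can be sketched in a few lines of Python against a docker-save style image tarball: each layer is itself an archive, and package manifests can hide several levels deep. A production scanner indexes far more formats and recurses into nested archives as well; this sketch only shows the layered structure.

```python
import json
import tarfile

# Package manifests we recognize inside image layers; a real scanner
# handles many more formats and resolves their full dependency trees.
MANIFEST_NAMES = ("package.json", "pom.xml", "requirements.txt")

def scan_image(image_tar: str):
    """Walk every layer of a docker-save style image tar and yield
    the package manifests found inside each layer's filesystem."""
    with tarfile.open(image_tar) as image:
        manifest = json.load(image.extractfile("manifest.json"))
        for layer_name in manifest[0]["Layers"]:       # tier 1: the layers
            layer = tarfile.open(fileobj=image.extractfile(layer_name))
            for member in layer.getmembers():          # tier 2: files in a layer
                if member.name.endswith(MANIFEST_NAMES):
                    yield layer_name, member.name      # tier 3: resolve deps here

for layer, path in scan_image("myapp.tar"):
    print(f"{layer}: found {path}")
```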

A scanning solution is only as good as the vulnerability data that drives it. Cloud or self-hosted, you can’t compromise on a database that includes only some of the vulnerability information reported in the market. A DevOps platform should always strive to stay ahead of any hacker and secure all software packages in the pipeline from build to production.

 —- Cloud-ready CI/CD —-

Tooling at scale that can centralize and streamline all of your processes is a modern necessity. Managing automation per project and creating islands of DevOps pipelines leaves teams inefficient and frustrated.

Traditionally, application development teams were responsible for creating localized CI/CD automation. This approach provides short-term gains for individual teams but becomes a constraint in the long run, since enterprises get no economies of scale across their CI/CD implementations.

This is amplified as enterprises move towards heterogeneous modern architectures, smaller deployment units, rapid release cycles, and multi-cloud topologies. In this brave new world, building ad-hoc, custom-scripted pipelines for every deployment unit is not a scalable approach, and leads to automation that is expensive to create and maintain, which ends up being a barrier for change.

A modern CI/CD provider should support and scale enterprise-wide workflows (aka the “software supply chain”) that span all popular technologies and architectures of today, and keep pace with technical evolution. It should provide a way to assemble pipelines from pre-packaged building blocks (think Lego bricks) rather than developing them from scratch, as sketched below. These pipelines can be templatized and shared as libraries across the organization, building a knowledge base that constantly grows and improves. In other words, your CI/CD provider should give you economies of scale over time in the cloud and help you ship code faster.
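Real pipeline building blocks are usually declared in YAML, but the composition idea can be sketched in Python: small, reusable steps and a shared template that any team instantiates with its own parameters. The step names and print placeholders below are purely illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[], None]   # the step's action (a placeholder here)

def docker_build(image: str) -> Step:
    return Step(f"build {image}", lambda: print(f"docker build -t {image} ."))

def security_scan(image: str) -> Step:
    return Step(f"scan {image}", lambda: print(f"scanning {image} for CVEs"))

def docker_push(image: str, registry: str) -> Step:
    return Step(f"push {image}", lambda: print(f"docker push {registry}/{image}"))

def container_pipeline(image: str, registry: str) -> List[Step]:
    """A shared template: every team gets the same build -> scan -> push
    flow by passing parameters, instead of scripting it from scratch."""
    return [docker_build(image), security_scan(image), docker_push(image, registry)]

# Two teams instantiate the same template with different parameters.
for step in container_pipeline("payments:1.4", "registry.example.io"):
    step.run()
for step in container_pipeline("billing:2.0", "registry.example.io"):
    step.run()
```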

Comparing Cloud-Based End-to-End DevOps Solutions

There are myriad tools available to practitioners today, and choosing the best can become a trial-and-error exercise that takes too much time and energy if you don’t do your homework. JFrog currently has cloud offerings on (and partnerships with) AWS, Azure, and GCP, which let you focus flexibly on your end-to-end DevOps pipeline. Below, you’ll find specifics on how “going it alone” with certain solutions could leave your goals unmet or incomplete, and how a fully integrated cloud approach may meet your needs more completely.

Let’s take a look at some of the most popular or familiar options available in the market today, and compare those with what JFrog is doing. Spoiler: we’ll be drilling into each of these solutions individually in the coming weeks.

 —- JFrog vs GitHub —-

GitHub, from Microsoft, offers an end-to-end solution that encompasses source code, (some) package management, and CI/CD pipelines, called GitHub Actions. GitHub is stronger on SaaS than on its self-hosted installation (GitHub Enterprise). Aside from supporting self-hosted runners, GitHub Packages and GitHub Actions are currently available only as SaaS, as are its automated security update capabilities. However, Actions and Packages offer only basic API support, and the CLI is still in beta.

From a repository management perspective, GitHub is not a universal solution, supporting only 6 package types (the JFrog Platform supports 25+), with no global search capabilities or support for virtual/remote repos. Its security capabilities are still nascent: it doesn’t offer deep recursive scans of artifacts and container images for vulnerabilities, doesn’t address license compliance issues, and doesn’t let you configure security policies to trigger different actions.

Finally, GitHub’s CI/CD solution doesn’t support pipeline visualization, and its tight coupling between the source code repository and pipeline configuration makes enterprise scaling, cross-team/app collaboration, and sharing of pipelines difficult. The lack of Docker layer caching and of autoscaling CI nodes further makes cloud-native delivery slower and more cumbersome.

GitHub seems strongest at source code management, OSS team collaboration, and small teams, whereas its enterprise capabilities to cover the full delivery gamut are still emerging.

 —- JFrog vs Azure DevOps —-

The Azure DevOps suite, also from Microsoft, offers an end-to-end solution, from source code (Azure Repos) through artifact management (Azure Artifacts) to CI/CD (Azure Pipelines). Also available are Azure Boards, for planning and tracking work within teams, and Azure Test Plans, a testing solution.

From an artifact management perspective, Azure DevOps supports only four binary or package types, and does not provide a continuous security solution for scanning software vulnerabilities or license compliance violations.

Azure DevOps is offered for free with every Azure subscription, or for OSS projects with up to 10 parallel jobs. While Azure Pipelines may be the obvious choice for Microsoft shops standardizing on the Azure cloud alone, it poses challenges for organizations concerned with vendor lock-in or (rightfully) preparing to support a multi-cloud and hybrid future.

Microsoft’s support for hybrid environments is lacking, and the on-prem/self-managed version is missing many of the core features of the SaaS version. The self-managed version is also not noticeably or frequently maintained.

 —- JFrog vs AWS & JFrog vs GCP —-

Often seen more as quality infrastructure and marketplace providers, AWS and GCP provide some DevOps-related services and tools, but their solutions are not intended to be integrated or complete DevOps platforms out of the box. Rather, the major clouds tend to provide the “final stop” before deployment, and don’t tend to offer services across the DevOps lifecycle. While some point solutions, like simple container registries and very basic CI/CD tools, are available, most enterprises looking for full-scope providers will pause at the lack of (or limited) hybrid options, limited or missing software package support, possible single-provider lock-in, the absence of enterprise-grade, built-in DevSecOps or security solutions, the lack of extended metadata, and the lack of software distribution solutions.

Notably, neither AWS nor GCP offers developers a package manager (both currently offer only simple container registries, without support for Helm or for generic, virtual, or remote repository capabilities), let alone universal package support. A universal, hybrid, ecosystem-integrated package manager with advanced replication and global distribution capabilities is a must-have for today’s DevOps shops as companies adopt containers, Kubernetes, and management of the full artifact lifecycle. As such, they’ll find these major providers lacking in many areas.

At a core level, the major cloud providers’ DevOps offerings are focused on integrations with the generic services they provide (messaging, databases, API gateways, storage, etc.) in order to drive more usage of those services. These offerings are therefore mostly generic in nature, and many tasks (workflow, metadata, visibility, etc.) are left to the user to achieve a truly end-to-end pipeline from code to production. This lack of focus can also impact the maturity and quality of the DevOps tools provided, often leaving them at a very basic level.

Fundamentally, neither AWS nor GCP seems to provide an end-to-end DevOps solution or orchestration and pipeline management tools throughout. This leaves many companies reliant on third parties, or owning the gaps in the tooling themselves, which is an unscalable approach for many enterprises.

Most often, we see companies take advantage of partnerships between cloud providers and companies like JFrog to provide end-to-end solutions – often via cloud marketplaces.

 —- JFrog vs GitLab —-

Mainly known for its Git solution rather than as a DevOps platform, GitLab provides a SaaS offering that’s limited to a single provider (GCP) in a single region. So, while technically available, enterprises may find it hard to adopt such a limited offering. There is also no apparent SLA guarantee, creating uncertainty for any enterprise. GitLab’s security offerings are also in their infancy (though improving), leaving a question mark for enterprises seeking robust solutions for mission-critical workloads. Further, having multiple SCM tools is very common, and GitLab may be limiting in scope for enterprises with varying source control tools. The problem compounds as GitLab solutions move to the cloud, with multiple tenants sharing a single database resource, leading to possible scaling and/or performance issues.

GitLab’s strength is mainly on-premises, with very limited cloud options. While a solid platform, GitLab falls short on enterprise cloud scalability, hybrid capabilities, SLA guarantees, and SCM compatibility.

Further, many package types required by developers are not supported by GitLab, which could necessitate additional tools to support varying projects. This isn’t a minor shortcoming; it hits one of the most significant enterprise needs: supporting myriad languages, systems, partners, and technology types simultaneously at scale. In its own demonstrations, GitLab offers a rudimentary package manager while acknowledging JFrog as the leader in the package management space.

As the heartbeat of any DevOps pipeline, a package/binary manager like JFrog Artifactory is the pivotal technology that allows companies to achieve enterprise scale, customize and automate, and effectively meet the most demanding SLAs. At global scale and reach, many companies need hybrid, multi-cloud tools that support efficient replication, multiple caching options, multiple repository types (local, virtual, and remote), rich and queryable metadata, and more. Without this robust artifact manager as the key piece of the puzzle, most enterprises will find GitLab’s offering lacking for enterprise workloads.

Also, GitLab’s platform enhancement strategy seems to rely on external resourcing for even major feature updates. This can result in additions built outside its core teams that may not reflect the long-term platform roadmap, vision, or enterprise scale.

In addition, the GitLab product architecture itself relies on a single server. This server must support DevOps tools with greatly differing system requirements, such as I/O and database load, request caching and throughput, processing power, and availability.

Mixing an I/O-intensive, high-request-rate, zero-downtime service (like an enterprise binary manager) with reporting- and display-focused services, such as issue management, wikis, code review, and request tracking, often leads to complicated scale and performance issues as these services compete over the same resources.

Finally, the missing multi-cloud functionality for SaaS may be a concern for enterprises looking to scale across clouds or avoid provider lock-in.

So What Now?

We believe a DevOps cloud solution really should have it all: available on all clouds and applicable to every developer’s environment. Enabling developers wherever they are demands this level of flexibility and robustness. But this is a JFrog blog, so these conclusions are based on our own research. We welcome your comments and ideas on this topic.