Rethinking the SDLC

Emily Freeman
Principal, DevOps Solutions Marketing, AWS; Author of DevOps for Dummies

The software (or systems) development lifecycle has been in use since the 1960s. And it’s remained more or less the same since before color television and the touchtone phone.

While it’s been looped into circles and infinity loops and designed with trendy color palettes, the stages of the SDLC remain almost identical to the original layout. Yet the ecosystem in which we develop software is radically different.

We work in systems that are distributed, decoupled, complex and can no longer be captured in an archaic model. It’s time to think different. It’s time for a revolution. The Revolution model of the SDLC captures the multi-threaded, nonsequential nature of modern software development. It embodies the roles engineers take on and the considerations they encounter along the way. It builds on Agile and DevOps to capture the concerns of DevOps derivatives like DevSecOps and AIOps.

And it, well, revolves to embrace the iterative nature of continuous innovation. This talk introduces this new model and discusses the need for the way we talk about software to match the experience of developing it.

 

Video Transcript

I’m Emily Freeman. I’m the author of DevOps for Dummies and the co-curator of 97 Things Every Cloud Engineer Should Know. I am really excited to be here with you all. Well, with you and to share a wild idea. A complete re-imagining of the SDLC. Now, before I even start, I want to be really clear. I want to know what you think.

You can always find me on Twitter at @editingemily. And I want you to make this your own. Share your feedback. Let me know how you might apply it to your everyday work. Most of my work centers around DevOps and I really can’t overstate the sheer impact that DevOps has had on this industry. In many ways, it built on the foundation of Agile to become a default, a standard, that we all reach for in our everyday work. When DevOps surfaced as an idea in 2008, the tech industry was in a vastly different space.

I mean, AWS was in its infancy, offering only a handful of services. Azure and GCP didn’t exist yet, at least not publicly. The majority of companies maintained their own infrastructure. Developers wrote code and relied on sysadmins to deploy new code at scheduled intervals, sometimes weeks, months apart. Container technology hadn’t been invented. Applications adhered to a monolithic architecture. Databases were relational and serverless just wasn’t a thing. Everything, from the application to the engineers, was centralized. Our current ecosystem couldn’t be more different.

Now, don’t get me wrong. Software is hard. It always will be, but we continue to find truly novel solutions to consistently difficult, persistent problems. Some of these ideas end up being a sort of rebranding of older ideas, but others are a unique and clever take on abstracting complexity or automating toil, or perhaps most important, rethinking, even challenging, the premises that we have accepted as canon for years, even decades. In the years since DevOps attempted to answer the critical conflict between developers and operations engineers, DevOps has become a catchall term and there have been a number of derivative works.

DevOps has come to mean 5,000 different things to 5,000 different people. Now, for some, it’s distilled to continuous integration and continuous delivery: CI/CD. For others, it’s deploying code more frequently, adding and investing in testing, security gates, et cetera. For others, it’s organizational. They’ve added a platform team, perhaps even a questionably named DevOps team, or they’ve created an engineering structure that focuses on a separation of concerns, leaving feature teams to manage the development, deployment, security and maintenance of their siloed services.

Whatever the interpretation, what’s important is there isn’t a universally accepted standard of what DevOps is, or certainly what it looks like, in execution. It’s a philosophy more than anything else: a framework or methodology people can use to adapt their specific circumstances to modern development practices. The one characteristic of DevOps that I think we can all agree on is that it attempted to capture the challenges of the entire software development process.

None of the derivative works have been that ambitious, that bold, instead focusing only on a segment of software delivery. It’s that broad umbrella, that holistic view, that I think we should breathe life into once again. The challenge we face is that DevOps is an increasingly outmoded solution to a previous problem. Developers now face cultural and technical challenges far greater than how to more quickly deploy a monolithic application. Cloud Native is the future, the next collection of default decisions and one the DevOps story can’t absorb in its current form.

I believe the era of DevOps is waning and, in this moment, as the sun sets on DevOps, we have a unique opportunity to rethink, rebuild, even re-platform. Now, I don’t have a crystal ball. I’m not completely certain what the next decade of tech looks like. None of us can be sure of that, but I do know that I can’t write this story alone.

I need you. I need the amazing community of developers. That said, I have some ideas that I think can get the conversation started. I believe that to build on what was, we have to throw away the assumptions that we’ve taken for granted all this time. In order to move forward, we must first step back. The software or systems development lifecycle, what we call the SDLC, has been in use since the 1960s. And it’s remained more or less the same since before color television and the touch tone phone.

Over the last 60 years, we’ve made tweaks, slight adjustments. We’ve added pretty colors. The stages and steps are a little bit different. In Agile, we’ve bent it into a circle and, in DevOps, an infinity loop, but across all use cases, the SDLC has become an assumption. We don’t even think about it anymore. Universally adopted constructs like the SDLC have an unspoken permanence.

They feel as if they have always been and always will be. I think the impact of that is even more potent if you were born after the construct was popularized. Nearly everything around us is a construct, a model, an artifact of a human idea. The chair you’re sitting in, the desk you work at, the mug from which you drink coffee and sometimes other beverages. Buildings, toilets, plumbing, roads, cars, art, computers, everything.

The SDLC is a remnant, an artifact of a previous era. One in which software security was a physical concern and women were still called computers. I think we should throw the SDLC away. More accurately, I think we should replace it, replace it with something that better reflects the nature of our work. A single-threaded, linear model designed for the manufacture of material goods cannot possibly capture the distributed complexity of modern sociotechnical systems.

It just can’t. And these two ideas aren’t mutually exclusive: that the SDLC was industry-changing, valuable and extraordinarily impactful, and that it’s time for something new. I think we are strong enough to hold these two ideas at the same time, showing respect for the past while envisioning the future. The infinity symbol is widely used to visualize the DevOps toolchain. It was a way of more or less bending the SDLC into a loop through which companies could iterate. And like the SDLC, it implies a linear flow: you plan, then create or develop, verify and test, package, build and on and on and on. The DevOps interpretation of the SDLC does not allow for a pause, a pivot or a loop back as required.

Now I don’t know about you. I have never once in my life had a software project go smoothly in one go, no matter how small, even if I’m the only one working on it. Software development is chaos. It’s a study in entropy, and it’s not exactly getting any simpler. The model with which we think and talk about software development must capture the multi-threaded, non-sequential nature of our work. It should embody the roles engineers take on and the considerations they encounter along the way. It should build on the foundation of Agile and DevOps and represent the iterative nature of continuous innovation.

When I first started thinking about this, and I thought long and hard about this, I was inspired by ideas like extreme programming and the spiral model. If you’re not familiar, they’re worth looking up. I wanted something that would have layers, threads even, a way of visually representing multiple processes happening in parallel. What I settled on is the revolution model. I believe this visualization is capable of capturing the pivotal moments of any software scenario. Now I’m going to dive into the discrete elements in just a moment, but I want to give you a minute to really absorb the idea, to have a first impression.

I call it revolution because, well, for one, it revolves. Its circular shape reflects the continuous and iterative nature of our work, but also because this is revolutionary. I am challenging a 60-year-old model that is embedded in our daily language. I don’t expect this to be widely integrated into production workflows tomorrow. Instead, my mission with this is to challenge the status quo and create a model that I think more accurately reflects the complexity of modern Cloud Native software development.

The revolution model is constructed of five concentric circles describing the critical roles of software development: architecting, developing, automating, deploying, and operating. Intersecting each loop are six spokes that describe the production considerations every engineer must consider throughout any engineering work: testability, securability, reliability, observability, flexibility, and scalability. The considerations listed are not all-encompassing. If you thought that, you’re right. There are of course things not explicitly included, but I figured that if I put 20 or more spokes on here, some of us might get a little overwhelmed, myself included.
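(An illustrative aside: to make the shape of the model concrete, here is a minimal sketch of its structure as data. The role and consideration names come from the model itself; the representation is purely an illustration, not an artifact from the talk.)

```python
# A minimal sketch of the Revolution model's structure: every production
# consideration (spoke) intersects every role (ring).
ROLES = ["architecting", "developing", "automating", "deploying", "operating"]
CONSIDERATIONS = ["testability", "securability", "reliability",
                  "observability", "flexibility", "scalability"]

# Each (role, consideration) pair is a point where a spoke crosses a ring.
intersections = {(role, c) for role in ROLES for c in CONSIDERATIONS}

assert len(intersections) == len(ROLES) * len(CONSIDERATIONS)  # 30 touchpoints
```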

You might also be wondering why operating is smaller than architecting. Is it less important? Absolutely not. When I was first designing this model, I looked at architecture for inspiration. The Guggenheim Museum has a shape that caught my attention: a stunning circular ramp with which many of you are probably familiar, perhaps have even walked. Now in a perfect world, this model would be three-dimensional, show layers, even movement. But any model, I believe, must maintain its meaning even in two-dimensional visualizations.

And thus, one of the roles had to be the smallest. And one of them had to be the largest. I chose operating to be the innermost part because it represents the software process for me. When we’re architecting, we’re thinking big, we’re abstract, designing, dreaming. Everything is possible in that moment. But as we move through software delivery, we become more embedded in the system and our choices become more constrained.

Now let’s dive deeper into each of these elements. The first is roles. This is a sort of novel way of thinking about things. We’ve long used personas as the default way to divide audiences and tailor messages, essentially to group people. Every company in the world right now keeps repeating the mantra of developers, developers, developers, but personas have always bugged me a little bit because I think this approach either oversimplifies someone’s career or needlessly complicates it.

I mean, few people fit cleanly and completely into persona-based buckets like developers and operations anymore. The lines have gotten fuzzy. On the other hand, I don’t think we need to tailor messages so specifically to call out the difference between, say, a DevOps engineer and a release engineer. It’s just not necessary. But perhaps most critically, I believe personas are immutable. A persona is wholly dependent on how someone identifies themselves.

It’s intrinsic, not extrinsic. Their titles may change. Their jobs may differ, but they’re probably still selecting the same persona on that ubiquitous dropdown we all have to choose from when registering for something. I was a developer. I will always identify as a developer, despite doing a ton of work in other areas like DevOps, AIOps, DevRel. In my heart, I’m a developer. I think about problems from that perspective first. It influences my thinking and my perspective, my approach.

Roles are very different. Roles are temporary, inconsistent and constantly fluctuating. Now if I were an actress, the parts I would play would be lengthy and varied because naturally I would be successful, but the persona I would identify as would remain an actor, an artist. Your work isn’t confined to a single set of skills. It may have been a decade ago, but it’s not today. In any given week or sprint, you may play the role of an architect, thinking about how to design a feature or service; a developer, building out code, fixing bugs; an automation engineer, looking at how to improve the manual processes we often refer to as toil; a release engineer, deploying code to different environments or releasing it to customers; or an operations engineer, ensuring an application functions in consistent, expected ways.

No matter what role we play, we have to consider a number of issues. The first is testability. All software systems require testing to assure architects that designs work, developers that code works, operators that infrastructure is running as expected, and engineers of all types that code changes won’t bring down the system. Testing, in its many forms, is what enables systems to be durable and have longevity. It’s what reassures engineers that changes won’t impact current functionality. A system without tests is a disaster waiting to happen, which is why testability is first among equals at this particular round table.
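(An illustrative aside: a small, hypothetical regression test shows what that reassurance looks like in practice. The function and values below are invented for the example; the point is that the test pins down today’s behavior so a later change that breaks it fails loudly instead of silently.)

```python
# A hypothetical regression test: it documents expected behavior so a future
# change that alters it is caught immediately.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_keeps_existing_behavior():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99   # no discount is a no-op
    assert apply_discount(50.0, 100) == 0.0    # a full discount reaches zero

if __name__ == "__main__":
    test_apply_discount_keeps_existing_behavior()
    print("ok")
```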

Security is everyone’s responsibility, but few of us understand how to design and execute secure systems. We’ve seen this recently. I mean, I struggle with this. Security incidents, for the most part, are what I would call high-impact, low-probability events. The really big disasters, the ones that end up on the news and get us all free credit reporting for a year, they don’t happen super frequently. And thank goodness because we know that there are endless small vulnerabilities lurking in our systems.

Security is something we know we should dedicate time to, but don’t often make time for. Now, let’s be honest, it’s hard and complicated and a little bit scary. The risks are high, the jargon is different. It’s just a lot. DevSecOps, the first derivative of DevOps, asked engineers to move security left. This approach meant that security was a consideration early in the process, not something that would block release at the last moment. But this phrase, “move security left”, should show you how embedded the SDLC is in our culture.

That phrase relies on the SDLC; “moving security left” only makes sense within that model. This is also the consideration under which I’m putting compliance and governance. They’re not perfectly aligned. You’re right. But I figure all the things you have to call lawyers for should just live together. In all seriousness, these three concepts are really about risk management. There are different topics: identity, data, authorization. It doesn’t really matter. The question is who has access to what, when and how. And that is everyone’s responsibility at any stage.
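(An illustrative aside: one way to picture “who has access to what, when and how” is a single, explicit authorization check. The roles, resources and policy below are entirely hypothetical, a sketch rather than a real policy engine.)

```python
# A sketch of "who has access to what, when and how" as one explicit check.
# The policy table and the business-hours rule are made up for illustration.
from datetime import datetime, timezone
from typing import Optional

POLICY = {
    ("developer", "staging-db", "read"): True,
    ("developer", "prod-db", "write"): False,
    ("operator", "prod-db", "write"): True,
}

def is_allowed(who: str, what: str, how: str, when: Optional[datetime] = None) -> bool:
    """Answer the core risk-management question for a single request."""
    when = when or datetime.now(timezone.utc)
    # Example "when" rule: block writes outside business hours (UTC).
    if how == "write" and not (9 <= when.hour < 17):
        return False
    return POLICY.get((who, what, how), False)  # deny by default

print(is_allowed("developer", "prod-db", "write"))  # False, whichever rule trips first
```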

Site reliability engineering, or SRE, is a discipline, a job, an approach for good reason. It is absolutely critical that applications and services work as expected for the vast majority of the time. That said, availability is often mistakenly treated as a synonym for reliability, and it’s not. Instead, it’s a single aspect of the concept. If a system is available, but customer data is inaccurate or out of sync, the system is not reliable. Reliability has five key components: availability, latency, throughput, fidelity and durability.
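(An illustrative aside: a health check can report a service as available while a fidelity check on its data still fails, so the system is up but not reliable. The values below are invented for the example.)

```python
# Hypothetical checks showing that "available" and "reliable" are not synonyms.
def check_availability(http_status: int) -> bool:
    return http_status == 200  # the service answered

def check_fidelity(primary_row: dict, replica_row: dict) -> bool:
    return primary_row == replica_row  # the data it serves is actually in sync

status = 200
primary = {"order_id": 42, "total": 99.95}
replica = {"order_id": 42, "total": 89.95}  # stale or out-of-sync copy

available = check_availability(status)
reliable = available and check_fidelity(primary, replica)
print(available, reliable)  # True False: up, but not trustworthy
```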

Reliability might be the end result, the thing we’re working toward, but it’s resiliency that, for me, describes the journey and the steps that developers can take to improve reliability. Observability is the ability to have insight into an application or system. It’s the combination of telemetry, monitoring, logging and alerting available to engineers and leadership.

There’s an aspect of observability that overlaps with reliability. That’s why they’re neighbors, but the purpose of observability isn’t just to maintain a reliable system, though that is of course important. It is the capacity for engineers working on a system to have visibility into the inner workings of that system. The concept of observability actually comes from linear dynamic systems and is defined as how well the internal states of a system can be understood based on information about its external outputs. It’s critical that when companies move systems to the cloud or utilize managed services, they don’t lose visibility and confidence in their systems.
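(An illustrative aside: for the control-theory definition referenced above, the textbook test is a rank condition on the observability matrix. The sketch below is that standard formulation applied to a made-up two-state system, checked with NumPy.)

```python
# Textbook observability test for a linear system x' = A x, y = C x:
# the internal state is recoverable from the outputs iff
# rank([C; CA; ...; CA^(n-1)]) == n.
import numpy as np

def is_observable(A: np.ndarray, C: np.ndarray) -> bool:
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks)) == n

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])   # we only measure the first state variable
print(is_observable(A, C))   # True: the hidden state can still be inferred
```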

The shared responsibility model of cloud storage, compute and managed services of all types requires that engineering teams be able to quickly be alerted to, identify and remediate issues as they arise.

Flexible systems are capable of adapting to meet the ever-changing needs of the customer and the market segment. Flexible code bases absorb new code well, they embody a clean separation of concerns, they’re typically partitioned into small components or classes, and are architected to enable the now and the next. In a flexible system, dependency chains are reduced or eliminated, database schemas accommodate change well and components communicate via a standardized and well-documented API.
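(An illustrative aside: a small, hypothetical example of that separation of concerns. The caller below depends on a narrow, documented interface, so the component behind it can be swapped without rippling through the system; the names are invented.)

```python
# Flexibility through a narrow, documented interface: callers depend on the
# Protocol, not on any particular storage component. Names are hypothetical.
from typing import Protocol

class CommentStore(Protocol):
    def add(self, article_id: int, body: str) -> None: ...
    def count(self, article_id: int) -> int: ...

class InMemoryCommentStore:
    """One interchangeable implementation; a database-backed one could replace it."""
    def __init__(self) -> None:
        self._comments: dict[int, list[str]] = {}

    def add(self, article_id: int, body: str) -> None:
        self._comments.setdefault(article_id, []).append(body)

    def count(self, article_id: int) -> int:
        return len(self._comments.get(article_id, []))

def publish_comment(store: CommentStore, article_id: int, body: str) -> int:
    store.add(article_id, body)
    return store.count(article_id)

print(publish_comment(InMemoryCommentStore(), 7, "great article"))  # 1
```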

The only thing constant in our work is change. In every role we play, creating flexible solutions that grow as the application grows is critical. Finally, scalability. Scalability refers to more than a system’s ability to scale for additional load. It implies growth, a system’s ability to mature and flourish over time. Scalability in the revolution model carries the continuous innovation of a team and the byproducts of that growth within a system.

For me, scalability is the most human of all the considerations. It requires that each of us in our various roles consider everyone around us: our customers who use the system and rely on its services, our colleagues, current and future, with whom we collaborate, and even our future selves. We have to read that code too. Software development, it’s not a straight line, nor is it a perfect loop. It is an ever-changing, complex dance.

There are twirls and pivots and difficult spins, forward and backward. Engineers move in parallel, creating truly magnificent pieces of art. The issue is those moments of pure magic, artistry, the moments when we are our best, most put-together selves, they’re fleeting. The prima ballerina falls in practice. Sometimes during the show too. The first chair violinist, a literal concert master, plays the wrong note. Your tests don’t pass. Your code doesn’t compile. Your work silently errors. It fails in production. That’s always fun.

You don’t make a deadline. The PM is mad. They’re always mad. It’s chaos. I mean, it’s not you. It’s the computer. And here’s why I think everyone gets mad and stressed. We expect software development, stunningly, to be a straight line, but it never is. It’s like the progress bar on our favorite Internet service provider when there’s an outage. They’ll say you’ll have streaming back in eight minutes and then three hours and then two minutes and then two days.

We continue to measure progress in a straight line. Product launches are discussed in terms of red, yellow, green. I appreciate the Toyota production system and how much it’s discussed in DevOps circles, but we are not making cars. This isn’t a checklist. And once you attach the driver’s side door, it’s just always there. In no production line does attaching a door cause the catalytic converter to break, but your small change, that little piece of code, it can bring down the entire system or slow down requests from an entirely decoupled service.

I am passionate about this new model and approach because I believe that it will help developers in their everyday work. How can we teach business leaders and product owners and scrum masters that prediction in feature delivery is a bit of a fool’s errand when the model we use is a straight line?

Now I don’t expect this model to look exactly like it looks now in six months, a year. In fact, I would almost consider that a failure. I want your opinions and experiences to shape this; every one of you operates in completely unique circumstances. But let’s look at what revolution could look like in practice. Imagine, in a post-incident review, your team is trying to figure out what went wrong, what went right and everything in between. Let’s say Mike was primary on call, but poor Mike has just had a baby and, exhausted, sleeps through the alarm. Jose, a developer, woke up and after stumbling toward his computer in the dark, read through the alert and opened the monitoring tooling. He quickly realized that the database was throwing hundreds of exceptions. Never a good thing. Initially, Jose assumed something had been configured incorrectly.

Must have been a provisioning issue. It’s a good guess. He continued to dig into the issue while asking others for help. Jose was able to access a graph that showed a spike in database activity and he compared that to the changes in the application made around the same time. Aha! Just kidding. It’s never that easy. Is it?

Here’s the clue. Every recent database transaction had the same article ID. Turns out the comments on a particularly polarizing article exceeded the limits originally provisioned for the database. The immediate fix was to set the limit impossibly high until morning, when the operations team could properly enable auto-scaling. During the post-incident review, a few developers involved noted that there were no uniqueness constraints stored in the database. Development team time was allocated during the next sprint to account for duplicate writes.

This map, this revolutionary model, gives internal and external stakeholders, including customer-facing, non-technical colleagues, the necessary context to understand any given process. It’s even more powerful when explaining delays and incidents, complex setbacks. I believe the next 10 years of tech will be focused on developer experience.

How do we make development better and faster? Yes, but also how do we make it more enjoyable? How do service providers abstract complexity without exaggerating simplicity or obfuscating observability? And how do we innovate, not just in our technology, but in how we model it? I can’t wait to hear what you think of this new model and this approach to software delivery. I’m excited to see how it changes and adapts to scenarios we face in software development and how engineers in every role and at every organization tailor it to meet their specific constraints and challenges. I believe this is just the revolution to get us started. Thank you so much for having me.
