Use Case – Building the System of Systems – creating a universal CI/CD pipeline for legacy applications
Abstract:
Mark Maxey / Raytheon, May 2016: Much of the continuous integration and delivery (CI/CD) literature is targeted at projects with similar profiles, i.e., small projects releasing a few highly modularized services for the internet, with little or no coupling between them, using Git, Jenkins, and automated provisioning tools like Chef or Puppet. This may be true for CI/CD unicorns, but what do you do if you aren't one of the cool kids on the block?
Talk Transcription:
All right. Well, welcome. I appreciate y'all coming here just before lunch. I'm glad it's not just after lunch. I think I would get a different audience here in about an hour or so. So y'all are all primed and hungry. Hungry to learn. I hope. Or maybe otherwise.
So my name's Mark Maxey. I work at Raytheon. My email address is right here. Mark underscore R underscore Maxey at Raytheon dot com. So you can contact me through that, or we can chat afterwards. As I'm going through here, I'm going to pause, so be sure to ask questions or throw out comments. Please no rotten tomatoes or anything else like that.
So we're going to be talking about a use case that we have at Raytheon: what we did and how we used Artifactory.
So just to orient you a little bit to Raytheon. Raytheon has a whole bunch of stuff that we do. We have about 100,000 employees. We have many different campuses spread across the US. We build things like the Patriot Missile, microwaves, ground systems for controlling GPS and weather satellites, and some of the other stuff that you heard from Adrien a couple of days ago about headset displays on helmets and other kinds of stuff like that. But the thing that we do quite often, the stuff that I'm most involved with, is building systems of systems.
So what typically happens is that we come together and we have a system that we want to enhance, or a system that we want to combine with other systems, and so we need to aggregate these things together. Most often these systems are not created from scratch. We have almost no greenfield startups, at least in my industry. We are always creating something from something else. So we're always pulling things together, extracting pieces of two different things to make some new thing. And whenever we do this, we're often faced with a lot of the challenges that come with dealing with legacy applications. A lot of times you're bringing together different technologies, different processes, and different pipelines that may or may not have existed before.
So what we’re going to be doing today is we’re going to be talking about one such use case that I’ve been working on recently.
So I'm just a dude. Okay, so I show up on a project and they're like, hey Mark, can you create this widget for me. And I'm like, okay, I'll create the widget. But, you know, what do you want that widget to be like? So for me, I come in and I start talking about nonfunctional requirements. You know, availability and reliability are the most popular nonfunctional requirements. But I think that speed, productivity, reliability, efficiency, cost — these are pretty important nonfunctional requirements too. At least these are the kinds of things I'm interested in solving. But when we come into projects, at least in my industry, we're often faced with fixed requirements, schedule, and budget, which is obviously a bit problematic.
So you’re kind of coming in with a huge amount of constraints upfront. You can’t overhaul the legacy architecture, you have very long deliveries, you have a lot of people that don’t really understand continuous integration, continuous delivery. And really oftentimes when I come into a project, I’m just the guy with a bunch of ideas. And the only power I have is the power of persuasion. So what do you do when you get put into these kind of situations? Well, that’s what I’m going to be talking about today.
So here's a brief outline of what we're going to be looking at. It's going to cover a lot of what we've done so far, what we are currently working on, and what's coming up in the future. And as we're going through this, I'm going to be, kind of, pointing out some things that I think are interesting enhancements around Artifactory, and things that perhaps the Artifactory folks might consider looking at in the future, kind of partnering with us on a collaboration.
So, first off, like many people, we don't have a single version control system. We use multiple version control systems across this one project. And it's not only one company; we often have multiple companies collaborating together to create the product that we create. So the first question is: okay, when you have so many different VCSes out there, so many different pipelines, how do you get all of them to play together in a single story? Well, Artifactory is what we saw as the integration ground. We don't necessarily care what you do in your version control system, as long as it's repeatable and reliable and tagged and all that kind of stuff. What we do care about is saying: okay, whenever you get done with your stuff, you're going to dump it into Artifactory, and that's going to be the place where we start enforcing some policies and constraints and the like.
One of the things that we do is we use Jenkins to do our builds. And Jenkins works nicely because it works across multiple version control systems. And one of the things that we do whenever we build across the different version control systems is we dump all of the output for a single baseline into a single repository. So this is a bit distinct or different from what you might hear from other people. Several years ago I, like many, had one repository that had all sorts of versions, all sorts of branches, all sorts of baselines in it, and I personally found that fairly challenging to keep track of, because then you had to start relying on Ivy statuses or tags and other things like that to delineate it all, and your version numbers get all wonky. For us, we maintain something in the neighborhood of 20 to 30 different branches going at any one time. Having all of that in one mother-of-all repositories gets to be pretty confusing.
So very quickly, right up front, we decided that one repository represents one baseline represents one branch. No matter what version control system it's coming from, no matter what your branch name is in Git or an equivalent version control system, we rigged the Jenkins builds so that everything that logically goes together — everything that's logically part of the same branch — gets dumped into a single repository.
That happens to have some pretty nice side effects. You know, you can look in one repository and see everything for that one baseline, and then you can have different repositories with different baselines in them. But then when you're looking at deployment time, it's super simple, because we can point our deployment side at a single URL, and that one URL represents the one repository. We don't have to have any sophisticated logic to figure out which branch or which version to use or other stuff like that; we just go get the latest from that one repository, and we know that the latest in it is for the right baseline that we care about.
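As a rough sketch of how simple the consuming side gets (the repository URL and module coordinates here are invented for illustration), the deployment tooling just points at the one baseline repository and asks for the latest:

    // build.gradle (deployment side) -- minimal sketch; URL and coordinates
    // are hypothetical
    repositories {
        ivy {
            // One URL == one repository == one baseline, so no branch or
            // version-selection logic is needed here.
            url 'https://artifactory.example.com/artifactory/baseline-2.3'
        }
    }

    configurations { deploy }

    dependencies {
        // 'latest.integration' resolves to the newest version published to
        // that repository, which by construction is the right baseline.
        deploy group: 'com.example', name: 'some-subsystem', version: 'latest.integration'
    }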
So the first challenge in setting all of this up is answering the question of how we get our stuff into Artifactory. Well, that's challenging, because not everybody uses build technologies that support publishing and resolving dependencies. I think that question came up earlier in this morning's keynote talk: well, how do you do this with legacy technologies? Well, I'm going to raise my hand and say I think we've come up with a pretty decent answer here, and it centers on the use of Gradle.
So when we're talking about short-term solutions, we're talking about ways in which we can easily inject ourselves into the legacy applications without having to make them change their world. We want a gentle introduction where we can say: okay, here's some really easy low-hanging fruit, an easy first step where you can take your existing builds. We don't want to jack with them too much. We just want to be able to take whatever you put out right now and put it into Artifactory.
Well, there's a couple of ways we can do that. The first way with Gradle is a very crude one. It's not what you typically use Gradle for, but we use it as a download and upload manager. So in the first use case that you see in the upper right-hand corner here, we use Gradle as a download manager: we express dependencies in it, we download them, and we push them off into a certain directory. Then the legacy build kicks in; it can be perl scripts, makefiles, Ant, whatever. We don't really care what the legacy way of building is. It's all good with us, because we're not touching you at all. We're putting your dependencies into the right place where you expect them. Then on the outside, once you're done with the build, we recognize that the build is done, we go discover what that build output is, maybe we package it up in rpms or zips or some other way, and then we shove it up into Artifactory. So it really turns into a setup, an execution, and a teardown or publishing phase.
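A minimal sketch of that setup/execute/publish wrapper might look something like this (the paths, task names, and coordinates are assumptions, not from the talk):

    // build.gradle -- wrapping an untouched legacy build; everything named
    // here is hypothetical
    configurations { legacyDeps }

    dependencies {
        legacyDeps 'com.example:common-libs:1.4.+'
    }

    // Setup: use Gradle purely as a download manager, staging dependencies
    // exactly where the legacy build already expects them.
    task stageDeps(type: Copy) {
        from configurations.legacyDeps
        into 'lib/external'
    }

    // Execute: run the legacy build (make, perl, whatever) untouched.
    task legacyBuild(type: Exec, dependsOn: stageDeps) {
        commandLine 'make', 'all'
    }

    // Teardown/publish: discover and package the build output so it can be
    // pushed up to Artifactory (e.g., with the com.jfrog.artifactory plugin).
    task packageOutput(type: Zip, dependsOn: legacyBuild) {
        from 'build/output'
        baseName = 'subsystem'
    }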
So that works for legacy builds that aren't very nice, I'll say. But for legacy builds that are traditionally written in Ant, there are slightly better solutions, because Gradle has really nice Ant integration in it. So for some of our builds, we simply wrap our Ant builds in Gradle. We execute Ant builds using Gradle directly: Ant targets turn into Gradle tasks, we use the same dependency resolution and inject those dependencies in as Ant properties, and then on the outside we publish the Ant output, sometimes packaging it in different forms, up to Artifactory.
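Roughly, that Ant wrapping looks like this (the property and coordinate names are invented):

    // build.gradle -- wrapping an existing Ant build
    configurations { antDeps }

    dependencies {
        antDeps 'com.example:shared-utils:2.0.+'
    }

    // Every target in build.xml shows up as a Gradle task of the same name.
    ant.importBuild 'build.xml'

    // Inject the resolved dependencies into the Ant build as a property it
    // already knows how to consume.
    ant.properties['deps.classpath'] = configurations.antDeps.asPath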
And that worked pretty well as an immediate step, but what we found is that as people start to get a whiff of what Gradle can do for you — as they start to see the DSLs that are there, as they start to see the conventions that come with it and how easy the Gradle conversion is — more and more people start to want to convert to Gradle. So for us and our project, part of our long-term strategy is to convert to Gradle and just use pure Gradle builds. One of the things that we do to support that is we wrote several plugins that introduce enterprise or project-specific standards for us. That works for Java, which is really nice, but we also support some pretty nonstandard things. So for example, our builds support native builds in C++ on both Windows and Linux using Gradle. We also support Java code generation, for example with JAXB or SQLJ, JAX-WS, or any number of other Java code generation tools.
So one of the nice things about that is that on the build side we end up having Gradle build scripts with almost nothing but dependency declarations in them. Our Gradle build scripts end up being exclusively about declaring what your dependencies are, and then our enterprise plugins take care of all the other build logic for us. Not just the build logic, but the packaging, the publishing, the dependency resolution — all of that kind of stuff is done for you. So instead of having 20 to 30 different projects, each with their own set of build technologies, each with hundreds or thousands of lines of build content behind it, you end up with five-line build scripts. A fairly nice transition.
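So a converted subsystem's whole build script ends up looking roughly like this (the plugin id and coordinates are placeholders standing in for our enterprise plugins):

    // build.gradle -- the enterprise plugin supplies all the build,
    // packaging, and publishing logic; the script is just dependencies
    apply plugin: 'com.example.enterprise-java'

    dependencies {
        compile 'com.example:message-api:3.1.+'
        compile 'com.example:logging:1.2.+'
    }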
So how many here are familiar with Ivy or Ivy configurations? Sweet. Okay. So for everybody but you, a one-minute lesson in Ivy. Ivy is a dependency declaration technology that is really nice. It has nice integration with Ant, and it integrates nicely with Artifactory, as do POM files. I definitely prefer Ivy, because Ivy has this thing called configurations. An Ivy configuration is basically a bucket for files. So whenever you're publishing files, you say: okay, I want to bucket certain files that are related to each other in certain ways. Common buckets are things like data, or compile time, or runtime, or configuration artifacts, or documentation, or source files. So when you create packages — zips, rpms, whatever — that aggregate these things together, you want to take those different files and put them into buckets. For example, all of your HTML documentation, your Java docs, your user guides, and all that kind of stuff you may want to put into a documentation configuration, and that helps you aggregate things together that go together well.
On the flip side in Ivy, you actually depend on configurations. So when I'm expressing a dependency on somebody else, I'd like to express my dependency on their configuration: I'm expressing it on your runtime declarations, or your API. For example, if you have a public API, then you put all the jars or shared libraries or whatever into your API configuration, and that lets me depend on you, but it lets me depend on an abstraction. I'm not dependent on a specific filename; I'm dependent on a concept that says "API". Whenever you change the names of your files, or you change the cardinality — you decide to go from one to two, or three to one, or whatever — it doesn't break me, because the only thing I'm dependent on is the concept of your API, and you get to define what that API is.
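Here's a minimal ivy.xml sketch of those ideas (module and configuration names are invented): files get binned into configurations, and the dependency points at another module's configuration instead of its filenames:

    <ivy-module version="2.0">
      <info organisation="com.example" module="tracker" revision="1.2.0"/>
      <configurations>
        <conf name="api"/>
        <conf name="runtime" extends="api"/>
        <conf name="docs"/>
      </configurations>
      <publications>
        <artifact name="tracker-api"  type="jar" conf="api"/>
        <artifact name="tracker-core" type="jar" conf="runtime"/>
        <artifact name="tracker-docs" type="zip" conf="docs"/>
      </publications>
      <dependencies>
        <!-- Depend on the abstraction ("your api"), not on specific files -->
        <dependency org="com.example" name="mapper" rev="2.+" conf="runtime->api"/>
      </dependencies>
    </ivy-module>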
So we use that not just at build time, which is really nice, but at deployment time too. Whenever we go to deploy things, we don't look for specific files in Artifactory; we look for specific modules, and we say we want to deploy that module's runtime configuration. The teams get to define what is in that runtime configuration — it can be jars, shared libraries, config files, any number of things they want. They get to choose what that is, but it abstracts our deployment-time system, because the deployment-time system gets to be a little bit dumber. It's not as tightly coupled with the components themselves; it's only coupled to the abstraction of runtime itself. So we found that to be a nice way of insulating our build and deployment systems and giving the individual teams the power to determine what that is. And ironically, they figure out what that is by simply configuring their build.gradle.
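On the deployment side, that dependency ends up being about one line (coordinates hypothetical): resolve the module's runtime configuration and take whatever the owning team put in it:

    // Deployment-side sketch: coupled only to the 'runtime' abstraction,
    // never to specific filenames or cardinality.
    dependencies {
        deploy group: 'com.example', name: 'tracker',
               version: 'latest.integration', configuration: 'runtime'
    }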
[Audience]
We do. So Gradle configurations work really nicely. We currently don't use Gradle at deployment time, which is one of the choices there, and also Artifactory doesn't speak Gradle natively. So whenever we do our builds, our builds generate an ivy.xml from the Gradle configurations themselves. The ivy.xml and the Gradle build script are logically equivalent, just like Ivy files and POMs are often mostly equivalent to each other. So Ivy is what Artifactory speaks natively, and it tends to be, like I say, a nice bridge between our build and our deployment side.
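A rough sketch of that bridge, assuming Gradle's ivy-publish plugin (the extra configuration, the docsZip task, and the repository URL are illustrative, not from the talk):

    // build.gradle -- publish an Ivy descriptor generated from the build
    apply plugin: 'ivy-publish'

    publishing {
        publications {
            ivy(IvyPublication) {
                from components.java          // emits the standard confs/artifacts
                configurations { docs {} }    // add a custom Ivy configuration
                artifact(docsZip) { conf = 'docs' }  // docsZip: a Zip task defined elsewhere
            }
        }
        repositories {
            ivy { url 'https://artifactory.example.com/artifactory/baseline-2.3' }
        }
    }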
So the next thing that we had to address within our systems is reuse. What we found is we had quite a few monolithic systems out there, and, you know, it's awful nice that we have something that can build and deploy something, but what we found is that some systems like to publish 35-gigabyte zip files and say, hey, deploy this. And we're like, what? We get a little confused by what all is in that big monolithic thing. Not to mention there are a lot of problems with patching, and with expressing dependencies, and all sorts of stuff like that.
And so one of the things we started doing is saying: hey, we need to understand what services you publish, we need to understand what your subsystems are. So we took a look at Michael Feathers' book, Working Effectively with Legacy Code, which says, hey, we need to identify what the seams are — and we started publishing those seams to Artifactory independently. Well, one of the challenges right away is that oftentimes the subsystems have a lot of spaghetti code in them, a lot of legacy stuff that you don't want to muck with. So we had the challenge of: okay, how do I solve exposing dependencies, how do I solve patching at a granular level, without having to solve the last 15 years of history that went into it? Well, the way we found to do that is by using, again, Ivy configurations, and also build tricks.
So, let me talk about Ivy configurations. For example, if you have one build that creates one big zip, what we encourage people to do is say: hey, take your build, do exactly what you were doing before, but instead of creating one big massive jar file or whatever it is, create several different ones. Start to componentize it into different pieces. We're not asking you to change your source code; we're just asking you to change your build to create better build artifacts for us. And then within those build artifacts, we start to bin them into different configurations based off of what their purpose is, so that at deploy time we can choose to deploy different parts of it based off of our needs.
So that's okay, but it's not awesome. Part of the challenge is that when you do it at the configuration level, the version number for all of the configurations stays the same, so they can't vary independently. A slightly better way of doing this is by changing your build system to take the same source code but have completely different builds for it. So sometimes we take the one big monolithic component and say: okay, instead of creating one big mother-of-all builds, we're going to have two slightly smaller builds, or maybe 10 smaller builds, that create that same output — but then each one is independently versioned and can vary. The build times go down because you don't have to wait for everything else. You have independence. Part of it can fail while others succeed. And you don't have to wait for the whole world to work before getting your one little change out there into the universe.
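As a sketch, that can be as simple as pointing two (or ten) small builds at different slices of the same source tree, each with its own version (the paths, names, and versions here are invented):

    // tracker-core/build.gradle -- one of several smaller builds carved out
    // of the same monolithic source tree
    apply plugin: 'java'
    version = '1.4.2'   // revs independently of its sibling builds
    sourceSets {
        main { java { srcDir '../legacy-src/core' } }
    }

    // tracker-ingest/build.gradle -- a sibling build over another slice
    apply plugin: 'java'
    version = '2.0.0'
    sourceSets {
        main { java { srcDir '../legacy-src/ingest' } }
    }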
So these are strategies we use to make sure we don't have to change the source code — we don't have to refactor the world — just to get modularity out into Artifactory, so that other things can express dependencies on it, and so that we can have independent, modularized deployments.
So this is the first area where I think we've put in some pretty interesting work around Artifactory. One of the challenges that we have on this current project is that we deploy to something in the neighborhood of 25 to 30 different places, across about four different clouds. We have multiple geographical locations involved, a ton of different users, and subsystems that are each independently contributing to our baseline. We collaborate in some loose fashion, but we certainly each operate on different schedules and at different paces from each other.
So one of the challenges that we have is just keeping track of what the heck is going on. We didn't have a very good mechanism for doing that outside of spreadsheets and human heroics. So what we did was we actually introduced an API to Artifactory for deployments. Actually, technically it's two APIs.
The first API is a notifications API. Whenever our deployment system is going to deploy a module, one of the first things it does, of course, is resolve it from Artifactory. Once it resolves it from Artifactory, it turns around and calls a notification API on a user plugin that we wrote in Artifactory. That notification API tells Artifactory who, what, where, when, why. It gives the specific file names and versions. It talks about the context of the deployment — a lot of different contextual information goes up to Artifactory. And then that Artifactory user plugin takes that and understands what hosts it went to, how uniform it was, what subset it was — all that kind of information is given to it. And it creates a whole bunch of Artifactory properties on the filenames and/or the version associated with it.
This is a pretty powerful concept for us, because we turned around and wrote a query API for that. So now we can go back up to Artifactory and ask it: for version X of module Y, where is that deployed? Or for such-and-such environment, what is deployed in it? Or what has not been deployed in X number of days? Or is this certain artifact still used anymore? So there are a lot of questions that come out of this that we have found pretty useful, and I'll talk a little bit more about that in a few minutes.
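A stripped-down sketch of what those two executions might look like as an Artifactory user plugin — the property names, parameters, and conventions are our own invention, not anything Artifactory ships with:

    // deployTracking.groovy -- Artifactory user plugin sketch
    import org.artifactory.repo.RepoPathFactory

    executions {
        // Notification API, e.g.:
        // POST /api/plugins/execute/deployNotify?params=path=repo/a.zip;env=test;host=node1
        deployNotify(httpMethod: 'POST') { params ->
            def path = RepoPathFactory.create(params['path'][0])
            repositories.setProperty(path, 'deploy.env',  params['env'][0])
            repositories.setProperty(path, 'deploy.host', params['host'][0])
            repositories.setProperty(path, 'deploy.last',
                new Date().format("yyyy-MM-dd'T'HH:mm:ss"))
        }

        // Query API: where (and when) was this artifact last deployed?
        whereDeployed(httpMethod: 'GET') { params ->
            def props = repositories.getProperties(
                RepoPathFactory.create(params['path'][0]))
            message = "env=${props.getFirst('deploy.env')}, " +
                      "host=${props.getFirst('deploy.host')}, " +
                      "last=${props.getFirst('deploy.last')}"
            status = 200
        }
    }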
The next area of Artifactory that we have enhanced a bit is metadata. Artifactory does a great job with build output, and it also integrates with things like Jira and other agile lifecycle management tools. But one of the things that we found needed to be enhanced is Artifactory's ability to keep track of that metadata.
So like on my previous slide, we talked about deployment tracking. We certainly invented a whole plethora of properties in order to keep track of all that stuff, but it sure would have been nicer if we had a first-level concept — an object called deployment — just like the first-level build object. Builds in Jenkins and all sorts of CI servers have a really nice build-info object that has a well-defined set of information, hierarchies associated with it, a really nice data graph associated with it. We certainly could have used that on the deployment side.
And similarly on the metadata-tracking side, when we're talking about ALM tools or version control integration, we certainly invented our own metadata to keep track of all of that. But as it turns out, all of this stuff has a fairly tight relationship with each other. So for example, the change sets that come in relate very heavily to the features that are being developed. Well, in our world, we decided that we don't want to deploy version numbers; we want to deploy features. And so when we present to our operators — to the people that are actually deploying the system — we don't want to present to them a version number, we don't want to present to them a change set ID, we want to present to them something in their natural language that they understand that they can deploy. And what they understand maps back to our agile lifecycle management tool. So we choose to deploy features, not version numbers, not change set IDs.
And so there's a graph relationship that describes how things are related to each other, and it actually turns out we use Artifactory metadata to keep track of the foreign keys into the various systems that we have. So if you're using Jira, we have foreign keys that allow us to go and use Jira's REST API. We use Rational Team Concert, and whenever we need to find out information from it, we use its REST API or its public APIs to go query it. So we don't try to reinvent the world in Artifactory metadata. We try to put just enough metadata into Artifactory so that we can then hop into that other system to go find out its awesome stuff.
And then we have dashboards that present all of that together to the universe. Certainly you can go into Artifactory, go to the release tab, find the build info tab, and from the build info hop over to Jenkins or Bamboo or whatever to go find that build, and that's really nice. But what we found is that we needed a comprehensive world view that says: okay, how do our deployments, our features, our change sets, our deployment orchestration — how does all of that fit together? What is out there? What is not being used? What is being used? When I find a bug, how do I know what to write a discrepancy report against? We use this kind of information as contextual clues, to give our users information that is in their language, so they don't have to hop between the different ecosystem tools — which are all individually great, but we need some higher-level representation that speaks their language.
So I've heard a lot in the last couple of days about the challenges of purging. We, too, have found purging to be challenging. Our storage fills up just like everybody else's does. And we have repositories filled with any number of things, some of which are awesome and some of which are not. So how do we distinguish between what is awesome and not awesome? Well, I personally find that declaring something not awesome based off of, you know, its date, or the fact that it's been around 10 days, or the fact that it hasn't been downloaded in X number of days — I find these to be okay policies, but not necessarily the best type of policy, because what you decide to purge on is pretty arbitrary.
You know, certainly you can configure that policy in Jenkins and have Jenkins purge the Artifactory content through the Artifactory plugin, and that's really nice. But what we found is that once we introduced deployment tracking, it opened up a whole new type of policy that didn't exist before. The policy we want to use, and are starting to use, is to purge things based off of what is not used anymore. As opposed to waiting X number of days and just kind of hoping that's right — or setting your purge policy to wait, you know, a month, knowing that nominally things are not used after three days, and then having that 27 days where things are in this kind of limbo land that you're not sure about.
We purge things based on what is not used anymore. We can query our REST API to find out whether an artifact is still being used and when the last time it got used was. And when I say used, I mean at deployment time: the last time it was actually deployed and used in a real environment by a real individual, whether that be dev or prod or anything in between. We keep track of all that, and once something hasn't been used in, you know, thirty days, sixty days, whatever that is, we move it to the trashcan, and then we let the trashcan take care of its natural purge policy.
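With those deployment properties in place, finding purge candidates becomes a query. A sketch in AQL, assuming the deploy.last property we invented above holds an ISO-formatted timestamp (so plain string comparison orders correctly):

    items.find({
        "repo": "baseline-2.3",
        "@deploy.last": {"$lt": "2016-03-01T00:00:00"}
    })

Anything the query returns has a recorded deployment older than the cutoff and becomes a trashcan candidate.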
So, once it actually, finally gets deleted from Artifactory, we're going to intercept that call and delete the Jenkins job associated with it as well. One of the things that we want to make sure of is that Jenkins and Artifactory are always in sync, and that we're always keeping track of the things that are actually used. Anything that we don't need anymore, we want to get rid of. We don't want to keep a year's worth of snapshot builds that nobody references anymore. And keeping track of that across the different subsystems that we have is challenging with purge policies, because sometimes a subsystem publishes five things a day, and sometimes it may not publish anything for a week, depending on what its builds and check-in policies are like. So being able to track stuff based off of what is used is actually a fairly powerful concept for us.
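The interception itself can hang off the user-plugin storage events; a minimal sketch (the job-naming convention and Jenkins URL are assumptions, and authentication is omitted):

    // cleanupSync.groovy -- Artifactory user plugin sketch: when an artifact
    // is finally deleted, delete the Jenkins job that produced it
    storage {
        afterDelete { item ->
            // Derive the Jenkins job name from our own repo/module convention
            def job = "${item.repoPath.repoKey}-${item.name}"
            // Jenkins deletes a job on a POST to /job/<name>/doDelete
            def conn = new URL("https://jenkins.example.com/job/${job}/doDelete")
                .openConnection()
            conn.requestMethod = 'POST'
            log.info "Deleted Jenkins job ${job}: HTTP ${conn.responseCode}"
        }
    }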
So one of the things that we're starting to get into is how to promote the various subsystems. If you can imagine, 25, 30, 40, 50, 60 different subsystems out there — some of them are gigantic, some of them are modest sized, and very few of them are actually kind of tiny. So think a hundred K lines of code as being the medium-sized systems, and maybe a couple million lines of code for the one module that gets created for that 35-gigabyte zip file. So our challenge is: okay, each of us is operating independently, so how do we actually pull together one system that actually works? We're not creating 60 individual products; in the end we're delivering one system to one customer, with a whole bunch of features all associated with it. So how do I know that this subsystem over here actually integrates with that subsystem over there?
Well, we solve this problem in a couple of different ways. The first way is using semantic versioning. Semantic versioning has several challenges with it, but we currently think its merit is worth the investment. Semantic versioning solves the problem of making sure that physical interfaces bind together — so I know that things that change incompatibly are not going to be resolved into the same builds and are not going to be resolved into the same deployment. We guarantee that using semantic versioning, and that's all nice. But one of the challenges is that there are any number of non-physical ways in which things don't work. Just because your Java APIs hook together, that isn't the only way in which our interfaces come together. A lot of our interfaces are over JMS, or message keys, or web services, or any number of file systems or FTP or other kinds of interfaces that are out there, for which semantic versioning doesn't really quite fit. And so one of our challenges is trying to say: okay, how do we recognize when you have implemented and tested a certain interface that I need to go and take advantage of?
Well, this is where multi-phase promotion starts to come in. We actually have five different types of releases in our current system. The first type of release is when each of these individual twenty to sixty different subsystems does its own testing. This is what you traditionally think of as CI: you go through an individual component testing phase, and when they come out on the other side and give a thumbs up, we do a traditional Artifactory promotion within that.
Each of our other types of phases involves some type of promotion where we're aggregating different sets of those components together. So, you know, these five over here have finished their testing, and these three over here have finished their testing, and now I want to test the eight together as a single unit. So how do we know what that is? Well, we use Artifactory metadata to recognize which things have passed testing and which ones have not, and then we run promotion based off of an AQL query to figure out what's ready to come together — hey, these things get promoted together into a repository, we test them, and once they pass testing, we promote them. And then maybe those eight get put together with these 15 over here, and we aggregate things together in tiers until we get to the top level, where we have all sixty playing nice together.
Okay, so here is the third concept that we find pretty interesting, that we are going to experiment with. This is actually an idea that came from Fred, but we're going to step out here and see whether or not it's viable. So whenever we publish to Artifactory, of course, Artifactory creates and tracks checksums — SHAs and MD5s — and all of that is fantastic and good. If you were to go look at the build diffs, however, you're going to find, for an awful lot of the artifacts and packages that you push up there, that the diffs always come out as different even though the artifacts are logically the same. Why are they different? Well, timestamps and, you know, checksums and usernames and other kinds of stuff like that — stuff that doesn't impact the real-world execution of the thing. It doesn't impact the interface, but what it does is monkey with your checksum, and it makes two things that are the same turn out not to diff correctly. And that produces a lot of confusion, a lot of challenges along the way.
So what we're interested in is trying to look at this thing called the true checksum. What we're going to be doing is, at build time, we're going to look at a package — a zip, a jar, a shared library, or whatever it is — and we're going to allow the build-time technology to declare what the, quote, true checksum is. Now, that true checksum could basically be driven by an exclusion list. You know, don't look at the META-INF in a jar file; maybe ignore the rpm header. I don't know exactly what that's going to look like — maybe we're going to give people regexes to ignore certain parts of the manifest — but we're going to give them some ability to declare what is important and what is not important in the packages that get created.
So as part of that build, we're going to create the true checksum, and we're going to tag the artifact with it whenever it gets published to Artifactory. Well, here's the interesting part, and this is where the Artifactory side comes in: one of the things we're going to try to do on the Artifactory side is have a user plugin that we query that says, okay, does this true checksum already exist in Artifactory? Because if it already exists, then we're not going to publish this thing up there again; we're just going to reuse the previously existing artifact, because the true checksum tells us the two things are logically equivalent. The real-world behavior is identical, even though part of their metadata might not be exactly the same because of build output.
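A sketch of what computing a "true checksum" could look like at build time — the exclusion list and the choice to hash sorted entry names plus contents are our guesses at one workable scheme, not the talk's actual implementation:

    // build.gradle -- hash an archive's meaningful content, skipping entries
    // (like META-INF/) declared insignificant
    import java.security.MessageDigest
    import java.util.zip.ZipFile

    ext.trueChecksum = { File archive, List<String> excludes ->
        def md = MessageDigest.getInstance('SHA-256')
        def zip = new ZipFile(archive)
        // Sort entries so the digest is independent of archive entry order
        zip.entries().toList().sort { it.name }.each { entry ->
            if (entry.directory || excludes.any { entry.name.startsWith(it) }) return
            md.update(entry.name.bytes)   // entry timestamps are never hashed
            zip.getInputStream(entry).eachByte(4096) { buf, len ->
                md.update(buf, 0, len)
            }
        }
        zip.close()
        md.digest().encodeHex().toString()
    }

    // Usage idea: tag the publication with this value as a property (say,
    // 'true.checksum'), and have a user plugin answer "does this true
    // checksum already exist?" before anything gets published.
    // def sum = trueChecksum(file('build/libs/tracker.jar'), ['META-INF/'])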
So if something already exists in Artifactory, we're not going to deploy it to Artifactory; we're just going to reuse what's in there. Then, at the end of build time, if nothing changed about the build, we're not going to publish the build to Artifactory at all; we're just going to reuse a previously existing build that's out there. That's going to solve — at least we hope it's going to solve — quite a few of our problems, because our builds tend to generate a whole bunch of stuff, and our downstream dependencies tend to build and publish a whole bunch of stuff to Artifactory that isn't really needed, because it already exists there logically. We're not introducing change sets; we're only rebuilding things to verify upstream behavior. So we're hoping that's going to solve that problem, as well as reduce the load on the database in our backend from all the churn of updating stuff that really isn't necessary.
One of the problems that we also have is trying to keep track of requirements. So I talked about things that logically go together. At least in our world — I don't know about yours — we deal with hundreds or thousands of requirements documents. Everything from interface specifications, to things that are given to us by customers, internally generated, externally generated, all sorts of stuff out there — and we have to declare compliance.
So one of the things that we are going to be exploring is how we can release both the software and the documentation — which includes user guides as well as requirements and interface specifications — together. How can we CM-control both of these in the same fashion? Because to me, we release software and we have some pretty sophisticated ways of dealing with that, but our document management and our requirements management is, you know, at least a couple of decades behind that, and they're treated very differently from each other. So trying to draw the traceability lines between them is pretty challenging, and I think publishing and CM-controlling documents — building them, releasing them, and promoting them with the software — is a pretty interesting concept that we're going to be exploring.
And then finally — we're coming up near the end — auditing. Once you do all of this in Artifactory, being able to create version description documents and say what is deployed where, and who did it, and all that kind of stuff, is super easy.
So I'll wrap up here and leave just a couple of minutes for questions, but the two points that I'd really like to leave y'all with are these. First, we work in an imperfect world. We do not work in the land of greenfields, where everybody is some, you know, super well-intentioned person who is always wanting to overcome and wanting to do continuous integration and continuous delivery. We have all sorts of legacy challenges around the code we bring along, but yet we have a pipeline. We have something where we are taking a few steps, introducing people into a continuous integration, continuous delivery pipeline. And that's something that's better than what existed before.
It's not perfect, but maybe in five or 10 years we'll be to the point where we can decompose these into microservices, exposing granular services, with automated testing and all that kind of stuff — all of which comes as a next step of actually having stuff in a well-defined pipeline to begin with.
And the way that we did it was very simple: we just went and looked at what was on fire, we solved problems that were practical and feasible, and then we turned around and asked the same question again — what's on fire, and how can we solve it? And we just keep iterating like that until I retire.
So, that’s it.