DevOps and Dependency Management for Non-Hipsters (aka C and C++ programmers) – Johannes Nicolai, GitHub

Short, reproducible build steps are a must-have for any continuous delivery pipeline. In the embedded and cross-platform C and C++ world, declarative dependency management is still relatively new and a big roadblock to fast, reproducible and secure releases. This talk shows why package management is a good thing and how Conan, an open source, platform-independent package manager, manages dependencies of C and C++ libraries. A demonstration shows how to build a fun GitHub application on many platforms within minutes using AppVeyor, Travis CI and Jenkins.


All right. Welcome everybody to the DevOps for Non-Hipsters, aka C and C++ Programmers, talk. Actually the non-hipsters part probably deserves a little bit of explanation. But before we go there, let me just quickly introduce myself.

I’m Johannes Nicolai [inaudible 00:00:22] GitHub. I work at GitHub as a Solutions Engineer. Solutions Engineers, well, they discuss all kinds of development-related challenges with the communities and customers. These are some of the customers I’m responsible for. I’m mostly working in German-speaking regions and, as I guess many of you know, Germany has a good tradition in doing engineering work. We have many car manufacturers, and many of those customers do coffee machines and IoT devices. So it’s not a surprise that many of those customers actually do something in the embedded space, something in the C and C++ space.

I took a plane from Germany to this conference and at the Frankfurt Airport, I’ve seen this commercial, “Why invest in Germany? Ask our machines.” I actually think this is a pretty stupid commercial. Shouldn’t ask the machines, they can’t talk. You should ask the people who wrote the code for those machines.

I asked many of my customers, but also other folks not mentioned here, about their challenges when it comes to DevOps or continuous delivery in what they’re doing with those machines. And this is a slide with the anonymized quotes from those folks.

I could imagine that some of you might sympathize with statements like this. It typically starts with this idea of continuous integration, which is a prerequisite for anything DevOps or continuous delivery. And in the embedded C and C++ space, it’s not unusual that builds take 72 hours, or even more if you count the entire pipeline.

This idea that you just have one commit in isolation and test with it, this is often not there. There’s just one build, and it contains your commit, and it contains the commits of hundreds of coworkers as well. If you then dig deeper into why builds actually take that long, you learn that all of those packages have a lot of dependencies on other packages. And those dependencies are built over and over from scratch again.

You ask, why do they always build things from scratch? And they say, well, this is the way it has always been done, and there are so many different permutations and optimizations that we can’t do anything else. And then you ask, how do you even get all of those dependencies together? And they say, well, we have custom build scripts for that.

I say, well, wait a moment. You have custom build scripts, so how do you figure out, for instance, if you have a security vulnerability in OpenSSL, like the Heartbleed attack, what components are affected? And then I got a really interesting answer, which was: well, we hire interns, and those interns go through those build scripts and figure out what is affected.

I don’t know whether I should ask for a show of hands whether any of those quotes sounds familiar, or whether it’s too embarrassing. Okay. Some people are brave. Thank you for that. I actually also asked my colleagues here in the US whether this is just a German thing, and it is not. There are also many car manufacturers and other industries in the US where they have similar challenges.

Now comes the non-hipster part. There’s a clear solution to tackle that problem, which is declarative package management, where you don’t build stuff from scratch all the time; you just have a declarative way of saying what packages you depend upon. Then you get the binaries from there. You don’t put them into Git directly. You have semantic versioning and all that stuff. Anybody who is doing Java or JavaScript knows exactly what I’m talking about, but when I initially asked whether this is something you could do in the C and C++ space, I literally got this answer: “Package managers, isn’t that something for those hipster languages?” JavaScript, Java, because they have it so simple. They have just one binary, which maps to a version.

In C and C++ it depends on the operating system. It depends on the compiler flags. It depends on the architecture: 32-bit, 64-bit, statically linked, not statically linked, and all of this matters. You have different binaries even if the version of your library is the same.

So: thanks but no thanks, package managers are a thing, but we built something ourselves because there was nothing out there we could use. And I thought, well, I’m not a hipster, and I think you can use package managers. [inaudible 00:04:57] There are package managers out there, like Conan. So, challenge accepted.

It’s not so much about using those technologies or pointing to those technologies, it’s more about how you convince your coworkers and your boss that adopting a package manager for C and C++ is really worth it, and not just something like the cherry on the cake.

Looking at the agenda of this talk, I will spend the first third on arguments: how you can convince your boss and your coworkers that a package manager is a good thing and not just a nice-to-have. These are arguments that actually worked at the companies I was mentioning at the beginning.

Then the second part is Conan.io, obviously a great package manager, which is designed specifically around what C and C++ require. And I thought I’d do a fun example, not a hello world example, but something close to GitHub and our core philosophy.

And then last but not least, those C and C++ non-hipsters have a point that you need all of those different permutations, not just one version of an artifact. So I will show you how you can build something like this easily with Azure Pipelines and some other CI systems. So let’s jump right in.

How do you convince those people who just say, “Well, this is just for hipsters, we don’t really need it”? I would say almost any company nowadays says they are agile, and I learned a nice term today in the keynote from Nicole: faux agile. So basically pretending you’re agile.

If you really want to put the money where your mouth is, you can ask your managers or your coworkers to look at the agile manifesto. The agile manifesto says, in its first sentence, “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”

That has two important aspects. The first one is this continuous delivery. You want to always ship thin slices of product. You don’t want to have risk accumulated over multiple months, maybe even years. And there is a litmus test you can do for your company, whether you’re really agile: you can ask whether you are still celebrating your releases. If you know exactly which release you’re on, if you celebrate them, or if you have to block your vacations because there are certain months where nobody can go on vacation, because the risk has just accumulated too much and you don’t know what’s coming, then you’re not really agile, and you’re not doing anything remotely close to continuous delivery.

Releases have to be boring, they have to be a non-event, nothing to really celebrate. That doesn’t mean that you always have to deliver those releases to your customers, but it has to be at least releasable. This is only possible with package management and proper dependency management, which you will see in a moment.

The other aspect is about this value. It’s not a secret that even though a customer said they wanted something, it’s not necessarily what they need in the moment they try it out. I know automotive customers, they work two years to develop certain functionality in their navigation system, and when the feature was out, the customer just said, “I have this on my cell phone already in Google Maps and it works way, way better. Actually the only thing I need is to get my Google Maps in your heads up display. Give me that functionality.”

So basically two years of development effort wasted. If you have the ability to ship more often, maybe just to a beta customer group, then you also have the ability to course correct, because your customers don’t know what they want any better than you do. They only find it out once they use it. And if you want to do anything like A/B testing and course correction, then package management is important.

Why is it so important? If you look at the seven principles of the DevOps Bible, and I guess not a single DevOps talk would be complete without citing that book, there’s one aspect of it which is: only build your binaries once.

What is meant by only building binaries once? Well, it refers to your stages in your pipeline. The idea is that you compile everything you need in the first stage, and then you’re using it as an immutable artifact so that you can be sure that when you run tests against that artifact, that this is the same artifact you use in production.

I’ve seen automotive manufacturers who had problems in the real car because they used a different compiler for the car in the latest stage, and all the performance tests were done against a different compiler version. You can only prevent this if you have immutable builds: if you build things once, you can really rely on the fact that the tests you did actually ran against what you are then also rolling out in production.

The second aspect is time. This is a standard deployment pipeline. Now imagine you compile everything from scratch at every one of those green dots. Then it’s easily explainable why the heck it takes 72 hours, or even more. And this entire feedback loop, this entire idea that you can course correct, that you can test in isolation, that releases are a non-event, you can forget about it. For continuous delivery, you need proper artifact management, because otherwise you’re just waiting forever and ever until you see the results.

Okay, so now some people may say: what’s the big deal? Immutability, I get it. I compile everything and then I check it into Git. This is typically the call we get from automotive or OEM customers when they first start adopting Git, which is, “Our servers are crashing.” And the reason why is, let’s imagine this is the OpenSSL library. Say it’s 100 megabytes; well, it’s smaller, but it’s an example. And then you have this Heartbleed bug, you fix it, and then you have another functional fix.

Then you already have 300 megabytes in your Git repository, just for this one version of that library. Git is designed so that if you do a clone, you get every single version of your binary ever. You get 300 megabytes for any [inaudible 00:11:31] or for any customer or developer checking stuff out. And now imagine you don’t have three versions, but 100 versions, and you have different permutations of your builds.

Then we’ve seen customers where every single clone is 800 gigabytes over the wire. This is not only problematic for your network, it’s also problematic for the main memory of your Git server. So it doesn’t matter whether you use GitHub or any other server, they will just crash, period. If you allow developers to just migrate from Perforce or Subversion and keep the binaries, you will have endless pain.

That’s why dependency management is a good thing. I guess everybody, even the people who are currently not using a standard package manager know that it’s in theory a good thing, but many don’t know about those bonus features. These modern package managers, they do a lot more than just downloading some binary files. They know exactly all the different build systems and operating systems.

Imagine there’s a new GCC version out, or there’s a new macOS compiler out. Then normally those companies start to modify their custom build scripts, typically even multiple times, to detect that new compiler version. Or you just use something like Conan.io, where the community takes care that all of this is handled transparently.

Those modern package managers also know about the licenses of the software you’re depending on. You can even block certain licenses, so you don’t have to have this intern figuring out whether, by some magic include or preprocessor macro, you’re pulling in a GPL component. You get this out of the box, just as a report.

The same for security alerts. You can also define your own security alerts, or functional defects, if you want to, and immediately see all the components affected, so that your interns can do more important stuff than this error-prone detective work.

Then if you have any kind of SDK, let’s say you’re managing an IoT fleet, then hundreds of thousands, or even millions, of devices, be it cars, be it toasters, have to download your software. You don’t want to serve this from a single server; it’s just not possible. You want to automatically distribute it across the globe. You want to have export restrictions in place. You want to have reporting. You want to have signing. All of this is included in modern package managers. You don’t have to build stuff like this from scratch if you’re just working with the state of the art.

And last but not least, that’s probably my favorite feature. I’ve seen cases where you’re basically using a new version of a library and it just crashes; you just get a [inaudible 00:14:25] whenever you start it, and you don’t know why. The source code looks exactly the same as the old version. And then it turns out it was actually a different compiler option, a different optimization level, that was used.

If you have this metadata, what compiler versions, what optimizations have been used to produce each of your libraries, you can just compare them from version to version to make troubleshooting much easier.

Those package managers also have a built in promotion concept, meaning you can, for instance, give your developers freedom in the exploratory phase to use any kind of C and C++ libraries and later you say, now it’s going into real production, now I want to lock this down to 10 packages, which I really trust.

So all of those things are on top of the basic functionality of package managers, and the next slides are not here for me to talk through. They’re more so that, if you want to do an internal presentation at your company later on, you just have the material: also why submodules are no silver bullet, and why Git LFS is not a replacement for proper dependency management.

In a nutshell, they have certain functionality, but they don’t have any of those goodies like security alerts, semantic versioning, dependency conflict detection, or license detection. And neither submodules nor Git LFS know anything about the different dimensions which are relevant in C or C++ either.

Let’s switch gears and talk a little bit more about Conan, which is a package manager specifically designed for the needs of C and C++ programmers.

Some of the people who are driving this project are in the room and they know way more than I do about this. They’re also at the showcase upstairs about Conan, so ask them if you have more questions. They developed this based on decades of consulting experience in the C and C++ world. They know their stuff, and they made sure that it’s working on multiple operating systems, as it’s written in Python. It’s fully cross-platform, including cross-compilation. It knows a ton of different build systems, operating systems, and all the specifics between them. It’s not just a hobby project; in fact, JFrog acquired the commercial parts of Conan and provides support for it as well.

Conan doesn’t work differently from any other package manager in the sense that it just figures out the dependency graph. It also has conflict resolution, so in this picture you can, for instance, see that OpenSSL and Boost both depend on zlib, and Conan negotiates that they get a version that both consumers can live with, so that you don’t get into any funny linking problems.

Conan has a free open source reference implementation for serving the packages, but it’s also built into JFrog Artifactory and JFrog Bintray, and there’s DSL support for almost every single CI/CD solution out there. You will see more when I come to the live demo with Azure Pipelines.

The way Conan works is that whenever you’re asking for a library, let’s say libboost for this Mac in 64-bit, it just takes all those different dimensions that matter, creates a checksum, and checks whether there’s already a binary inside the Conan server. If so, it just downloads it. If not, Conan also has this concept of so-called build recipes, so it’s possible to build things locally if they are not yet in the upstream repository. It’s not just the binaries; it’s also a recipe for how you can produce those binaries.
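To make that concrete, here is a toy sketch in Python of the idea. This is not Conan's actual package-ID algorithm, just an illustration of the principle: every dimension that affects the binary feeds into one checksum, so the same library version still maps to a different binary package for every permutation.

```python
# Toy illustration of a binary "package ID": hash every dimension
# that affects the produced binary (NOT Conan's real algorithm).
import hashlib

def package_id(settings: dict) -> str:
    # Sort the settings so the same configuration always hashes identically
    canonical = ",".join(f"{k}={v}" for k, v in sorted(settings.items()))
    return hashlib.sha1(canonical.encode()).hexdigest()

mac64 = package_id({"os": "Macos", "arch": "x86_64",
                    "compiler": "apple-clang", "build_type": "Release"})
mac32 = package_id({"os": "Macos", "arch": "x86",
                    "compiler": "apple-clang", "build_type": "Release"})

# Same library version, different architecture: different binary package
assert mac64 != mac32
```

If a binary with the requested ID already exists on the server, it can simply be downloaded; if not, the build recipe can produce it locally.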

So that’s, I’d say, enough theory. Let’s come to an example. As I work at GitHub, I thought I shouldn’t do just a simple hello world project, but something which is closer to GitHub. At GitHub, it doesn’t matter in which team you work, there are 12 common principles which we always follow, something like common ground on just doing the right thing. We call those the Zen of GitHub. I could easily do an entire hour just about the Zen of GitHub, but don’t be afraid, I won’t do it.

The only thing I wanted to mention is that you can also get to the Zen of GitHub if you call our API, api.github.com/octocat. You always get a random Zen of GitHub in this ASCII art, like “It’s not fully shipped until it’s fast.”

So the idea now is, let’s build a Conan application which is using libcurl, the C library, to get us a random Zen of GitHub, but only do it if we are in a Git repository. If the working directory is a Git repository, then give us this nice Octocat ASCII art; otherwise, don’t.

The project is called the Zen of GitHub, and it’s on GitHub. Inside the repository README, it’s described exactly how it works and what the output should be. You also see this dependency graph, and the source code is really pretty straightforward: it’s just the includes for libzengithub, the custom library, and for libgit2.

Whether we are in a Git repository, we figure out in this if statement, and then we just call the Zen of GitHub main function. Otherwise, we just say that you have to switch into a Git repository.

The dependencies on Conan libraries we declare in a conanfile.txt, and here you see we depend on the custom Zen of GitHub library and on libgit2, too. It’s also possible to specify version ranges, although this is pretty unusual in the C and C++ world.
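A minimal conanfile.txt for such a project could look like the following sketch. The package references and versions here are illustrative, not necessarily the exact ones from the demo repository:

```ini
[requires]
# the custom library, served from a custom Conan remote
zengithub/1.0@jonico/stable
# a common library, available on the central repository
libgit2/0.27.7@bincrafters/stable

[generators]
cmake
```

A version range would be expressed as something like `libgit2/[>0.26 <1.0]@bincrafters/stable`, which is exactly the semantic-versioning style that is still unusual in C and C++.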

So let’s actually get those dependencies from Conan now. The steps are documented in the GitHub repository. I am first cloning something. Is that visible? Yeah, I think so. I’m just making it a bit bigger. I’m cloning this repository and I’m creating a build directory. That’s the first thing I do. The second thing I do is install those dependencies on libzengithub and libgit2: conan install. And this will actually fail.

The reason why it’s failing is that while libgit2 is a common project in the standard conan-center repository, this is not the case for this custom libzengithub. I just built this on my own, so I have to add another Conan remote pointing to my Bintray, where I have the package recipe for my custom library. Inside companies, you typically only use custom remotes, to just ensure immutability and [inaudible 00:21:35] control.

Now I have added this remote, and I’m trying to install the dependencies again, and this time it should actually work. It’s downloading those dependencies. If I wanted to just get the recipe and then build something locally myself, I could also use this --build argument, and then it would locally build libzengithub for me.

Now that I have everything installed, I have to include it somehow in my CMake files. For that, I am generating a snippet just for a release build, and this snippet, for anybody who’s interested, is then just included in my main CMake file. I basically have a macro which is initializing Conan, and it will then populate the CONAN_LIBS variable, which I just add as a link target.
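The CMake side of that wiring might look roughly like this. The project and target names are placeholders, but `conanbuildinfo.cmake`, `conan_basic_setup()` and `CONAN_LIBS` are what Conan's cmake generator provides:

```cmake
cmake_minimum_required(VERSION 3.5)
project(zenofgithub C)

# Include the snippet that "conan install .. -g cmake" generated
include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
# Sets include paths, library paths and the CONAN_LIBS variable
conan_basic_setup()

add_executable(zenofgithub main.c)
# Link against everything Conan resolved (libzengithub, libgit2, ...)
target_link_libraries(zenofgithub ${CONAN_LIBS})
```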

And last but not least, I build the entire thing and try it out. I go to the bin directory and call Zen of GitHub, and it’s just telling me that I’m not in a Git repository, as expected. So let’s create a Git repository, and repeat, and here we go: “Keep it logically awesome.”

All the steps I have done, they are also part of the slides. Like this dependency graph, as you see that there’s also some indirect dependencies being drawn in too. You see how you can add your custom Conan remotes. How you can clone repositories. How you install dependencies. How the semantic versioning works. How you can build certain libraries from scratch. If you’re interested just in the build recipe, how you can change those dimensions and whether you want to have it statically or dynamically linked, and how you can call the help.

That’s a good point: whenever you need any help with Conan, it’s a really well-documented open source project. So if you go on GitHub and check out the documentation, you will find everything you need.

Now we have seen how to actually consume a Conan dependency. So how do you create a library which can be consumed? One calls this Conanifying a library, to have a Conan version of it. I’m showing this based on the Zen of GitHub’s libzengithub library, which is basically a fancy wrapper around libcurl. If you look into the source code of libzengithub, it’s basically just using libcurl to call this API.

But it does have a manifest with the build recipe in it; I just have to find that. That is basically the manifest, which is saying: I’m this library, this is my version, this is my license, and these are the options. I support statically and dynamically linked options. I want to include the source code, and this is the dependency I depend on.

I don’t have to come up with that from scratch. If you have Conan installed, there’s a nice command called conan new that automatically creates a scaffold for you, so that this manifest for your library is created automatically. Even better, there’s a community called Bincrafters where you can find hundreds of standard libraries already in a Conanified fashion. If you want to know how to Conanify OpenSSL, or libcurl, or Boost, don’t do it yourself. This is all already on GitHub. Just search for Conan and the name of your library, and you’ll probably find a good example of how this is working already.
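For orientation, a recipe along the lines of what `conan new` scaffolds could look like this sketch; the names, versions and the libcurl reference are assumptions for illustration, not the exact demo recipe:

```python
from conans import ConanFile, CMake

class ZenGitHubConan(ConanFile):
    name = "zengithub"
    version = "1.0"
    license = "MIT"
    settings = "os", "compiler", "build_type", "arch"
    options = {"shared": [True, False]}
    default_options = "shared=False"
    exports_sources = "src/*"
    # the dependency this wrapper is built on
    requires = "libcurl/7.61.1@bincrafters/stable"

    def build(self):
        cmake = CMake(self)
        cmake.configure(source_folder="src")
        cmake.build()

    def package(self):
        self.copy("*.h", dst="include", src="src")
        self.copy("*.a", dst="lib", keep_path=False)
        self.copy("*.lib", dst="lib", keep_path=False)
        self.copy("*.so*", dst="lib", keep_path=False)
        self.copy("*.dylib", dst="lib", keep_path=False)

    def package_info(self):
        self.cpp_info.libs = ["zengithub"]
```

This is exactly the manifest mentioned above: name, version, license, the static/shared option, the exported sources, and the declared dependency, all in one file that Conan loads.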

But let’s also build this Conan library locally. In this case, I’m just cloning the libzengithub repository, cd-ing into it, running the test case, and building the entire solution. If this works, we will see another cute Octocat coming up. So, this worked. The last step now is to upload this back to our remote so that other people can also consume this library.

And here you may now say: okay, Johannes, what’s the big deal? You just built one version of the library, just for your target architecture, which is a Mac, which is typically not the embedded target you would have in production. How can I build for all the operating systems and permutations we need in my company? How can I cover all of those permutations of 32-bit, 64-bit, Mac, Windows, GCC 5.4, 5.3, and so on?

I’d like to demo this based on Azure Pipelines to build all of those things. It all starts with GitHub, again. I will go into a pull request I created for this warm-up addition to libzengithub.

What you can see is that the build definition, how I can come up with such a pipeline, is part of the source code itself. Can I get a show of hands if you know about declarative pipelines? I would say probably 70, 80%. This is pretty standard outside C and C++; in the C and C++ community it’s still something unusual, I would say, at least from my experience. The idea is as we heard in the keynote.

At GitHub, you want to safely experiment, even with your build pipeline. How often does it happen that you go to your release manager and say, “I have a great idea how we can cut our build time in half, if we just use this and this trick.” And they say, “Awesome idea, but we have a release in two weeks.” And in two weeks, you have something else to do. So if you have the build pipeline versioned with your source code, you can just safely experiment in a pull request.

You see my first experiment actually failed, but then with the second commit in this pull request, you see that all the tests ran through successfully. So treat your build pipeline as code. If we look into this declarative pipeline for Azure, you will see that we are doing a matrix build: we are building on Linux, Mac, and Windows in parallel. You also specify all the different architectures you want to test this on. And at the end you’re basically just calling the build script, which is done here in line 58. I want to give you some more idea about this build script, because this is where all those permutations come in.
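As a sketch, the matrix portion of such an azure-pipelines.yml could look like this; the image names and steps are illustrative, not copied from the demo repository:

```yaml
strategy:
  matrix:
    Linux:
      imageName: 'ubuntu-16.04'
    macOS:
      imageName: 'macOS-10.13'
    Windows:
      imageName: 'vs2017-win2016'

pool:
  vmImage: $(imageName)

steps:
  - script: |
      pip install conan conan_package_tools
      python build.py
    displayName: 'Build all Conan package permutations'
```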

The matrix in your CI/CD system gives you the operating systems, but to get all of those different permutations of statically linked, dynamically linked, or compiler versions, this is done as part of the build.py, as part of the so-called Conan multi-packager, where you specify all the dimensions you want to iterate over.
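A build.py along those lines, using the `ConanMultiPackager` from the Conan package tools, could be sketched like this; the username, channel and option name are placeholders:

```python
# build.py: enumerate all binary permutations for one library version.
# Requires the "conan_package_tools" package.
from cpt.packager import ConanMultiPackager

if __name__ == "__main__":
    builder = ConanMultiPackager(username="jonico", channel="stable")
    # Adds the common permutations: shared/static, 32/64 bit,
    # Debug/Release, and the compiler versions of the current platform
    builder.add_common_builds(shared_option_name="zengithub:shared")
    # Example of pruning a subset, e.g. if you do not need 32-bit binaries
    builder.remove_build_if(lambda build: build.settings["arch"] == "x86")
    builder.run()
```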

If this is too abstract, let’s just check what Azure Pipelines did for this pull request. We go to the Checks tab on GitHub and see how this was actually built. There we see this matrix job, those three different nodes, and inside those steps building the solution, you can then figure out that it’s actually going across many, many different permutations: 32-bit, 64-bit, statically linked, dynamically linked. And I guess in your company you will probably have many custom dimensions, which you can just add to Conan, and then you just say over which of those dimensions to iterate. You can also delete certain subsets.

If you say, for instance, for Windows, I don’t care about 32-bit binaries, you could also remove them from the build set again. This is an example of how it’s done with Azure Pipelines, but you could also easily do this with any other CI system.

The Conan community itself, for instance, is using Jenkins, and they have a live CI instance for Conan at JFrog [inaudible 00:30:49], where you can also log in read-only and learn how they, for instance, are building packages.

Here is the Groovy DSL where they go over multiple operating systems, and there’s even a built-in DSL in this Jenkinsfile where you can easily connect to Artifactory repositories and upload packages. If you go into my GitHub repository for libzengithub, you will also find examples for AppVeyor and for Travis CI.

I hope you’ve got an idea of how a DevOps or non-hipster pipeline could look. It all starts with being able to safely experiment with your build definition. You don’t have to pray to a build manager to change the build definition for everybody. You just experiment in your pull request in isolation, and if it works well, it can be merged and everybody can benefit from it.

This pull request will then trigger your CI system, which gets the dependencies of your library from JFrog Artifactory. Then you have this Conan multi-packager to go over all the permutations of operating systems and build flags to generate all the packages you need, and upload them back to Artifactory. And if ops agrees with them and wants to really publish them to customers, they can do this by artifact promotion to JFrog Bintray or JFrog Distribution, so that your customers can also download them from a CDN: they are not hitting just your single server, but get them from the closest server on the planet.

Why do you want to do all of this? Well, because you want to have your weekends back. You don’t want to treat releases as something big in your life where you can’t go on vacation and where you pray because so much risk has accumulated. You also want to develop something that your customers actually want, and the only way you can do this is if your customers can tell you that sometimes what they specify is not what they really need. And you don’t want to have your interns wasted on figuring out impact and all of those things; you want to have them work on productive stuff. You don’t want to waste anybody’s time adding support for OS X Yosemite on your own, or maybe even by three different people in three different parts of your organization. You just want to rely on the open source community, standing on the shoulders of giants here.

It’s really important that it’s not about the tools first. I’ve shown you how you can do this with GitHub, Conan, Artifactory, and Azure pipelines, but it’s really important if you want to convince your team to talk about the why, not about the how.

You first have to build the buy-in for why you should do that. Why you want to have your weekends back. Why you shouldn’t celebrate your releases. Why you need to have immutability.

I hope that this talk gave you some ideas of how you can actually accomplish this, not just the how, but also the why. So please go back to your teams and tell them about it, and you will see that even if you don’t see yourself as a hipster, I certainly don’t, it makes sense to join the continuous delivery and DevOps bandwagon. Thanks for your time.