Use Case – Does size matter? CI/CD in a large scale infrastructure

Abstract:

Loreli Cadapan and Rohit Kumar / Oracle, May 2016: Metrics-driven complex orchestrations: To orchestrate asynchronous processing on multiple systems (Jenkins, Artifactory, distributed test environments) into overall continuous delivery pipelines, we developed a Clojure-based Domain Specific Language (DSL) that provides high-level macros for expressing and graphing complex and highly configurable CI/CD workflows for Oracle product teams. Along with sample use cases of product continuous delivery pipelines, we will cover statistics related to the usage of this event-based orchestration server.
Docker registries at scale:
At Oracle, numerous product teams utilize Docker as part of their cloud continuous development. Each product team is able to use their own internal Docker registry based on Artifactory’s Docker support, enabling teams to manage their projects in a distinct registry, exercise better access control over their Docker images, and share images across the organization.

Talk Transcription:

[Loreli] So, you’ve probably read the title. So despite the name of our talk, I can guarantee you Donald Trump won’t be here to present. Sorry Jim. Sorry to disappoint you.

Okay, so instead we’re going to be talking about how we scale our continuous integration and continuous delivery at a large organization such as Oracle. So, my name is Loreli and my co-presenter is Rohit Kumar. And let me just give you guys a little bit of an introduction of who we are and the teams that we work for. We’re in the central development engineering team in Middleware at Oracle. And the charter for our team is really to improve product quality and developer productivity, making CI/CD much quicker, improving the velocity of our pipelines, and also enabling product teams, giving them the autonomy to do their product builds well and improve their productivity as well.

Okay, this is our safe harbor statement. I’ll let you guys read it for a little bit. If you can read, hopefully. Okay.

All right. Growing pains. So in a large company such as ours, we had to go through various growing pains. But the two biggest ones that I can definitely point to are dependency management, meaning how we managed the lifecycle of our binaries and how we consume our binaries, and continuous integration and continuous delivery across product teams and across distributed systems. Let me give you guys some background on where Oracle was and how we wanted to improve.

So essentially we wanted to improve how we consumed our dependencies. In the past, the development teams had to consume their dependencies through the same SCM where their source code lived, and so we needed to decouple that. The other one was being able to test. And in order to test, we needed to support not just our continuous integration system, but also our test farm, our binary repositories, and so forth.

Okay. So the first growing pain, which is our dependency management. Again, our source control and our binaries lived in the same tool, so it was very tightly coupled, and we wanted to decouple that. ADE was the SCM that we were using at the time. And now we wanted to get away from that and provide flexibility for product teams to consume differently.

The other one was coarse-grained consumption. We realized that product teams, because they were consuming from a label, an ADE label, just basically said, well, okay, I’m going to depend on this full directory structure. So there wasn’t really a way to define your dependencies at a much smaller granularity.

The other one was the circular dependencies. Product teams were depending on each other, it was very incestuous, and it was really difficult to be able to manage that.

So eventually we realized, okay, we need to change. We need to move on from having a tightly coupled binary repository, and hence we decided to go with Artifactory. Artifactory was something that we decided on after evaluating a couple of other vendors, such as Nexus. And the main reason why we chose Artifactory is basically that it provided better service, better support, and was just a better product in itself.

So we now have six instances of Artifactory in six different data centers, and all of them are HA instances, the main one being four nodes and eventually becoming five nodes. So now, within the Middleware organization, everyone is consuming through Artifactory. There’s no more consumption through labels and so forth, and it has actually increased the velocity of our pipelines.

Okay, so we had actually presented this last year, so it’s probably not as impressive anymore, but we wanted to give you guys a sequel, a continuation of where we are. At this point we’re using 80 terabytes of storage and have 47.1 million artifacts. On a daily basis we service about 39.1 million requests. The majority of those are actually download requests or pull requests, and then about 1.5 million are actually publishes. We did a quick analysis just last week to see how many terabytes we move on a daily basis in terms of data flow, and we realized we’re actually producing about 85 terabytes per day. Again, six global data centers, and our nodes are typically about 256 GB in memory size.

So, ramping up. At the very beginning of 2013, we started with no storage in Artifactory, and now we’re at 80 terabytes. I don’t know if you can tell the difference between the blue and the black marker, but around late 2015 we realized, okay, we need a better model for how we manage our storage, how we manage the size of our storage. And hence we decided we needed to do some continuous deletion, continuous purging of our repositories. So around this time we were stabilizing the storage usage, and then at the black marker we decided, okay, we needed some way to control the quota, having quota management in place for the different product teams. We realized some product teams were much more of a hog in terms of usage than others, and hence we needed to identify those product teams and make sure that we were managing the storage and so forth.

Okay. Growing pain number two. CI/CD. So how do we scale our continuous integration and continuous delivery? As I mentioned before, we were supporting not just the continuous integration systems, but also other distributed systems such as our test farm, Jira, and various other systems that we needed to integrate with within our pipeline. And hence we realized, okay, well, we can’t just have some Jenkins jobs that we support and hook them up into a pipeline. We needed something better. And we realized that we needed to introduce a new system, which is essentially an event-based system.

Again, the pipelines were spanning several development teams. These teams varied in size, and now we had to support about 6000 developers, all doing their builds in parallel, all publishing to Artifactory. The other thing that we realized is that some of the development tools were actually getting overburdened. We couldn’t use just one Jenkins instance; we needed to have multi-master.

And the other one is visibility. How do we get visibility into our pipelines? Who’s promoting what? How often are product teams promoting? Which product teams are actually not promoting, and so forth?

And so what we came up with... well, actually, let me give you guys some background on how we got started. So we started with one master. It was actually a Hudson master at the time, not Jenkins. And this master, obviously, was memory bound and IO bound. But we realized, okay, we needed to be able to support multiple teams. In the beginning, when we were onboarding teams, there were about two or three product teams onboarding, and they were fine with just one instance. But soon after that, we realized, okay, we cannot support 6000 developers on just one Jenkins instance. And so we came up with a multi-master approach.

So we decided we would go per organization: each organization had their own master, a Hudson or Jenkins master, with a designated number of slaves. Each master, or each organization, was able to control their plugins, control what versions of the plugins they wanted, and control the various software that needed to be installed on the slaves.

Okay. And then eventually what we realized is that some of the organizations, or some of the masters, had slaves that were very idle, and we needed to come up with a way to utilize our resources much more effectively, much more efficiently. And so we decided to also integrate Mesos behind that, which allowed us to use the resources much more efficiently and also to provide some of the resources that we had […] to other systems, like our test farm. And with this integration we were able to get some analytics and reporting in terms of how many resources we really needed at a time, and which organizations were needing more resources than others.

So then we came up with: okay, we have the multi-master, we have various Jenkins instances, but how do we integrate this into a pipeline, into a promotion pipeline? How can we test our product builds using, let’s say, our test farm, what we call the D-to-E topology? How do we integrate our documentation and doc translation, wptg? How do we integrate our patching system? How do we integrate Artifactory? How do we integrate our visualization? And hence, we came up with Carson.

So Carson is essentially an event-based orchestration. And we actually have someone here that started it all. It was his baby, and we’ve pretty much taken care of his baby since he left Oracle. But Carson has since evolved, Carson has since scaled up, and we’ve had a lot more product teams onboarding. So not just on-premise teams; we’ve got cloud teams onboarding, we’ve got documentation product teams onboarding as well.

And so I will let Rohit go through this architectural diagram, but essentially the benefit that Carson was providing was being able to integrate these distributed systems across our organizations.

So here’s just an example of one promotion pipeline that we had initially. It starts off with a developer checking in a code change into whatever SCM they were using, whether Perforce, ADE, Git, you name it, and then from there triggering a build on a CI system such as Jenkins or Hudson, and then publishing that into Artifactory. And once they publish into Artifactory, this pipeline, essentially, will test that specific product against the latest promoted product.

So that happens multiple times a day now; it used to happen much less frequently. And then we also have a second pipeline, which is right here, which is called the level two pipeline, and that is essentially doing our integration testing: how do we integrate all of the products into one pipeline?

And based on that, some of the things that show what Carson can do at scale: we’ve got 460 orchestrations. So the pipeline that I just showed you before is just one orchestration; we have 460 different orchestrations that product teams have onboarded and created through Carson. Per day, we have about 18,000 orchestration executions. In terms of how many Hudson and Jenkins masters we support, we have about 238 right now, and about 515 VMs for our slave agents.

In terms of events, how is Carson able to scale? We get about 81,000 events that are processed by Carson per day. A little more than half of those are actually coming from our CI servers, our Hudson and Jenkins, and then a few are coming from our test farm, where we’re actually doing the tests. So essentially, when we do the promotions and we do the tests within the pipelines, we need to get events back on whether a product preflighted or actually succeeded or not. And so those are all based on events.

Okay. So I will let Rohit go on with Carson.

[Rohit] Thank you Loreli.

[Loreli] Thank you.

[Rohit] Okay. Just going back. Right. So, Carson actually came about because we needed a system that could orchestrate across a lot of these applications, multiple CI server instances. Product team A could have a master but would need to call a job on another Jenkins master. So that was the first reason that Carson came about. And that feature that Carson provides, we call it the orchestration server.

So it’s basically for orchestrating workflows across multiple distributed systems. Now what exactly does that mean? Carson is event driven. One of the things that we realized was that if you had pipelines on one single master that needed to call jobs on another system or on another master, then instead of polling for status or relying on all that, if you rely completely on an event stream, whether the events come from other Jenkins masters or from other distributed applications, it actually drives the orchestration much more efficiently, much more reliably.

And so, for example, SCMs. You have a Git commit, someone pushes a change to master, for example. Any kind of SCM, whether it’s Git, whether it’s Oracle’s own SCM, things like that. So events are just JSON-encoded data that encapsulate some kind of state change, some kind of transactional information.

Then obviously jobs on Hudson and Jenkins CI server instances. For all of them, whenever a build succeeds, whenever a build fails, whenever a build starts, events like that. Similarly tests: tests completing, tests starting, and things like that. And you can imagine events coming from Jira systems, the Jira application, which is like someone updates an issue, an issue is created, the different kinds of story types that you get there, so deploy and change requests that get accepted. You get an event back, and you can actually move your pipeline forward. Right?

So we are getting events basically using a messaging layer in between. So we’re currently using, you know, a […] broker such as […]. And at the same time we also provide a webhook subsystem where you can do just a simple HTTP POST and notify the orchestration server. So what this allows us to do, the way it’s been written, is that you can always plug in new event streams, or new job systems. So a lot of this is pluggable, and when I talk about the architecture it will become a little bit clearer.
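To make the event layer a bit more concrete, here is a minimal sketch of what a JSON-encoded event and a webhook notification might look like, assuming the clj-http and cheshire libraries. The event fields, the /events path, and the post-event! helper are illustrative assumptions, not Carson's actual schema or API; it simply shows the idea of a state change encoded as JSON and delivered by a plain HTTP POST.

```clojure
;; Hypothetical event payload and webhook POST; names are illustrative only.
(require '[clj-http.client :as http]
         '[cheshire.core :as json])

(def example-event
  {:source "jenkins"                     ;; which job system emitted it
   :type   "build-finalized"             ;; the state change being encapsulated
   :job    "myorg/myproject-build"
   :build  142
   :status "SUCCESS"})

(defn post-event!
  "Notify the orchestration server of an event via a plain HTTP POST."
  [server-url event]
  (http/post (str server-url "/events")
             {:body         (json/generate-string event)
              :content-type :json}))

;; (post-event! "https://carson.example.com" example-event)
```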

So this is the architecture. I know it’s slightly hazy, you can’t make that out. All of the job systems and event sources are on the left. So you can have multiple Hudson masters, multiple Jenkins masters, any kind of CI server. Our requirement from all these job systems or event sources is just that we need events from them. So we have plugins for Jenkins and Hudson that are deployed whenever we provision a new master, and that plugin generates events: a build gets finalized, a build starts, and we’re able to publish those events to a message broker.

Similarly with some of the other things that we have internally in Oracle, like the documentation and translation applications as well as the test farm. Of course Jira and Artifactory, especially Jira, provide a webhook plugin where you can define when a webhook would get activated, what kind of POST it would send, things like that. Right? Same thing with SCMs.

So, on the left are event streams and job systems; in the middle we have the messaging layer, webhooks plus message brokers. So messages come in there, they get into the orchestration engine, and that simply drives the pipeline: a job finishes, a job starts. Because we’re getting a bunch of events, we’re able to orchestrate across all of these. Right?

So some of the elements of Carson: obviously there’s a REST API, there’s a UI, and it’s backed by a transactional database. And, as I mentioned, the event streams as well as the job systems are pluggable. So let’s say you have a new CI server that you want to start sending events from and create a pipeline for; you can just plug it in. If you’re a job system, all we need is a REST API for invocation, meaning we need to be able to use a REST API to invoke a job, to start a build, on your application, on your CI server. Right? And the other thing, the language that we provide is a DSL. We provide a DSL that you can use to write these orchestrations that span these job systems. And there’s a bunch of related services that use Carson to provide visualization, to provide analytics, and to also feed data into the orchestration server. Right?

So that’s the larger view of the architecture diagram. Now I just want to spend a little bit more time on how this is different from, let’s say, the Jenkins 2.0 pipeline plugin, where pipelines are a feature in the CI server.

So one of the things, like I’ve been driving at: event streams can come from anywhere, and if you have a pipeline that today is on Jenkins but tomorrow you want to use GoCD or some other CI server, and you want to be able to orchestrate across these instances, Carson provides you that capability. Job systems are event generators doing actual tasks and steps; those are usually synchronous operations. The orchestration server that Carson has is mostly asynchronous actions: it kind of dispatches actions and reacts to events that come back. It’s not doing processing of its own, so it’s just lightweight event handling and dispatch. So any kind of workflow that you want to implement, you use the orchestration server for that.

So the way we allow users to define their pipelines, because now there’s a new application that is spanning all of these different job systems as well as event streams: Carson from the beginning was written in Clojure. And one of the reasons that it was written in Clojure was that we did not know what kind of event streams would come down the line, what kind of new job systems would have to be incorporated. And it allowed us to create a DSL that was always evolving. Right?

So orchestrations are defined using a Clojure DSL and compiled by the orchestration engine. The fundamental element of our DSL is a handler, which is basically composed of three things. The first is the channel, the job, or the event stream that you’re responding to. So it’s a channel that you define based off of the event stream that’s coming in: Hudson, Jenkins, farm, Jira, things like that. That’s the first element.

The second is what kind of filters you’re going to apply on that event stream: whether a job is successful, if it’s a job event, whether it failed, things like that. And the actions are the downstream activities you’re doing. So think of actions as the downstream nodes or jobs in the pipeline that connect from this current node. So the DSL usually spans multiple masters, multiple Jenkins hosts, multiple job systems and event sources. And it allows you to do things like: if you have a central job, let’s say one that scans your code, multiple pipelines can call into it. The job itself doesn’t have to change anything; it can be part of multiple pipelines, and the orchestration is going to take care of correlating a particular build of that job with the orchestration instance that invoked it. Right?

So just a little bit on what the DSL looks like. It’s obviously a Lisp, so, you know, it looks like that. And, okay, I’m a huge Terminator fan, so that’s why the name is T800.

So this is the simplest orchestration, right? It’s wrapped in a def-orch form, and the defstart handler is a handler for starting an instance of the pipeline, one run of your pipeline. So coming back to the three elements of the handler: the channel, the predicate, and the action block. If you were to expand what this DSL is doing, it’s saying that for events coming from the DCS channel, which is the developer code service, you know, Git stuff, on that event channel, for any event that comes in, check that the organization in the Git commit or the Git push is myorg and the project is this project. Right? And obviously all of these filters are ANDed together, so it’s that kind of operation. And, like I’ll come to, you can always define your own custom predicates. The orchestration is lightly sandboxed, but you’re free to define very complicated filters there, like checking that it’s Monday, or that you committed to the readme, or anything like that.

Then we come to the third section, which is the then block, the action block. This is where you dispatch actions: you take further actions that move the pipeline forward. And, like I said, our interfaces to all these job systems, where we are dispatching work, are usually just REST API calls. Obviously Hudson and Jenkins provide a REST API; there are issues with it, but we’re still able to work around them and start builds on whatever jobs you want.

So this particular start job, for example, all it’s doing is saying: for my project build, which is a job on a Jenkins or Hudson master called myorg, start a new build, take these parameters, there’s a parameter name and a parameter value, and within the context of this orchestration, just call it the build job. Right.
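Since the slide itself is not reproduced in the transcript, here is a rough reconstruction of the kind of defstart form being described, quoted so it reads as plain data. Carson's DSL is internal to Oracle, so the macro names, keywords, and argument shapes below are assumptions pieced together from the talk, not the real API.

```clojure
;; Hypothetical reconstruction of the three-part handler: channel, predicates,
;; then (action) block. Every name below is assumed, not Carson's actual DSL.
'(defstart t800-start
   ;; 1) channel: the event stream this handler listens to (DCS / Git events)
   :channel :dcs
   ;; 2) predicates: filters ANDed together over each incoming event
   :when [(org=     "myorg")
          (project= "myproject")]
   ;; 3) then block: actions that dispatch work and move the pipeline forward
   :then [(start-job "myproject-build"
                     :host   "myorg"               ;; the Jenkins/Hudson master
                     :params {"GIT_SHA" :event/sha}
                     :as     :build-job)])         ;; name used within this orchestration
```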

The second one is just showing you one of the complex DSL elements that we have, which is pretty unique to the kind of orchestration capabilities that you might need. This one is saying: if, in this orchestration’s config for this specific instance, there was a jobs list specified dynamically, so if I needed to trigger five different jobs, which could differ each time, for each run of the same pipeline, but I still want to react to it the same way, you can do that. You wrap your start-job calls into an overall block and it’ll […] dispatch a build on each job, and then you’ll actually be able to react to the whole set of jobs, the group of jobs, finishing together. Right.
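A runnable toy, in plain Clojure rather than the Carson DSL, of the idea behind that dynamic block: the list of jobs comes from the instance's config at runtime, every job is dispatched, and the whole set is tracked as one group that downstream handlers can react to. The dispatch-build! helper is a stand-in for the real REST dispatch.

```clojure
;; Illustrative only: dispatch-build! just prints instead of calling a CI server.
(defn dispatch-build! [job]
  (println "starting a build of" job)
  {:job job :status :dispatched})

(defn start-jobs-group
  "Dispatch every job named in this instance's config and return them as one group."
  [instance-config group-name]
  {:group  group-name
   :builds (mapv dispatch-build! (:jobs instance-config))})

;; One run of the pipeline might fan out to three jobs, the next run to five:
(start-jobs-group {:jobs ["build-en" "build-fr" "build-ja"]} :translation-builds)
```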

So some of the examples. We started off with defstart, right. There are a few other elements of the DSL, so there’s a defstart, there’s a deftransition, things like that. But fundamentally all of them are doing the same thing: you have a channel, or you have a job that was previously called in the orchestration, and when it finishes, or when an event comes into a channel, you react to it, right?

So this one, for example, says: a Hudson job finishes, start a build of job name […]. When that finishes, start these two jobs, but refer to them as a group. Right? So in your visualizations, you actually see them as related: these two jobs are related, I want to refer to them as these two jobs […]. Correct? And the deftransition, all it takes is a previously started build; you’re able to rely on that finishing. The orchestration engine is abstracting out all the event handling. It’s able to correlate whether a job that you’ve previously dispatched finished or not, because it’s getting a firehose of events from all of these job systems and event streams. Right?
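A small runnable sketch of the correlation the engine is doing under the hood, under the assumption that it simply remembers which builds each orchestration instance dispatched and matches incoming finish events against them. The event shape and names are illustrative, not Carson's internals.

```clojure
;; Map of dispatched build -> the orchestration instance that started it.
(def dispatched
  (atom {"myorg/myproject-build#142" {:orchestration "t800" :instance 7}}))

(defn handle-event
  "Fire a transition when a previously dispatched build reports completion."
  [event]
  (let [build-id (str (:job event) "#" (:build event))]
    (when-let [ctx (and (= "build-finalized" (:type event))
                        (get @dispatched build-id))]
      (swap! dispatched dissoc build-id)
      (println "transition fires for" ctx "with status" (:status event)))))

(handle-event {:type   "build-finalized"
               :job    "myorg/myproject-build"
               :build  142
               :status "SUCCESS"})
```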

And, like I showed in the previous slide with the start jobs macro, one of the fundamental things about the orchestration engine is that it allows you to do dynamism. So you’re not limited to pipelines that have a very static shape; you can define your pipeline and say that this particular block in my pipeline is going to be dynamically rendered. Right? I don’t know what kind of jobs might get executed here; I will let the developer decide, let some configuration decide, things like that. Right?

So just to give you an example. A translation request comes in, and you have one single Hudson job that does that operation. It makes some changes, it builds something. So you might sometimes get a request to build in, let’s say, thirty different languages; sometimes you might get a request to build in just two languages, things like that. Right? But in your pipeline, and in your visualization, it’s just that: okay, all the requests that I got, I’m building them, each build is happening for each different entry that I got, but I’m able to visualize it as just one set of jobs. Right? So it gives you parallelism, but it also gives you a layer of abstraction for reporting and visualization. That is pretty useful for any kind of workflow, right. Not just...

Right, so some elements of the features. Like I previously mentioned, why we chose Clojure: it’s a Lisp, there are functions, protocols, multimethods, and things like that. So for example, the filter predicate that you see here, called successful: you don’t need to specify here what kind of job you’re applying it to. Clojure’s multimethods and things like that allow you to infer the type of the object that you’re working on. Similarly protocols; think of them as very generalized interfaces. So all of our job systems are protocols. Our messaging layers are also implemented as a protocol. So if tomorrow we want to add, let’s say, another broker like Kafka or things like that, we would be able to just plug it in and implement the protocol for Kafka, right.
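A minimal sketch of what modeling job systems, brokers, and the successful predicate with protocols and multimethods can look like. The protocol and record names are assumptions for illustration, not Carson's real interfaces.

```clojure
;; Pluggable job systems: adding a new CI server means one more implementation.
(defprotocol JobSystem
  (start-build! [this job params] "Invoke a build over the job system's REST API.")
  (build-status [this job build-number]))

(defrecord JenkinsMaster [base-url]
  JobSystem
  (start-build! [_ job params]
    ;; the real implementation would be an HTTP POST to the master's REST API
    {:dispatched job :params params :via base-url})
  (build-status [_ job n]
    {:job job :build n :status :unknown}))

;; Pluggable brokers follow the same pattern.
(defprotocol MessageBroker
  (subscribe! [this queue handler-fn]))

;; A `successful?` filter as a multimethod: it dispatches on the event's source,
;; so the same predicate works for Jenkins events and test-farm events alike.
(defmulti successful? :source)
(defmethod successful? "jenkins" [e] (= "SUCCESS" (:status e)))
(defmethod successful? "farm"    [e] (zero? (:failed-tests e 0)))

(start-build! (->JenkinsMaster "https://ci.example.com/myorg")
              "myproject-build" {"GIT_SHA" "abc123"})
(successful? {:source "farm" :failed-tests 0})
```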

So because the DSL is just a Lisp, the code looks like data. These are s-expressions that you can just go and parse like a tree, and that’s what we do: we’re able to construct nodes out of the DSL that you’ve written and visualize both the static view, how your pipeline would look when it ran, as well as, based on the runs of the pipeline, visualize the different instances of your orchestration, right.
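A toy illustration of that code-is-data point: walking an assumed DSL shape as plain s-expressions to recover the static graph of a pipeline. The forms and the start-job symbol are hypothetical; only the parsing idea is the point.

```clojure
;; Turn each handler form into a graph node listing the jobs it dispatches.
(defn handler->node [[op name & body]]
  {:handler name
   :kind    op
   :starts  (->> (tree-seq coll? seq body)
                 (filter #(and (seq? %) (= 'start-job (first %))))
                 (mapv second))})

(def orchestration-forms
  '[(defstart      start-it    {:channel :dcs}  (start-job build-job))
    (deftransition after-build {:on build-job}  (start-job test-job))])

(mapv handler->node orchestration-forms)
;; => [{:handler start-it,    :kind defstart,      :starts [build-job]}
;;     {:handler after-build, :kind deftransition, :starts [test-job]}]
```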

So just a few brief points on what kind of Clojure libraries it’s built upon. We use asynchronous handling everywhere, and the library that we use there is core.async. It’s pretty lightweight; you’re able to do asynchronous programming quite easily, the code is compact, and it has worked pretty well for us. And the thing about each orchestration is that, because each orchestration is just a Clojure module, you have complete Java interop access. Right? Obviously you can’t do things like System/exit, but you still have access to define your own functions within just your orchestration, so it’s not that you’re creating a whole plugin. You’re able to define custom matching, or custom predicates, to make really complex decisions in your workflow. Right?
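A tiny core.async sketch of that style: each event stream becomes a channel, and a lightweight go-loop reacts to events as they arrive rather than polling job systems for status. Channel names and event keys are illustrative assumptions.

```clojure
(require '[clojure.core.async :as async :refer [chan go-loop <! >!!]])

;; One channel per event stream (here, an assumed Jenkins stream).
(def jenkins-events (chan 100))

;; React to events as they arrive; no polling of the CI server.
(go-loop []
  (when-let [event (<! jenkins-events)]
    (when (= "build-finalized" (:type event))
      (println "dispatching the next stage for" (:job event)))
    (recur)))

(>!! jenkins-events {:type "build-finalized" :job "myproject-build" :status "SUCCESS"})
```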

So the DSL also provides some pretty unique features. I think I already covered the dynamic job execution, things like that. Using the DSL we’re able to support stuff like: repeat until this is successful, or resubmit this many times until it’s successful. We have pluggable throttling protocols where, for your orchestration, you can define when to throttle, what to do when an event occurs, and what to do when you release something from a throttled state, right. And how it works in the DSL, so, for example, if I go back here.

So all that you have to do in your DSL, okay, it’s not showing that. All that you have to do in your DSL is wrap your actions within these other blocks. Right? They are like hooks that change the behavior of your actions. So if you want to throttle your orchestration, within the orchestration you just define an implementation of the throttling protocol. You can obviously use predefined DSL elements, predefined implementations of the protocol.

So you can do things like: run at most five instances of my pipeline; after that, queue all of them, and then on release just do the latest run. So if a product is building and there are requests in the queue, just take the latest one, run the latest one, and skip everything else.
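A sketch of what such a pluggable throttling protocol could look like, with a "max five running, release only the latest queued request" implementation. The protocol and its method names are assumptions, not Carson's actual throttling API.

```clojure
(defprotocol Throttle
  (admit?  [this running] "May a new instance start right now?")
  (release [this queued]  "Which queued request(s) run when capacity frees up?"))

;; Run at most five instances; on release, run only the latest queued request
;; and skip everything else.
(def max-five-latest-only
  (reify Throttle
    (admit?  [_ running] (< (count running) 5))
    (release [_ queued]  (take-last 1 queued))))

(admit?  max-five-latest-only #{:a :b :c})      ;; => true
(release max-five-latest-only [:r1 :r2 :r3])    ;; => (:r3)
```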

So just to reiterate, we’re able to compose all of these DSL elements and modify the execution at runtime based on parameters or based on configs that you query from another system. So in that sense, yeah, it’s very dynamic. Even though you have an idea of what your pipeline should look like, an actual run can go and take weird branches.

So because we’re backed by a database, all the orchestration is just data and […]. We have a REST API that serves it all up, and that REST API is the basis for visualization as well as our reporting and analytics. Right?

So basically the orchestration engine, besides setting up all of these event handlers, whatever handlers you define in your orchestration, is also recording, as the events come in, what is happening on each transition. So it knows when a job starts, when a job is finished, what event made that transition happen, things like that. And this allows us to serve up JSON representations of your pipeline. Right.

So just to show you the graphs that you can construct over the DSL and the analytics. I know it’s not very clear, but this graph is a representation of the DSL, and we construct it by parsing the DSL at compile time, just looking at the elements of the DSL, the handlers, and what the relationships between the handlers are. So we’re able to construct basic static graphs of your pipeline. The pink ones that you see here are the blocks that will be dynamically rendered; they might be composed of five or a hundred jobs. Yeah, so we have reporting that is based off of that.

So we’re looking at the future of this. This is mostly event processing, preprocessing of the events that we get so that we’re able to better partition the event stream for orchestration, and making sure that, because we rely on distributed systems, we’re fine if something goes down, […] a single point of failure or not, things like that. And we’re going to explore how we can track our deployment pipelines, continuous deployment, and provide much better visualizations and analytics.

I’ll let Loreli finish actually. Just the last thoughts.

[Loreli] Thank you Rohit. So just a conclusion.

I think, as you can see, the way we’ve fundamentally designed Carson allows us to integrate with new technologies and allows us to stay up to date with the very fluid and evolving technology that we have today.

It’s event based, and it’s basically the backbone of the orchestration of our CI/CD model. Not only that, we’ve also been able to leverage Carson for other challenges that we had. One of them was quota management: how do we manage Artifactory? So we’ve implemented quota management through Carson. We’ve also implemented repo provisioning through Carson; this was before Mission Control came into being. So we actually have REST APIs that support provisioning new repositories into Artifactory.
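For the repository provisioning piece, here is a minimal sketch of the kind of call such a flow can make against Artifactory's repository-creation REST endpoint (PUT /api/repositories/{repoKey}), assuming the clj-http and cheshire libraries. The URL, credentials handling, and repository configuration are simplified illustrations, not the exact internal implementation.

```clojure
(require '[clj-http.client :as http]
         '[cheshire.core :as json])

(defn create-docker-repo!
  "Provision a new local Docker repository in Artifactory."
  [artifactory-url repo-key user password]
  (http/put (str artifactory-url "/api/repositories/" repo-key)
            {:basic-auth   [user password]
             :content-type :json
             :body (json/generate-string
                    {:key         repo-key
                     :rclass      "local"
                     :packageType "docker"})}))

;; (create-docker-repo! "https://artifactory.example.com/artifactory"
;;                      "myteam-docker-local" "admin" "secret")
```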

We’re also able to support new organizations, which means creating new organizations, new Hudson and Jenkins masters, provisioning those, provisioning the slaves, and allocating slaves through Carson. So essentially it is the backbone of our CI/CD model, and that’s how we’re able to scale.

Now in terms of does size really matter? We’ll leave it up to you to determine that. Thank you very much for your time.

