Implementing Modern CI in an IoT Medical Project – Jacob Lärfors, Ravi Sudhireddy, Verifa & Siemens Healthineers

For medical systems, industry requirements and regulations add complexity to the release process. Together with Siemens Healthineers Point of Care, Verifa has significantly accelerated the release cycles of the software platform for their next-generation medical devices, with Artifactory sitting at the center of it all. This presentation conveys experiences from building Continuous Integration (CI) pipelines for a functional safety project in the embedded medical (FDA / IEC 62304) market, the result of a major project at Siemens Healthineers Point of Care. It gives an overview of the implemented pipelines, which enable higher levels of compliance, testing, quality and security. Artifactory has become a central piece of the delivery process and is used in multiple contexts: managing build artifacts, automating configuration and infrastructure, and serving as a proxy for a strict internal R&D network.



VIDEO TRANSCRIPT

Thanks everyone for coming. What we’re going to talk about today is what we’ve done at a division of Siemens in terms of implementing CI pipelines. Let’s begin by introducing ourselves briefly. I’m Jacob, on the left here. This was a picture from my submission to the swampUP videos. So I do juggling, I play music, and I like CI/CD and DevOps stuff. I come from a company called Verifa, and we’ve been working together with Siemens for about two years now on this project. So basically most of the things you see here are things that I’ve been involved with or have driven.

So, hi, this is Ravi, and I’m working as a DevOps engineer with the Siemens Healthineers Point of Care division. I’m a CI/CD and DevOps enthusiast like everyone over here, and Jacob calls me a curry master, so that’s a title I’ll take.

Yeah, well, I didn’t want to just have my hobbies up there, so come on, Ravi. So yeah, Siemens Point of Care is in Boston, and I’m based in Finland, so I’m flying over to Boston quite often. And when we’re here we end up having curry like five days a week, and Ravi knows his stuff. So yeah.

Cool. Okay. So we’re going to talk a little bit about Siemens Point of Care and what they’re doing. I’ll briefly introduce my company as well, Verifa, and then we’re going to go through a few different themes of what we’ve done in our CI pipeline: actually building the pipeline, the testing, OSS compliance, managing results — basically some of the main things that we have done. There’s too much to cover in this short slot, but hopefully it will be relevant.

Yeah, Siemens Healthineers. So everyone might already know that Siemens Healthineers is one of the leading companies in the world manufacturing medical devices, and Point of Care is an organizational unit within Siemens Healthineers. So I’ll talk a little bit about Point of Care and what we do.

So for the folks who don’t know what point-of-care testing is: it’s basically testing that’s done for patients at the bedside. For example, imagine there were no point-of-care devices. What would happen? Patient samples would be taken by the medical staff and the doctors and sent off to a laboratory.

Waiting for the laboratory results could take hours, or even days if the laboratory is not on the premises. So the doctor still has to take care of the patient without knowing the results, because the results aren’t available within that period of time. So we have developed these point-of-care devices where you can just take a handheld device to the patient’s bedside, take the samples, and get the results then and there, on-site. That is what Point of Care manufactures. We are among the leading manufacturers of blood gas analyzers and urinalysis devices, and we’ve been doing well for decades. So why turn to DevOps?

That’s an interesting question, because when business is going well, why turn to DevOps? With these emerging technologies we have a lot of competition in the picture. So we have to manufacture devices that are sophisticated and handheld, where you can provide updates to the devices within hours, if not minutes, whenever required — because you don’t want the devices to fail when you’re getting results. For these reasons, we are now developing next-gen products, which are Android-based. And that is where CI/CD and everything comes into the picture.


Yeah. I guess the interesting point is the Android part. Traditionally, Point of Care did product lines, right? They did a product line start to finish, and then they went to the next product line, start to finish. And basically there was no reuse across these product lines. Now they’ve developed an Android platform, which is basically the handheld device, and the product lines will just extend that. So we have an interesting case where we want to reuse this code and make sure that everybody’s using the latest version. Right now we have one product line that’s shipping soon, but this is going to be used in lots and lots of cases. So yeah.

Cool. About Verifa. So we’re a relatively young company, and we specialize in CI/CD pipelines and pretty much everything around them. We help with setting them up, we help with infrastructure, analytics, test automation, OSS compliance — basically everything involved in CI. So I get involved in a lot of different areas, which I love doing. If anybody wants to find out more, you can come and talk to me after this. I’d be happy to chat.

So about the modernization: besides needing to grow with the emerging markets and everything, this is something that we took from Atlassian — the source is down here if you want to use it sometime; it’s been around for a while. I brought this in when I joined the project and asked, does this look familiar? And I didn’t really get a reaction, because I think everybody knew that this felt very familiar. This is the cycle of manual delivery. What we’re trying to do is take all these painful activities and start to automate them, so that we don’t end up having these slopes. So, yeah, that’s part of the motivation.

Where did we begin? Well, note the context: it’s medical, it’s FDA, it’s regulated. So we weren’t in a context of, right, we just need to ship fast. What we wanted to do was build a really good quality product, and we were more focused on quality than on speed. I know there was a talk where they said that stability equals speed, and it’s related, but for us it was more about building a quality product in-house than about being able to ship externally quickly and frequently. So we asked, what can we put in our pipeline? Well, you know, we want to build the software, right? We want to do some static code analysis. And again, big focus on quality here, so we didn’t just want to put a static analysis tool in and then say, tick.

We thought, okay, we need to comply with coding guidelines, so we started looking at tools that could help with that. In the C/C++ world there are things like MISRA, and security standards like CWE. We had our own internal coding guidelines, too. So we said, okay, let’s build these coding guidelines into our pipeline as part of static code analysis. As well, there are security-focused static analysis tools, finding things like array buffer overflows and general defects that might leave your code or your system in a vulnerable state. And there are pretty sophisticated static analysis tools now. So, I mean, we have a lot of static analysis tools running in our pipeline, and we’ll get to this later.

We also wanted to do architectural analysis. We had a pretty good design of the software to begin with, and we wanted to make sure that as the software evolved, that design wasn’t just, you know, a Word document or a piece of paper that we had at some point in time. We wanted to make sure that the end system actually reflected the design, and also that the design evolved together with the software system. So we put a tool in the pipeline which took an XML definition of the design, and every time we run the pipeline we get a kind of validation: does our software actually reflect what we wanted to make?

We wanted to do unit testing and, with that, mutation testing as well. Mutation testing, for those who aren’t aware, is when you run exactly the same unit tests but flip the bits of the code under test: if you have a plus, make it a minus; if you have an equals, make it a not-equals; if you have a less-than, make it a greater-than. Then run the same unit tests. And what you’d expect to happen is that your unit tests fail, right?

If your unit test doesn’t fail, well, then how good is your unit test? How good are the conditions that have been placed on the pass/fail criteria? Then some integration tests, some verification tests, some performance tests — these are like nonfunctional requirements. The sort of thing where, if you run a test once, it works, sure. But what if you run that same test over a period of two weeks or a month? Does it still work? Obviously we wanted artifact management, tied into the pipeline. We wanted OSS compliance, both on the legal side with licenses and on the security side. And that was just the beginning list we had when we said, all right, let’s build a pipeline. That’s already a fair few things. And again, it’s not just a tool and a tick: we really invested time going into each of these areas and defining what quality or security means in each of them.
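
To make the mutation testing idea concrete, here is a minimal sketch of a single mutation — an illustration only, not output from an actual mutation testing tool, and all names are invented:

```groovy
// A trivial function and the unit test that guards it.
int dose(int base, int adjustment) {
    return base + adjustment
}

assert dose(10, 2) == 12   // passes against the original code

// A mutation tool would now flip the operator and re-run the same test:
//     return base - adjustment     ->  dose(10, 2) == 8, so the assert fails
// That failure is the desired outcome: the test "killed" the mutant. If the
// test still passed, its pass/fail criteria were too weak to notice the change.
```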

So we had a challenge. We had all these things we wanted to do as part of our pipeline, but we didn’t want to run a pipeline that goes on for five hours or something, because that’s not really going to help anyone — we still want quick feedback. So one of the things we came up with at Verifa, and this project is where we’ve pioneered and implemented it, is this idea of Tx. Tx is basically different time points in the development process. I can talk you through it briefly: if you’ve got a development team up here, they will create some feature branch to work on, and we have T0. This is the earliest possible stage in the development life cycle where you can run any kind of testing.

So this would be, you know, running static code analysis and running unit tests at the developer desktop as an initial quality check. Once they commit their code, we can use Jenkins and pipelines and start running these. The first branch they push to would be T1, and here we define a list of stages that we run as a quality gate. If that quality gate isn’t passed, well, they should fix it. Once it is, they can merge to a shared integration branch and we run something like T2, which is again more checks. So it’s kind of a pyramid: you go from providing feedback quickly, and then work up the levels of quality and security assurance the further on you get.

This was something that we created after working in the project for about three months, and we saw: okay, this is the vision, this is what we should aim for. And we are really close — like, really close. There are a few things left to do, but I’m happy that we have come this far. Anyway, this isn’t the actual implementation, obviously, so I’m going to talk a little bit about the different bits of it now.

So I guess everybody here is doing CI, so I’m pretty sure everybody’s been through this fun joy of: okay, we want to build a CI system, we want to get Jenkins. We were given a Windows machine and we wanted a proof of concept. So we had our Jenkins master on a Windows machine, with all the stuff that Jenkins has — the JVM, the workspace, the configs, the build node, everything on one machine, all configured manually — and the machine was limited in resources.

And for those of you who know the Jenkins community, this is what’s called a Jenkinstein. It’s a very common pitfall where you just cram everything into a Jenkins master and it scales to the point where it blows up, and you probably have 300 percent more plugins than you actually need, because people just install them and don’t uninstall them. So we ran into this — not unexpectedly — and we said, all right, we want to do things better. We’re software engineers; we can do great stuff. We wanted something that’s scalable, that doesn’t just break down, and where we have control over the changes going into it. So the natural result was using containers and configuring Jenkins as code. The timing for this was pretty great, because I’ve worked with hacking around in Groovy scripts and configuring Jenkins masters in the past.

It’s not a whole lot of fun, but about a year ago, maybe a bit more now, there was a general release of Jenkins Configuration as Code, a plugin for Jenkins which means you can define YAML files to provision a Jenkins master. So we converted our old Jenkinstein to a YAML-based configuration-as-code Jenkins master that we can just spin up, and it’s production-ready in however long it takes to start a Docker container. So that’s great. This is the Configuration as Code logo. We’re running this on Docker Swarm at the moment, but we recently got an on-prem Azure instance, so we’ll be moving things over to Kubernetes, now that we have a Kubernetes engine. We still have Windows agents that are maintained manually, but again, now that we have Azure, we’re going to be looking at Terraform and Packer — creating these immutable Windows agents that we can just start up and kill as we like, and everyone will be happy.
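
For a flavor of what that looks like, here is a minimal, hypothetical Configuration as Code snippet — the real file would contain much more (credentials, plugins, tool locations and so on), and all values and node names below are illustrative only:

```yaml
# jenkins.yaml - consumed by the Configuration as Code plugin at startup.
jenkins:
  systemMessage: "Provisioned from YAML - do not configure through the UI"
  numExecutors: 0          # keep the master build-free; agents do the work
  nodes:
    - permanent:
        name: "windows-agent-01"
        remoteFS: "C:\\jenkins"
        labelString: "windows"
        launcher:
          inbound: {}      # the agent connects in to the master (JNLP-style)
unclassified:
  location:
    url: "https://jenkins.example.com/"
```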

So that’s kind of the infrastructure and configuration that we set up for Jenkins, and it made a huge difference. Yes, a huge difference. It’s working really nicely, and when things go wrong, we can just revert, or we can check the Git history. We have pull requests for making changes. So everything’s pretty well-controlled now. So yeah.

About our pipelines, according to this Tx methodology. So again, we had lots and lots of different types of testing we wanted to do, and we wanted to balance the speed of feedback with the completeness of the testing. Our solution was to define stages in our Jenkinsfiles using Jenkins pipelines, and basically have Boolean parameters for the stages in the Jenkinsfiles. I know it’s very common to switch behavior based on branches or tags or whatever.

But we didn’t want to start adding loads and loads of complexity into every single Jenkinsfile across every project. Instead, we wanted a really simple “do this, yes or no” for the different stages in the Jenkinsfile, and to put the logic into our seed job, which creates the pipeline jobs in Jenkins. This is one huge benefit of configuring Jenkins as code: your Jenkins master is basically like a pipeline itself, so you can put all the common logic there and your pipelines just inherit it. So we have these doBuild, doStaticCodeAnalysis and doUnitTests parameters, and there are a couple more — it’s a bit more complicated, but you get the idea. And when we create our Jenkins master, we have these parameters configured. The basic workflow, same as with Docker: we have a Dockerfile with the Jenkins YAML and the seed job, we build that, and it basically creates a Jenkins master with the environment variables, the Windows nodes that we want connected, and then the pipelines.
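
A sketch of what one of these Jenkinsfiles could look like — a minimal illustration with hypothetical stage contents, not the project’s actual pipeline:

```groovy
// Jenkinsfile - the Boolean parameters are set by the seed job, so the same
// file serves T1, T2 and nightly jobs without any branch-based logic.
pipeline {
    agent { label 'windows' }
    parameters {
        booleanParam(name: 'doBuild', defaultValue: true, description: 'Compile and package')
        booleanParam(name: 'doStaticCodeAnalysis', defaultValue: false, description: 'Coding guidelines + security checks')
        booleanParam(name: 'doUnitTests', defaultValue: false, description: 'Unit and mutation tests')
    }
    stages {
        stage('Build') {
            when { expression { params.doBuild } }
            steps { bat 'gradlew.bat assembleRelease' }
        }
        stage('Static Code Analysis') {
            when { expression { params.doStaticCodeAnalysis } }
            steps { bat 'gradlew.bat staticAnalysis' }   // hypothetical Gradle task
        }
        stage('Unit Tests') {
            when { expression { params.doUnitTests } }
            steps { bat 'gradlew.bat test' }
        }
    }
}
```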

And you can see here, for example, that project X dev CI has these parameters: doBuild true, doStaticCodeAnalysis true, and so on and so forth. Those point to our repositories where we have the Jenkinsfiles, and then everything’s there, deployed onto Docker Swarm. This is not rocket science; we’re just following DRY principles. We try to put all the logic into the Jenkins master and into the seed jobs, so that the Jenkinsfiles are as simple and clean as possible. That overcame another challenge we had, where the Jenkinsfiles just kept growing and growing — this really helped. Cool.
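
And a sketch of the seed-job side, using the Jenkins Job DSL — the project names, repository URL and parameter sets below are hypothetical:

```groovy
// seed.groovy - run by the Job DSL plugin when the master is provisioned.
// Each entry becomes a pipeline job; the parameter set decides which stages run.
def pipelines = [
    [name: 'projectX-dev-ci',  branch: 'develop', doBuild: true, doStaticCodeAnalysis: true, doUnitTests: true],
    [name: 'projectX-nightly', branch: 'master',  doBuild: true, doStaticCodeAnalysis: true, doUnitTests: true],
]

pipelines.each { p ->
    pipelineJob(p.name) {
        parameters {
            booleanParam('doBuild', p.doBuild)
            booleanParam('doStaticCodeAnalysis', p.doStaticCodeAnalysis)
            booleanParam('doUnitTests', p.doUnitTests)
        }
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url('https://git.example.com/projectX.git') }  // hypothetical
                        branch(p.branch)
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
```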

So, the common platform and managing artifacts. This is another interesting topic because, as we said, we are developing a next-gen product for Siemens Healthineers Point of Care. This common platform is a common instrument platform that is Android-based, and the core assets from the common instrument platform are used by the different product teams to develop their products. So we are basically developing a common instrument platform.

So here the challenge is, we have to make sure that these core assets developed on the common instrument platform are distributed to the different teams. So we needed to understand how to manage the artifacts and also how to label them, because we need to know which version of a core asset is being used by each product team. That’s where the challenge is, and that’s where Artifactory from JFrog has helped us. We’ve had a lot of success doing this with Artifactory.

Yeah. So if you refer to the picture, we have the platform team with a couple of core assets — but really this is maybe 30 or 40 packages, AAR files for Android. And then the product teams have the apps, which extend these. One of the big questions wasn’t just how we can hand this to the product teams, but how we can control which versions they’re using and make sure that, when we test the platform once, we can reuse those tests for the different product lines. So I guess…

So yeah, as you see in the picture, this is the complexity we have. For example, the core assets are dependent on each other, and each core asset is also dependent on third-party components, like Gradle and everything else. We have to make sure that all of these are stored in a central location where we can grab them — we don’t want different network locations all over the world, with everyone deploying from everywhere. So we had to have a central repository, and that’s where we have Artifactory. For example, as I was saying, one core asset depends on another core asset.

So core asset X uses third-party components like Gradle and npm for its compilation. We version it, label it, and store it back in Artifactory. Then core asset Y depends on the binaries of core asset X, so it takes core asset X at its labeled version, plus the third-party components it needs, such as Gradle and npm, for its compilation. Once core asset Y is built, it’s published to Artifactory. And then we have the app, which uses both core assets X and Y together with its own third-party components, and we store that back as well. This kind of workflow is possible using Artifactory, and we were able to integrate it into our Jenkins pipeline, so everything is automated for us right now and we don’t have to do anything manually.
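
On the consuming side, this boils down to ordinary Gradle dependency resolution against Artifactory. A minimal sketch — the repository URL and coordinates here are made up:

```groovy
// build.gradle for core asset Y (or a product app)
repositories {
    maven {
        // Artifactory serves both internal core assets and proxied third-party packages
        url 'https://artifactory.example.com/artifactory/android-dev'
    }
}

dependencies {
    // A pinned, labeled version of core asset X, published as an AAR by its own pipeline
    implementation 'com.example.platform:core-asset-x:1.4.2'
}
```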

I guess the manual point is really worth mentioning, because the method that was going to be used for sharing these platforms was to zip stuff up and share it via a file share — which doesn’t seem too crazy; I’m surprised at how often it’s done. But this is a massive improvement, because now we don’t have to do anything to ship a new version. It’s just taken care of by the pipelines. So…

So how are we managing the artifacts that go into Artifactory? It’s a basic pipeline. In this pipeline you get the latest version of the sources — for example, for core asset X — then increment the version, and apply it. So what we are doing, basically, is: get the sources, update them, and then increment the version that is in Artifactory. We make sure that there are versions defined in Artifactory for each core asset and that they are incremented, so we know what version is being provided to the product teams.
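
One way to implement the “read the latest version and increment it” step is against Artifactory’s REST API. A hedged sketch in Groovy — the URL, repository and coordinates are hypothetical, and it assumes a Maven-style repository layout:

```groovy
// Ask Artifactory for the latest published version of core asset X,
// then bump the patch number for the build we are about to run.
def base  = 'https://artifactory.example.com/artifactory'
def query = 'api/search/latestVersion?g=com.example.platform&a=core-asset-x&repos=android-dev'

def latest = new URL("${base}/${query}").text.trim()           // e.g. "1.4.2"
def (major, minor, patch) = latest.tokenize('.').collect { it as int }
def next = "${major}.${minor}.${patch + 1}"                    // "1.4.3"

println "core-asset-x: ${latest} -> ${next}"
```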

Once the build is done, we publish and label. Basically, we label the artifact for the core asset, and then — if you consider this as core asset X again — we automatically trigger the downstream, which uses the same workflow mechanism: it updates the sources, and so on. We also label the sources, and we label the artifacts generated out of this build before they are published to Artifactory, so that we have traceability of which sources were used and which artifacts were published.
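
The publishing and labeling step maps naturally onto the Jenkins Artifactory plugin, where uploaded files can carry properties. A scripted-pipeline sketch — the server ID, paths and property names are hypothetical:

```groovy
// Inside a scripted Jenkins pipeline, after the build has produced the AAR.
def server = Artifactory.server 'artifactory'     // server ID configured on the master

def uploadSpec = """{
  "files": [{
    "pattern": "build/outputs/aar/core-asset-x-release.aar",
    "target":  "android-dev/com/example/platform/core-asset-x/${env.VERSION}/",
    "props":   "vcs.revision=${env.GIT_COMMIT};asset.version=${env.VERSION}"
  }]
}"""

def buildInfo = server.upload spec: uploadSpec    // the props become the "label"
server.publishBuildInfo buildInfo                 // build info ties artifacts to the build
```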

And I guess the really interesting thing here is: okay, let’s say we built the app at the end, we run it in our pipeline, and some automated tests fail. We want to know exactly which version of the app was used, and maybe the failing code resides in one of our core assets — so we want to know exactly which version of that core asset it was, and be able to get that specific version as well. So we have traceability from the app and its versions all the way down to the exact piece of code that was shipped with it, which is useful if you’re debugging stuff.

All right, so next topic: the OSS stuff. We want to trust open source code, but we want to verify it as well, and make sure that when we use it, we have the right licenses. I put “Verifa” here because I write “verify” so often that I end up typing “Verifa” by accident now, or the other way around — I’m always making that typo. I almost did it in this presentation and thought I’d just leave it there. What we have done — this is actually something from our marketing material, but it reflects exactly what we’ve implemented, so I thought I’d just stick it in — is build OSS compliance into our pipelines. The basic workflow is that together with the build, as part of the pipeline, we do the library identification.

So we use WhiteSource at the moment, and that gives us a bill of materials: the whole tree of dependencies that we’re using, their licenses, and the security vulnerabilities in them as well. And we use an open source tool called SW360, which is basically a database for storing components — third-party libraries and such — along with their licenses. It’s a place where legal can go and tick or cross to say whether we can or cannot use different licenses. And we synchronize the latest bill of materials against what’s already in the catalog, so we get this diff as part of every pipeline run. Together with that we have a license debt indicator, so we can see at any point in time how much debt we have to pay to get our project cleared.
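
The synchronization itself is essentially a diff between the fresh bill of materials and the cleared catalog. A toy sketch of the gate logic in Groovy — the file names and JSON shapes are invented; WhiteSource and SW360 each have their own real export formats and APIs:

```groovy
import groovy.json.JsonSlurper

// bom.json:      [{"name": "libfoo", "license": "Apache-2.0"}, ...]  (from the scanner)
// approved.json: ["libfoo", "libbar", ...]                           (cleared in the catalog)
def bom      = new JsonSlurper().parse(new File('bom.json'))
def approved = new JsonSlurper().parse(new File('approved.json')) as Set

def uncleared = bom.findAll { !approved.contains(it.name) }
println "License debt: ${uncleared.size()} uncleared component(s)"
uncleared.each { println "  ${it.name} (${it.license})" }

// Fail fast on strong-copyleft licenses so nobody builds on top of them for months.
if (uncleared.any { it.license?.startsWith('GPL') }) {
    throw new RuntimeException('Copyleft license detected - needs clearing before merge')
}
```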

The goal is continuous compliance on the licenses and security side. In the long run we’ll be adding more of the continuous delivery stuff, where we generate an OSS README with copyrights and such, and a clearing report, together with our final artifact binary. So these are the steps we’ve taken to implement OSS compliance — it should really be called third-party compliance — in our pipeline. We have pretty good visibility now of what we’re putting into our software and what state it’s in. And the idea now is that if a developer were to pull in a GPLv3 library or something, it gets flagged immediately, so they don’t start developing the code base around it and then, three months later, we’re like: hey, you suddenly have to rip this out now or figure out a way to comply with the license.

So, on to automated testing. I’ll quickly cover our products. We have hospital networks — so again, we’re not in the cloud, and these hospital networks are probably closed and don’t have internet access, which poses an interesting challenge. The doctors hold the Android devices. These Android devices are connected to engines, which is where we put test tubes — blood samples, urine samples, whatever — and they communicate via Bluetooth. When we do the reading, we get the results back to the Android device, and we have a database on site as well, with a secure channel to store information on the hospital network. That’s pretty much the setup our customers will be using. So we wanted to replicate this as much as possible in our lab, so that we test as closely as possible to reality.

So we have an R&D network, which again is pretty much offline, and we put some machines there, connected Android devices, and set up an emulator for the engine and an emulator for the database. With this we can pretty much emulate the end environment on the Android side. We built a few more of these, and they’re all on-premise. And we are using a tool from Finland — basically a server together with clients, where the server controls the clients, or the nodes. A typical server-client architecture. What’s really cool about it is that it knows everything about these Android devices.

It knows which phone it is: the device, the manufacturer, the Android OS version, the applications installed on it, the versions of those applications — pretty much everything. So we have one central dashboard we can go to, to see our entire testing lab. And this makes it nice, because when we run our pipelines, we just send a test request to this server, and it figures out, based on some parameters like the device pool or the project, where to run it. It then pushes it down to an Android device, runs the UI tests and test automation stuff, and feeds the results back to the tower server. So we have one entry point for our entire testing lab. We use this for the integration tests the development team writes, and for the verification tests the test team writes.

And we’re also using it for the nonfunctional requirements. One of the things we wanted to do is reuse test cases as much as possible. We’re already building out a whole suite of test cases, and one of the nice features of the tower is that we can just take those test cases and say: hey, put these in a random order and run them for, like, two weeks. So we don’t have to create any more test cases to have our nonfunctional tests or stability tests running; we can just reuse the tests we’ve already written. And this has a knock-on effect as well, because it requires you to write really good quality test cases — test cases that you can run in any order. So pretty much everything is solved in terms of test automation with this setup.

Yep.

Yep. So you have talked about so many tests, so much testing and so much quality. So where are all the results going?

Yeah, exactly. We have a lot of data being produced — a lot, a lot of data. We have all this static analysis, all these unit tests, all the on-target testing, the OSS compliance. So we needed to make that data usable by the development teams. We use SonarQube as a code quality dashboard, basically. We don’t really use the SonarQube analyzers as much; we use lots of different static analysis tools, and we’ve integrated them into SonarQube so that we have one central repository for code quality information. That means once we’ve run all our static analysis, we have one source of truth for quality defects. Then we can just reference that as part of a quality gate and tell the dev team either, hey, you’ve introduced new issues, or, actually, your code is of high quality.
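
The talk doesn’t say how the integration was done, but one generic way to feed third-party analyzer results into SonarQube is its Generic Issue Import Format. A hedged Jenkinsfile-stage sketch — the converter script is hypothetical, and 'sonarqube' is an assumed server configuration name:

```groovy
stage('Publish Code Quality') {
    steps {
        // Convert each tool's native report into SonarQube's generic-issue JSON
        bat 'python convert_reports.py --out sonar-issues.json'   // hypothetical converter
        withSonarQubeEnv('sonarqube') {                           // assumed server config name
            bat 'sonar-scanner -D"sonar.externalIssuesReportPaths=sonar-issues.json"'
        }
    }
}
```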

We’ve got the pipelines as well, which obviously do all the different types of testing and all the different activities we’ve set up. This is a picture of the tower: these are our test results, where we can get basically any information we want about the automated tests we’ve run overnight. One really good thing here is that we get profiling information as well. If we run our pipeline based on a check-in, or overnight, and a test fails, the obvious thing to do is to go and reproduce that failure to figure out what went wrong. But here we’ve captured all the profiling information: CPU usage, RAM usage, battery usage, any crashes on the Android device, all the different services running on the device.

So we can just go to our tower server interface and get all the information we need to debug an issue. We shouldn’t have to send an email to a developer saying, something has failed, reproduce this so we can fix it. Instead: something has failed — here’s all the information you need to go and fix it. You don’t need to do anything other than actually fix it. That part we leave to the dev team.

And what we’re working on now — we had a proof of concept a few weeks back — is putting together a dashboard on top of all these different dashboards: some way of collecting all these results and information into one centralized place. We’re looking at Mosaic now, an open source framework for this, to try to bring all the relevant information together and represent it to different people.

We care about this test lab: when these nodes go offline, we want to know. We care about the Jenkins environment: when that goes offline, we want to know. We care about pipelines failing. The dev team cares about defects, findings, failing tests — the stuff they’re involved with. And managers care about seeing the different activities we’re involved with: code quality indicators, OSS compliance indicators, the latest snapshot of test results and so on. So right now we’re working on defining our high-level dashboards, bringing together all this data that we have, and making it presentable and usable, so that when people ask us what the status is, we can just point them somewhere.

Yeah, it’s in the dashboard.

Cool. So, future plans. What are our future plans? Keep doing what we’re doing now — don’t break anything — and make it continuous. Everything should be continuous, as we’ve been saying: we want continuous pipelines, continuous delivery, continuous integration. So yeah, keep doing everything we do and make it continuous.

So yeah, in fact, with medical devices we have this CIL, the configuration item list, which is basically what we need to ship together with the product. And we are now strategically going through that and ticking off: okay, can we do this? Can we do this? And then putting it into the pipeline. So I think where we are today is a pretty good, mature CI environment, and now we’re looking at moving that towards CD, which, in regulated environments and such, is possible. That’s the next big leap for us. We also have this notion of DocOps — I mean, everything’s “ops” — which is the generation of objective evidence, test reports and such, that we want to ship together with our products, either internally or externally.

And, well, we have to keep learning and adapting to new technologies, and I guess also help the other folks within our teams and our company adopt them, too. That’s what we refer to as HumanOps: when you really want to do something cool and great, and other people don’t agree. Well, yeah — we’re not all perfect.

So, yeah, I guess as a summary: we have a pipeline, we have lots of things integrated into that pipeline, and we have pretty good feedback cycles. And I mean, we haven’t talked that much about Artifactory, but it’s serving pretty much everything that we do. That’s in fact a theme for us: we don’t really talk very much about Artifactory because it just works. It’s just there, in the background, serving its purpose — which is exactly what I like technology to do.
