Edge Computing: A Cloud-native, DevOps-friendly IoT – Roman Shaposhnik, ZEDEDA

Connecting IoT devices to the Internet is not new, but deploying and running real-time edge apps at hyperscale on these devices is. Edge Computing evolves Cloud Computing by keeping what’s great about the Cloud model (developer-friendly APIs and software-defined everything), yet applying it in the harsh physical and security environment of sensors and ruggedized industrial PCs. Watch this session to discover the design and implementation of a novel Edge Computing platform created at ZEDEDA Inc. (later donated to the Linux Foundation under the LF Edge umbrella). Explore EVE (Edge Virtualization Engine), a new, special-purpose, open source operating environment that has to securely run on billions of ARM and x86 devices, and walk through how EVE enables the same DevOps culture to increase developer velocity on the Edge.


So hello, my name is Roman, and today we’ll be talking about Edge computing. So, how many of you are familiar with what Edge computing is? Because if I see a sea full of arms, maybe it’s a very short presentation. Okay. Anybody else? Oh, okay. So then at least some of you will hopefully learn something today. So, I hope at least most of you are familiar with IoT, because it’s an IoT track. And historically if you look at IoT, I mean, IoT has been really more of a domain of just embedded, right? Something that you would basically build on pretty constrained devices. And it would be a sort of single-purpose-built piece of hardware, a single-purpose-built piece of software, typically a real-time operating system, or something like that.

And none of it would be even remotely relevant to the conference that you’re all at today. Because at the end of the day, the conference you’re all at today is about liquid software, right? It is about how we can rev up and churn the software on any kind of computer equipment as quickly as we would like. And none of that actually used to be even remotely possible on these IoT devices, right? So in that sense, we’re sort of living through a little bit of a revolution.

And actually I was reading this book that I just got because I got invited to this conference, and it actually had a really good sort of slide in there, called liquidity of things. Kind of predicting that pretty soon we will be dealing with a lot of internet of things that are essentially based on the same set of principles that we take for granted in cloud today, where we would basically try to modify the software in flight, we will try to do rolling deployments, all of the stuff that the DevOps community has developed over the past 10 years.

Now again, IoT is a tricky term, because it has been with us for quite some time, and I think it originated about maybe 10 years ago. And a lot of times when people say IoT, they actually mean kind of like Raspberry Pi, people tinkering with their, I don’t know, Christmas lights, right? Or something. Worst case scenario, consumer IoT, your sort of webcam, but that’s not really the interesting part of the IoT. So the interesting part of the IoT, these days, and I think this is something that will actually truly push this whole community into more of a liquid software type of deployments, is this type of IoT.

And now we are actually getting much closer to Edge computing. So I actually brought a box with me today. This is a box made by a German company called Siemens. Siemens, as you should probably know, is one of the biggest, actually probably the biggest industrial manufacturer in the world. It’s responsible for not just your factories and plants, it’s responsible for whole city infrastructures. So Siemens basically makes this box, right? It’s no bigger than my palm, right? And you can take a look at it later. It basically has 3 Ethernet connections, 4 USB connections, and 1 display port. And looking by the size of it, it’s definitely sort of just like a Raspberry Pi. So, I’m sure you all know how much a Raspberry Pi costs. How much do you think this box costs? Any guesses? Anybody? It costs 1,100, and that’s the minimum amount of stuff you can put inside the box, like just 2 GB of RAM.

So when I tell everybody, everybody goes like, “Well, why would anybody buy this?” And then of course you go back to Siemens, and you ask them that question, and they go like, “Well, we don’t know why anybody would buy it. What we do know is that the lead time on this box today, because it just came out to the market in about February, the lead time to get one of these is four months.” They cannot keep up with the demand, at the price point of 1,100 for the cheapest box. Why? Well, very simple. Because this box is actually built to withstand environments that your Raspberry Pi will just melt in. This box is tested to be in that environment for at least two months, that’s the burn-in time on this box, right?

So now we’re actually talking reliable IoT, and I think that actually makes all the difference. Because now the conversation shifts from, “Well, it’s just a cheap piece of computer equipment, and we don’t really have to support it much, because it will probably break pretty quickly anyway. So we’re just tinkering with it.” Now we’re actually deploying these types of boxes, and now I’m actually showing you Dell boxes because Dell is very much in that same business as well. Now we’re deploying those types of boxes, and it’s real critical infrastructure. We’re deploying it at plants, transportation, energy generation. And the hope is that by sprinkling them all over the place, we will essentially arrive at kind of the cloud outside of the cloud, right?

Now basically everything will have a very reliable piece of computer equipment attached to it, and also actually a very capable piece of equipment. Because even this, the tiniest box from their product line, like I said, has 2 GB of RAM. It’s actually an Intel Atom, fully capable of anything that Atom processors are capable of. It has about, I think, 24 gigabytes of flash. So now we’re basically talking kind of the size of the computer I used to have in my data center about seven years ago, right? So now we are actually finally ready for the liquid software. That’s my premise.

So this is called the SIMATIC IPC127. You can take a look later. I mean, it’s for sure Intel. Now, the reason why everybody is actually interested in deploying these types of boxes can probably be summarized in this slide. By various projections from Cisco and Gartner, basically the amount of data generated at the Edge will pretty soon completely dwarf the amount of data that actually resides in your data center. And we will all have to deal with it, and we don’t know how yet.

And what I mean by dealing with it is, we will have to deal with it from a legal standpoint, because some of the data cannot even be transferred to the cloud, even if we wanted to. Like if you are taking surveillance footage in a city infrastructure, right? A lot of times we just have legal restrictions on where the data can move. A lot of times just the latency alone would actually force you to deal with the data stream on the device itself.
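The latency and data-sovereignty argument above can be sketched in a few lines: process the raw stream on the box itself, and only ship a small aggregate upstream. This is a minimal illustration with made-up numbers and function names, not any real EVE or Siemens API:

```python
# A toy sketch of edge-side processing: the raw readings never leave the box;
# only a tiny summary would be uploaded to the cloud. Names are illustrative.
def summarize_frames(frame_latencies_ms):
    """Aggregate locally-collected measurements into an uploadable summary."""
    n = len(frame_latencies_ms)
    avg = sum(frame_latencies_ms) / n
    worst = max(frame_latencies_ms)
    return {"frames": n, "avg_ms": round(avg, 1), "worst_ms": worst}

# The full data stream stays on the device; this dict is all that goes upstream.
summary = summarize_frames([12.0, 15.5, 11.2, 90.3])
print(summary)
```

The same shape applies whether the trigger is a legal restriction (raw surveillance footage must stay local) or plain latency (the decision has to happen on the box).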

So I think to me, that is a killer app for an infrastructure like this. And how to build this killer app, nobody knows. Which is actually great news for, again, somebody who is very much invested in liquid software, because the whole point of liquid software is to keep iterating over and over again, until you actually build something that people can actually use, and it’s pretty satisfactory. So long story short, Edge computing is inevitable. But if you look at how we do it today, it’s actually pretty abysmal.

So again, from the same book, I kind of took the slide where on the X axis we have the update priority, and on the Y axis we have user control. And as you can see, all of the industrial equipment is basically sort of along the bottom of the axis here. None of it is even remotely sort of touching the liquid software. But again, this is happening. I mean, Siemens is selling tons of these. So as software practitioners, of course, we would have to come up with an answer. And of course, as software practitioners, the answer we would come up with is: Edge is going to be huge, so let’s just rub some Kubernetes on it, right? Because that’s what we do in a data center. How could this be different? The capabilities are kind of the same, so let’s just try it.

So, how many of you have read these reports that came out, I think mid-year last year, maybe a year ago, about the Chick-fil-A guys trying to deploy Kubernetes at scale at all of their restaurants? Like one person, anybody else? Okay. So, Chick-fil-A basically was faced with the very same problem. So they have these boxes, not this particular kind, but these types of boxes, essentially sprinkled all over the retail chains, retail locations. And then they wanted to orchestrate the experience of the people coming into that retail location, right?

Maybe they wanted it to be, let’s say, predictive maintenance, so to speak, of the menus, right? So it’s like if they detect that there are more people coming for a particular type of chicken, they would actually sort of pipeline the preparation of that chicken more, right? They wanted to basically do sort of predictive maintenance stuff. And of course they wanted to do it locally, and there was nothing available, so they came up with this Kubernetes idea. And very quickly they discovered that Kubernetes is not really built for that type of an environment. Even though they were okay doing the prototype, I believe that by now their prototype has been scrapped.

So another example that you might want to check out, because what I’m trying to say is, the Edge is here, so you might as well figure out how to take advantage of it, and let’s learn from the people who were doing it last year. So another example was KubeEdge. This is mostly led by telco companies. Huawei, the Chinese telco manufacturer, is a big sort of proponent of KubeEdge. So KubeEdge was kind of this idea where the telco sort of guys were trying to standardize what Kubernetes would look like on the Edge.

Again, I wouldn’t say it completely failed. It actually migrated over time into the incubator of the CNCF, the Cloud Native Computing Foundation. But there is not really a big following, because whenever somebody tries to deploy it on anything that’s sort of outside of a telco view of the Edge, and I will tell you a little bit more about the telco versus non-telco view of the Edge later on, again, it’s very difficult to use KubeEdge for a Chick-fil-A type of use case.

So, I think, and this actually came out this year, a few months ago, the only worthy contender, if you truly want to rub some Kubernetes on it, is K3s from the nice guys at Rancher. So I highly recommend you check it out, because they, for the first time, realized that it’s not just the issue of Kubernetes. So again, what’s the issue with Kubernetes? Very quickly: the fact that the protocol is super chatty, the fact that the protocol absolutely doesn’t like dealing with a network that is intermittent, and the fact that the implementation is super freaking huge, right? It’s like, if you ever ran an agent on your node, it takes space, right? Again, 2 GB, but still, I would rather not spend 1 GB on just Kubernetes.

So the K3s guys basically came out with this project, essentially keeping the Kubernetes APIs, but changing the implementation underneath, and optimizing it for this type of a use case. So that was one thing, but I think what was super genius on their part is that, trying to deploy it on something like this, they very quickly discovered that it’s not just the issue of Kubernetes, because that’s the last mile in deployment. The first mile is: what is the operating system that is running on this, right? Because so far, people have been building embedded operating systems, and Yocto-based operating systems.

So a lot of times, it’s not even rare that you will find that underneath there is some awful operating system that you cannot even rebuild. And at that point, your Kubernetes is completely stuck, because all of a sudden libc is not quite the libc that everybody expected, right? So the nice folks at Rancher not only did K3s, which is fine in its own right, and I highly recommend you check it out. In fact, this talk was supposed to be a talk by them, but then they pulled out. So I’m actually sort of helping them a little bit here. But I also highly recommend that you check out k3OS, their operating system. So that’s how they are trying to solve, in a universal way, basically the first mile problem.

They’re basically saying, “Okay, so Kubernetes itself has to run on some kind of an operating system. We have to control that, because otherwise we are not really capable of running on whatever is given to us.” So they’ve built a very smart sort of Linux kind of a distribution, and then, and again, that’s what makes me so excited about their efforts, they basically said, “Okay, so just deploying an operating system here is half of the battle. Because again, remember liquid software, right? So what do we do to actually update this operating system? What do we do to actually protect that operating system?” Well, one choice is: go to the same vendors who are selling you all these solutions today to run in your data center. But the trouble is, those vendors, again, are not ready to run on these types of boxes.

So k3OS is basically this attempt to say, well, let’s actually extend Kubernetes down. So we’ve been just dealing at the level of container management, but let’s actually extend the API so we can deal even at the level of BIOS. Because remember, there’s a lot of software that you don’t even think about. There is BIOS, there is UEFI, that also needs to be upgraded from time to time when vulnerabilities get discovered. So they basically say, okay, let’s try to extend the Kubernetes APIs all the way down, and the combination of k3OS plus K3s is basically how you deploy Kubernetes APIs on all of these computers sprinkled in your retail locations. Very, very interesting efforts. I highly recommend checking them out.
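For a flavor of what that first-mile story looks like in practice, here is a hedged sketch of a k3OS configuration file: the whole node, OS included, is driven declaratively, much like a Kubernetes manifest. The key names follow the k3OS documentation as I remember it, so treat this as illustrative and check the Rancher k3OS repository for the current schema:

```yaml
# Illustrative k3OS config.yaml (field names may differ in current releases).
hostname: edge-box-01
ssh_authorized_keys:
  - ssh-ed25519 AAAA... ops@example.com
k3os:
  data_sources:
    - cdrom
  token: my-cluster-join-secret
```

The point is that there is no imperative setup step to babysit: the box boots, reads this file, and converges, which is exactly the zero-touch property these deployments need.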

Now, we touched upon it briefly, but let’s kind of take a step back and understand why the nice folks at Rancher have to struggle with all of these issues. And I submit to you this is because of this. If you look at the Edge requirements, on the surface of it, this box looks a lot like a server I used to have in my data center, right? Seven years ago, but still not that long ago, that’s the stuff on the right. But in a lot of other ways, this box is exactly like this, my cell phone. Why? Well, because these boxes get stolen all the time. These boxes get tinkered with all the time, because they don’t exist within the perimeter security of a data center. People walk up to them all the time, stick USB sticks in them, they disassemble them, they try to steal secrets, they try to use them as an attack vector into the rest of your IT infrastructure. The list goes on, right? So that’s one aspect of it.

So just like my cell phone, if I forget it in a bar, it needs to be protected, and everything on it needs to be encrypted. And we’ve developed a whole bunch of operational sort of knowledge of how to do that with mobile. That is one big chunk of the requirements for the Edge. Another chunk of the requirements, which is even more interesting, again goes back to mobile. If you think about a traditional data center, the networking is pretty well defined, right? You basically have a guy who defined your topology, you have top-of-the-rack switches. Yes, sometimes they go down, and sometimes you have to deal with it. But most of the time it’s not like on my Android, where you have to hop between LTE and WiFi, and back to LTE again, and then to another LTE provider, and yet the application that’s running on the Android has to still function. None of that is even remotely relevant in the data center setting.

So the Edge challenge is this unique blend of: how do we deliver software in a traditional data center setting, plus how do we actually protect the hardware, and how do we protect the software that’s running on these boxes, in a very mobile-like fashion? And not surprisingly, Google was actually one of the first ones to realize that. And they came up with this thing called Android Things, essentially trying to extend the traditional Android operating system to run on the Edge. It unfortunately came with its own set of problems that we will not get into today. But I submit to you that from now on, we will basically be dealing with two types of computers. So one type is computers inside of data centers, and they have a very predictable set of requirements, we know what those are.

And when I say data centers, I don’t mean just your private data centers; a public cloud provider like Google would actually have a hyperscale data center. It’s still a data center, right? It still exists as a data center someplace. And then, we would basically have all of the computers outside of the data centers. And that’s the biggest distinction that we will have to deal with. And to me, that’s the definition of the Edge. And if you look at the Linux Foundation, well, the Linux Foundation has, by the way, for those of you who don’t know, for a long time been doing things completely outside of Linux, spawning these open source foundations, very successful ones. So one of the successful foundations dealing with computers inside of data centers is obviously the Cloud Native Computing Foundation, CNCF, right? And it’s been super successful, Kubernetes is great, absolutely.

The Linux Foundation has created yet another foundation called LF Edge, to basically allow the industry to work on the common set of requirements for what Edge computing will be today. And for the rest of the talk, we’ll be talking a little bit about LF Edge, and more specifically about the component that I’m building. So hopefully that will be interesting to you. But before we get there, like I said, Edge means different things to different people. So, once you leave the data center, the first Edge that you hit is basically, we call it, the partial Edge, right? This is basically the Edge of the telcos, but not the true Edge. It’s the Edge of the remote part of the telco network. It’s all of the central offices, these big buildings that we used to have to house our telecommunication equipment.

These are the COs, and telcos still have to operate them. Now again, these are sort of computers still inside of data centers, right? Because it’s a building that’s protected, and telcos own everything inside of it. But it’s already outside of the traditional sort of core part of the telecommunications company. So that’s the second from the right, right? And then very quickly, the third from the right, that’s the Edge of the telco network. That’s actually the base station, and telcos have a lot of angst about the upcoming rollout of 5G, because that’s what will get affected the most.

All of the central offices will still remain roughly the same, but the base stations will be numerous. Where now we have just a single cell tower, with 5G we’ll basically have a base station pretty much on every street corner, right? So all of that will actually have to be managed. And then we’re actually talking much more of the Edge, because these are the computers that are now much closer in their requirements to something like this. And then the Edge basically bifurcates, and it either gets to be the industrial Edge, your city infrastructure, your grid energy management companies, all of that. Or it can be your consumer Edge, your house, your home, your car. And the Linux Foundation basically decided to take care of all of the Edges pretty much from this point on, right?

So for the telco, there’s a project called Akraino that is trying to take care of a little bit of the CO issues, but mostly about the base stations and all of that. And then there is the project that I will be talking about, called EVE, Edge Virtualization Engine, which is for the industrial Edge. And then there is a Samsung project called Home Edge, which is, again, not surprising given that it’s the company that produces pretty much all of your washers, and dryers, and fridges, and all of that. Now in the middle of it, there’s a little bit of middleware. So EdgeX Foundry is the middleware that is sort of close to, if you know what Cloud Foundry is, it’s basically kind of Cloud Foundry for the Edge, where they try to basically abstract you as a developer away from some of the concerns, but those are just middleware concerns, right?

Just like Cloud Foundry, and Spring to some extent, would abstract you away from what a database is, EdgeX Foundry would abstract you away from, let’s say, what a sensor is, or what an industrial protocol is. Because, apparently, there are a lot of industrial protocols, and you actually want interoperability between them. And it would be nice to have a middle layer that would actually be able to translate between them. So that’s exactly what EdgeX is.

Now I will be talking about EVE, the Edge Virtualization Engine. And if you want a little bit more of a functional view of the Linux Foundation projects, there are actually quite a few of them playing in the Edge space. So obviously, again, the big projects like Akraino, EVE, Home Edge. But there are also what I call infrastructure projects. So there are hypervisors, like ACRN. There are different kinds of operating systems, specialized ones like Zephyr. There’s Photon OS, which is a much more constrained operating system. There are actually quite a few projects in the Linux Foundation that are coming together to sort of create this LF Edge. By the way, LF Edge is super new. It just got created in February this year. So we’ve made a big announcement, and all of that.

So people always tell me, “Well, it doesn’t look as exciting as CNCF.” And I’m like, “Well, CNCF didn’t look that exciting when it just got started.” So, again, will it get to be as exciting as CNCF? Well, if you join, it will be. But if you don’t join, it won’t be. So it’s actually a pretty simple proposition. So within this big ecosystem, my company, and sort of the open source project that I lead, we focus on the industrial use cases. So again, boxes like these. And just to recap some of the basic principles: like I told you, we need to build an infrastructure that is essentially in the business of running applications, right?

So the first question that we have to answer is: what is an application? Right? Because again, coming back to my Android analogy, that was one of the biggest benefits that Android brought to the developer ecosystem. Because before Android, an application could have been a Blackberry application, or a Nokia application, or an Ericsson application. Oh, and by the way, every single one of them actually had different profiles. So it would be different applications even within those families. With Android, yeah, sure, there are different hardware profiles, but most of the time it’s just an APK, right? As a developer, once I am done building my APK, I upload it to the Play Store, and I am done.

So we need to define what it actually means to be an application on the Edge, and for that, within LF Edge, we’re standardizing this idea of an Edge Container. An Edge Container is meant to be an extension, just a true extension, of the OCI, the traditional sort of data center Open Container Initiative container format. But it is also meant to embrace things that, again, are not super relevant in the data center. On these boxes you still have to run real-time, or close to real-time, operating systems. So whatever it is that you do in terms of packaging, it has to support a real-time operating system. Now again, we feel like we actually have a pretty good answer to how to unify containers, VMs, and unikernels, and that’s what Edge Containers will be all about.
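To make the Edge Container idea concrete, here is a purely hypothetical manifest sketch. Every field name below is invented for illustration; the real format lives in the LF Edge EVE project (whose device-facing API is defined in protobuf), so consult that repository rather than copying this:

```yaml
# Hypothetical Edge Container descriptor -- invented fields, for illustration only.
kind: EdgeContainer
image: registry.example.com/turbine-monitor:1.4   # an OCI image, VM disk image, or unikernel
runtime: vm            # could equally be "container" or "unikernel"
resources:
  cpus: 2
  memory: 512Mi
realtime: false        # boxes like these may also need (near) real-time guests
```

The key design point is the one the talk makes: one packaging format that can describe a plain OCI container, a full VM image, or a unikernel, so the same deployment pipeline covers all three.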

Now, once you define the application, once you define basically your packaging and distribution format, the rest is easy. Or at least it’s supposed to be. It just needs to run on something, and then you just have to define what the properties of that something are. So for EVE, we basically submit to everybody that it has to have three basic, fundamental properties. It has to be zero-touch, in the sense that this box, well, this particular one actually has a display port, but a lot of them don’t. And the ones that don’t wouldn’t even have a serial connection. It truly needs to be zero-touch. You basically bolt it onto your ceiling, you turn it on, it needs to function.

So the joke within our open source project is that we basically need to operate a little bit like a botnet, like those viruses that infect your computer. Because those can function anywhere. They don’t really ask you, “Hey, what’s my WiFi password?” Or, “Help me with this or that.” They just run. So whatever we build for these boxes basically has to be like a botnet. It has to run anywhere, everywhere, and be absolutely zero-touch, to a point where you wouldn’t even know that it’s there. Just like you don’t know when a botnet has infected your computer.

It has to support any type of application. Like I said, real-time applications, as well as traditional containers. But it also has to support any kind of hardware. So hardware heterogeneity is still pretty high on the Edge. It’s getting better, but it’s pretty high. And it also has to support any kind of network. Because again, remember the Android analogy, right? This little guy actually has to hop between LTE and WiFi all the time, but that’s not the worst of it. The worst of it is, when it’s on WiFi, it’s basically guaranteed to be behind a few NAT boxes. So sending data to it becomes a real issue. It can make outgoing connections, but actually routing to it becomes a really, really big issue.

And as I discovered building this project, apparently routing to an LTE IP address is actually a big issue as well. So you kind of have to figure out, in the networking environment that these guys will exist in, what is the answer for the developer. Because developers would hate figuring out how to punch holes in firewalls and how to traverse NAT boxes. So whatever it is, you actually have to give the developer an API that just works. And finally, these boxes, like I said, have to be zero-trust. In the sense that, just like on your cell phone, whatever ends up running here has to be protected, and the box itself has to be protected from the application. So based on these principles, we feel that we actually have a pretty good view into how this whole ecosystem is going to evolve.
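To make the "outgoing connections only" point concrete, here is a toy Python sketch of the standard answer to NAT and LTE routing: the device phones home periodically and pulls its desired state, so the controller never needs a route to the box. All class and method names here are invented for illustration; this is not EVE's real API:

```python
# Toy model of a device-initiated ("phone home") control channel. The
# controller never connects to the device; the device polls the controller.
class Controller:
    """Cloud-side controller: queues desired workloads per device ID."""
    def __init__(self):
        self.pending = {}          # device_id -> list of workloads to deploy

    def schedule(self, device_id, workload):
        self.pending.setdefault(device_id, []).append(workload)

    def poll(self, device_id):
        """Answer a device check-in; never initiates a connection itself."""
        q = self.pending.get(device_id)
        return q.pop(0) if q else None


class EdgeDevice:
    """Box behind NAT/LTE: only ever makes outbound calls to the controller."""
    def __init__(self, device_id, controller):
        self.device_id = device_id
        self.controller = controller
        self.running = []

    def check_in(self):
        workload = self.controller.poll(self.device_id)
        if workload is not None:
            self.running.append(workload)   # "deploy" the workload locally
        return workload


controller = Controller()
controller.schedule("box-1", "predictive-maintenance:v2")
device = EdgeDevice("box-1", controller)
device.check_in()
print(device.running)
```

In a real system the `poll` call would of course be an outbound HTTPS request with mutual authentication, but the traversal logic is the same: because every connection originates on the device, NAT boxes and carrier-grade LTE addressing stop being the developer's problem.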

So if you take the example of mobile operating systems, or the cloud, there are always two, right? There’s an open source one, and there’s a closed source one, the number ones, right? Android and iOS. It pains me to say it, as a Unix guy, but I have to say that in the cloud, the open guys are at zero now, it’s true. I mean, I have to say it. I hate to say it, but it’s true. And AWS is an evil empire. So on the Edge, we feel that as a community, we will actually build the Edge Virtualization Engine, EVE, and who will emerge as the evil empire? That’s actually a really big question. I don’t know. I mean, it could be Google, it could be Amazon, but it actually could be somebody like VMware as well. So we just don’t know yet.

Now, to the project itself. So again, the approach that we’re taking is: we have all of these siloed systems right now that are built as an embedded type of development. What we are trying to do with them is basically take all of the software functions out of those siloed systems, be it the real-time operating system, or just a set of containers that are running on your Raspberry Pi. And we’re trying to run them on a common infrastructure that can run on any box like this. So this common infrastructure, this sort of, call it an operating system and an operating environment, that is what EVE is, right? So EVE is basically something that runs on the box, it boots, it runs on the box. And then it can support these types of workloads, essentially providing a really nice software type of environment for these types of workloads.

Obviously, once you’ve built something like that, it actually needs to be controlled. Because, like I said, these boxes basically have nothing but a network connection to them, right? So how do you schedule workloads? How do you manage the boxes themselves? All of it actually has to be controlled from the cloud. So the box itself will have an API to the cloud, but the cloud itself will also have an API for sort of management of the workloads. And that’s where Kubernetes kicks in, because the front-end API could totally be Kubernetes: your kubectl can talk to basically a controller in the sky, that would then dispatch workloads to these boxes. The API between the box and the controller is something that LF Edge is now standardizing.

So again, if you’re interested, you can absolutely join us. It’s actually interesting because it’s this weird mix between a Puppet/Chef type of a thing, and a cloud type of a thing. It’s actually very interesting how it’s evolving. But at the end of the day, the experience that we want to give to everybody is very much cloud-like, right? So, you basically have Diane the DevOps engineer, and she will be deploying the workloads on this type of hardware completely indistinguishably from her deploying these workloads on Amazon, or Azure. Like I’m saying, I would love for this front-end API, right, the cloud API, to actually be something as standard as Kubernetes, that would actually make sense to me. But the open API between project EVE and the EVE controller is a very specialized one. So, just keep in mind that there are two APIs.
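To illustrate the two-API split, here is what the cloud-like front end might accept from Diane if it were plain Kubernetes. This is valid Kubernetes YAML, but the edge-targeting label and image names are invented for illustration; the device-facing EVE-to-controller API is the separate, specialized one mentioned above:

```yaml
# Illustrative only: a standard Kubernetes Deployment that a "controller in the
# sky" could accept via kubectl, then translate into its own device API calls.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: menu-predictor
spec:
  replicas: 1
  selector:
    matchLabels: {app: menu-predictor}
  template:
    metadata:
      labels: {app: menu-predictor}
    spec:
      nodeSelector:
        edge.example.com/location: store-42   # hypothetical label for an edge box
      containers:
      - name: predictor
        image: registry.example.com/menu-predictor:2.0
```

From Diane's point of view this is indistinguishable from a data center deploy; the controller does the work of translating it into the specialized, device-initiated protocol that actually reaches the box.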

Just a very short plug for my company, because I had to have the obligatory slide. So we ourselves essentially invest heavily in the bottom layer, which is fully open source. Every single bit of software that runs on the box is open source. The stuff that we write ourselves is Apache licensed. The stuff that we take from other open source communities is under GPL or BSD-style licenses, whatever else, but fully open source, nothing proprietary whatsoever. Then the company itself monetizes the controller. So we basically hope to build the most scalable controller in the sky, so that once you have EVE running on the box, ours will hopefully be the default choice of controller to hook it up to. But we are also fully expecting other companies, the Googles, and Amazons, and Microsofts, and actually we are in talks with Microsoft about this, to build their own controllers and integrate them into their own cloud platforms. So that’s sort of the overall play.

Now, one question that I get asked all the time about EVE is, “Well, you’re making this Android analogy so much, so okay, what’s wrong with Android?” And instead of telling you what’s wrong with Android, I will actually tell you sort of the similarities. Because in a lot of senses, we basically have to answer the same set of questions in building EVE. And the set of questions is: application and operating system sandboxing, how can you protect applications from each other, and how can you protect the operating system from the application and vice versa? Application bundling, again, what does it mean to actually be an application? Application deployment, how is it that you actually deploy the application? And finally, what hardware is supported? So if you look at Android, Android basically made all of the choices that made sense at the time, right?

So for app sandboxing, it was the JVM; made perfect sense at the time. We are actually using hardware-assisted virtualization, exactly the same kind that you would use with VMware, or Xen, or KVM, and that is now available by default in every single one of these types of boxes. Application bundling: for Android it was obviously APK; we are using this idea of Edge containers. For app deployment, we’re actually following a very similar model. Our idea is that we will have an app store experience for these types of applications. Right?

So again, suppose I am managing an oil field, or a field of wind turbines. As somebody who operates that field, I would like to have the best software to do predictive maintenance on it. And today, every single time I have to do that, it basically turns into this huge system integration effort, because I have to invite a company in and they have to build it for me as a one-off. What if instead of that we had a marketplace where those types of applications could be traded? And the only requirement for those applications would be: as long as it runs on EVE, that’s all we have to know. Just as much as, again, as long as it’s part of the app store, that’s all I have to know.

Interestingly enough, on the hardware side, where Android again made all of the choices that made sense at the time, we’re kind of sticking with the same set of choices, by the way. But one bit is different. So we are both supporting Intel and ARM, obviously, but MIPS sort of turned into RISC-V. How many of you know about RISC-V? Okay. So RISC-V is this really awesome architecture that is fully open source. It traces its lineage to the same people who gave us MIPS, basically the guys and girls from Berkeley. It’s the same research team that decided to come up with a completely patent-unencumbered design and VHDL descriptions for this new CPU called RISC-V.

There is a tremendous amount of excitement in the industry. Companies as big as Western Digital and Seagate are essentially transitioning en masse to RISC-V, because you pay zero royalties; it’s all licensed under an open source license. Right? And if any of you have CPU geeks in your extended family, RISC-V is one of the best CPU implementations to actually develop for. Because one of the things they realized, in order to bootstrap the open source community, was that not a lot of people can actually do VHDL. In order to do VHDL, you have to buy these really expensive tools, and by the time you’ve invested 10K into the tools, you’re probably not an open source developer.

So they actually came up with this DSL, a Domain Specific Language based on Scala, that allows you to hack on the implementation of the CPU itself without ever thinking about VHDL. Which is actually super awesome. But I digress. My point is, I think the upper tier of the Edge, something like this box, will belong to Intel. Intel is actually moving very rapidly in that space. I hate to say it, but I think ARM will lose. We will still support it, because it exists, but between RISC-V and ARM, my money is on RISC-V. That’s why we’re betting on both; ARM obviously is very much present today, so we have to support it. So let’s quickly talk about what EVE actually does, what it offers to you as a developer. And then I would like to make it as interactive as possible, and maybe take some of the questions.

So, at the end of the day, again, think of it as a very traditional virtualization environment, right? EVE runs on a Type-1 hypervisor. Today that’s Xen; we are also talking to a Linux Foundation project called ACRN, which is Intel’s attempt at building an embedded-specific hypervisor for Intel hardware, a pretty cool project, so we might support that as well. But EVE basically runs on a Type-1 hypervisor, and on top of that it supports all of these types of workloads. Like I said, they’re packaged as Edge containers. Right? Now, the cool thing is that, because it runs on a hypervisor, at this level, on what I call the other side of the hypervisor, we can do a lot of tricks that today are typically associated with sidecars. How many of you know what a sidecar is, either Kubernetes-wise or otherwise? Okay, so a decent amount here. Very quickly: a sidecar is a system-level container that provides a certain function to your application container.

So let’s say your application container doesn’t want to deal with load balancing. Then all you have to do is run a sidecar, and that sidecar provides a listener on localhost, and the sidecar itself deals with the load balancing. The application itself only ever opens a connection to localhost; it doesn’t even have to know where the traffic is coming from. So that’s the whole idea of the sidecar. We can do a whole bunch of tricks like that, on the other side of the hypervisor, in a much more efficient way. And my favorite one, which I will throw in here just as an example to give you an appreciation of what EVE is trying to do, is basically connectivity. So when we deploy one of these workloads as a VM, because we are running on top of a hypervisor, the network connection that we give to the VM looks like a virtualized Ethernet adapter. So when the VM boots, it just sees that there is an e1000, for those of you who remember (it’s actually different, but pretty similar), that it can drive with just a single driver.

A lot of times it’s actually a virtio driver, which makes it even more efficient. But it’s basically a virtualized network interface card. What the application doesn’t have to know is what kind of magic we will be pulling off on the other side of the hypervisor. So all of a sudden, that virtualized network interface card becomes an interface to a sidecar that we run on the other side of the hypervisor, which allows us to pull some really new tricks. For example, today one of the big deals on the Edge is: if I run an application on this box, a lot of times the back end of my application runs in the cloud, right? On Amazon, or Microsoft, or Google Cloud. So the question becomes, how do I configure my networking so that I become part of the VPC, let’s say, that’s actually part of the Amazon cloud?

And the answer to that is, well, if you’re running just a random Yocto-based distribution, you’re actually out of luck. Because chances are that strongSwan, the IPsec implementation that Amazon’s VPC VPN happens to use, will be incompatible with your distribution of Linux. At which point people get stuck. They’re like, “Okay, what do I do? I have this distribution of Linux, it runs on my box, and I have to make sure it talks to those machines on Amazon.” So then EVE shows up, and we say: easy, take your existing application, don’t modify anything, just virtualize it, put it inside of the VM. Your application will think the traffic coming to it through that interface card is traffic just like any other. On the other side, we will take everything from the IP layer down, encapsulate it into IPsec traffic, and send it to the Amazon endpoint that you configured, and your application doesn’t even know that we’re doing it, right?

So all of these tricks that we can do actually resonate really well with developers. Because the biggest thing they are struggling with right now is that they all grew up in this world of embedded development where the resources were super constrained, so doing something interesting and tricky wasn’t even an option. Now that they have to connect that stuff to the internet, they’re like: okay, what do we do? Can we at least have a stop-gap solution, where we take the existing stuff and run it on something, anything, that will actually give us connectivity and all of these other properties? Right?

It is actually not dissimilar to what happened in the data center when virtualization got introduced. For those of you who remember, the biggest use case for VMware at the time was actually Windows NT. Windows NT was running out of steam hardware-support-wise, and VMware showed up and said: well, you can take it, virtualize it, and we’ll run it as a virtual machine, and your base OS, your host OS, can be whatever. It can support the latest drivers and do networking however it wants. But your Windows NT will not even notice, because we will emulate and virtualize everything for you. That is the most fundamental use case that people who are trying to migrate from IoT to true Edge computing are now struggling with.

So EVE itself is actually composed of multiple different pieces. There’s a lot of documentation on EVE’s website; you can just Google LF Edge and it will come up. It’s an operating environment, pretty complex by now, but still pretty tiny. The smallest compute box that we can run on is an Orange Pi, which is actually even smaller than a Raspberry Pi. But there’s not really enough room left on it for interesting applications, so you’re probably looking at something like this box anyway. And that’s it. That’s pretty much all I wanted to tell you, so that I have at least some time for questions. So, any questions? Or you can take a look at the box and come up, you know.


Oh, like IoT specific?


Okay. Awesome. Awesome. Yeah, that would be nice. Yeah. Yeah, we can then move upstairs, and kind of maybe participate in a bigger group.

Yeah. Mm-hmm.

Do you need something that will launch on a device for you? How much?

Yeah. So, I think we will be, well, that’s actually why we are embracing this idea of unikernels very rapidly. A unikernel is this idea that we can slim down even containers even more, right? In the data center, it was a little bit of a solution looking for a problem, because containers ended up being small enough for people not to care to make them even smaller. On the Edge, I think unikernels will be a big deal. So two pointers that I will give you: look up two things. One is called AtmanOS. That’s a community project from the Go world that allows Go programs to run without an operating system. And it’s actually really cute how they do it.

So if you have anything that’s Go, right? You know how you can basically specify GOOS, and essentially you are doing cross-builds for that OS. They implemented an OS target that is basically a hypervisor. So you can specify GOOS=xen, and the executable that pops out doesn’t require a single bit of an operating system. It’s highly optimized. So if we can start moving towards those types of executables, I could totally see a lot more of these containers running on the same device. Yeah. All right. Awesome. Thank you so much.

