Around this time seven years ago, Docker, the company behind the most famous container technology, released the first version of Docker and changed the world of DevOps and cloud infrastructure forever. Virtual machines (VMware, VirtualBox, etc.) and service managers (systemd, init.d) were the first to feel the movement. The ease the Docker engine provides pushed container technology to the forefront of new cloud software development and architecture.
A few years later, once containers had taken their place in the cloud market, Docker became available and supported on ARM-based CPU architectures, opening developers' minds to designing IoT products that run containers. But unlike the cloud market, the IoT space has not been conquered by containers, which have proven less attractive there.
Is this the right time for Docker containers to serve IoT and embedded Linux-based devices?
Definitely YES. But not for everyone.
Let's break the decision down into the top three ingredients that are affected by choosing Docker containers:
Hardware price and resources
IoT devices can be designed and developed for different use cases, industries and purposes, from tiny cameras to huge machines and robots. Every product's BOM (bill of materials) has a direct effect on the chosen IoT hardware. Single Board Computer (SBC) and System on Module (SOM) prices can be as low as $5 (Raspberry Pi Zero) or as high as $100 or even $500 (the Nvidia Jetson family). On low-resource hardware, with less than 1 GB of RAM and disk space, the Docker engine may prove heavy and unstable. Therefore, on tiny devices with low resources, the natural choice is to give up on running the Docker engine and stick with the standard Linux service managers, systemd or init.d, to run the device software.
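On such low-resource devices, the service-manager path can be as small as a single systemd unit. Below is a minimal sketch; the service name, binary path and description are hypothetical illustrations, not taken from any specific product:

```ini
# /etc/systemd/system/myapp.service -- hypothetical device application
[Unit]
Description=Edge device application
After=network-online.target

[Service]
ExecStart=/opt/myapp/main
# Bring the application back up automatically if it crashes.
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After copying the unit file into place, `systemctl enable --now myapp.service` starts the application and keeps it running across reboots, with no container engine involved.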
Device software complexity and OTA updates
One of the biggest advantages of choosing Docker as the infrastructure for the device software is the ability to replace the entire device application file system (software code and packages) with a robust, atomic method. When the product fleet is in the field, far from human hands, the choice between containers and a simpler path like a systemd service is critical, with a future impact that is hard to change down the road. The main reason it is such an important subject is the need to deploy software updates from time to time in order to keep the device fleet stable and up to date.
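The atomic swap that containers enable can be sketched as a handful of Docker commands: pull the new image, replace the running container, and fall back to the previous image if the new one fails to start. The registry, image and container names below are hypothetical, and a stub `docker` function is defined so the sketch can be executed without a Docker daemon; on a real device you would remove the stub and use the real CLI.

```shell
#!/bin/sh
# Sketch of an atomic container-based update with rollback.
# All names (registry.example.com/edge-app, container "edge-app") are hypothetical.
set -e

LOG=$(mktemp)
docker() { echo "docker $*" >> "$LOG"; }   # stub: record commands instead of executing them

IMAGE=registry.example.com/edge-app
NEW_TAG=v2
OLD_TAG=v1

docker pull "$IMAGE:$NEW_TAG"              # fetch the new image before touching anything
docker stop edge-app || true               # then take down the old container
docker rm edge-app || true

if docker run -d --name edge-app --restart unless-stopped "$IMAGE:$NEW_TAG"; then
    echo "update to $NEW_TAG succeeded"
else
    # Rollback: relaunch the previous, known-good image.
    docker run -d --name edge-app --restart unless-stopped "$IMAGE:$OLD_TAG"
    echo "rolled back to $OLD_TAG"
fi
```

Because the old image stays in the local cache until it is explicitly pruned, the rollback step is just another `docker run`, which is exactly what makes this flow attractive compared with patching files in place.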
For IoT products with a tiny application and few packages, the OTA update process can and should be lightweight and simple: replace the current app directory or file with the new version and restart the app service. This is the opposite of complex edge software, where an update deployment might touch many different directories, files and packages on a weekly or monthly basis, especially in AI and image-processing applications.
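The lightweight path can be sketched in a few lines of shell: keep the old application tree as a backup, move the new one into place, then restart the service. The snippet below builds a self-contained demo layout under a temp directory so it runs anywhere; on a real device the paths would be fixed (e.g. something like /opt/myapp) and the last step would be a real service restart. All names here are illustrative assumptions.

```shell
#!/bin/sh
# Sketch of a lightweight file-based OTA update with a rollback copy.
set -e

WORK=$(mktemp -d)                 # stand-in for the device filesystem
APP_DIR="$WORK/myapp"             # currently installed version
NEW_DIR="$WORK/myapp-new"         # freshly downloaded release
BACKUP_DIR="$APP_DIR.bak"

# Demo content standing in for the old and new releases.
mkdir -p "$APP_DIR" "$NEW_DIR"
echo "v1" > "$APP_DIR/VERSION"
echo "v2" > "$NEW_DIR/VERSION"

# The actual update: keep the old tree for rollback, move the new one into place.
rm -rf "$BACKUP_DIR"
mv "$APP_DIR" "$BACKUP_DIR"
mv "$NEW_DIR" "$APP_DIR"

# On a real device, the final step restarts the service, e.g.:
#   systemctl restart myapp.service
echo "installed: $(cat "$APP_DIR/VERSION"), backup: $(cat "$BACKUP_DIR/VERSION")"
# prints: installed: v2, backup: v1
```

If the new version misbehaves, rolling back is the mirror image: move the `.bak` tree back and restart the service again.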
The natural choice is linked to the expected complexity of the application updates that will need to be performed in production. With that said, choosing containers significantly reduces the possible edge cases and concerns we might experience.
Security and the day after
Deploying a fleet of thousands or tens of thousands of devices requires thinking about the "day after". How will the device fleet behave a month or a year into production? If the devices are connected to the internet regularly, are there guidelines to keep them secure even years later?
The attack surface can change drastically between different IoT devices, and it is directly linked to the risks our devices are exposed to. Running a Linux OS with minimal packages removes unnecessary threats, and even over time, having fewer packages to update keeps us in control. With that in mind, running the application in a container that has been built to run exactly that application, with no additional unused packages, can save a lot of headaches along the way. Still, the OS itself runs tens of services that expose the device to attacks and must be configured or disabled before the production stage.
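In container terms, "built to run the exact application" usually means starting from a slim base image and shipping nothing beyond the app's own dependencies. A minimal, hypothetical sketch for a Python edge application (the file names and layout are illustrative assumptions):

```dockerfile
# Hypothetical minimal image for a Python edge application:
# slim base, only the app's own dependencies, no extra tooling.
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app/ .

# Run as an unprivileged user to shrink the attack surface further.
RUN useradd --create-home appuser
USER appuser

CMD ["python", "main.py"]
```

Everything not copied into the image simply does not exist on the device's runtime surface, which is the security property the paragraph above describes.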
While it is still not straightforward to declare a single winner, it is clear that container-based applications become the preferred solution when there are no tough BOM constraints. Still, both paths, container-based and service-based product apps, exist in many industrial applications and will probably be around for years to come.
The choice between the two can improve our deployment process and overall project success over time, but neither option ensures 100% application uptime. Unfortunately, software is not bulletproof, and if the service or container application crashes on one of the production devices and doesn't come back up, we need a backup plan to save the situation as quickly as possible.
In that case, no technology helps more than a good monitoring system that notifies us of any device or application issue. A good monitoring system can save hours of debugging and help us fix the problem before anyone even notices.
Whether it's a Docker container or a systemd service, at JFrog Connect we provide an all-in-one platform to manage, update, monitor and secure IoT devices. It is used by companies that deploy container-based and file-system-based applications, building connected motorcycles, cameras, robots and many other great products that take technology to the edge.
JFrog Connect OTA update tools
JFrog Connect provides two methods to deploy and manage software updates on IoT devices. These tools are designed to remove the complexity of deploying a successful OTA update across a product fleet while providing the right metrics and information on every deployed device in real time.
The Container updates tool is responsible for deploying the latest Docker image from your container registry and creating a Docker container based on the given recipe, replacing the currently running container. Just as important, the Container updates tool includes a rollback option that is triggered if any issue occurs during the deployment process.
The Micro updates tool solves the complexity of deploying and updating a file-system / systemd-based application. Using Micro updates, every single file can be updated separately, and any Bash command or Bash script can be run during the deployment. Similar to the Container updates tool, a rollback option makes sure that every updated file is returned to its pre-deployment version.