Why a bare-metal developer moved to operating systems - Embedded.com

Why a bare-metal developer moved to operating systems

Looking back at the ‘Bare-Metal Age’

I first encountered embedded software around 2008, when I was a sophomore learning to program on an 8051-series chip. Since I majored in computer science, most of my programs ran on a PC. Seeing a program run on a bare-metal board was a completely different experience, and I can still remember the excitement when my first LED-chaser program ran successfully. However, the more bare-metal programs I wrote, the more issues I ran into. I summarize them as follows:

Concurrency

A bare-metal program inevitably contains one huge ‘while (1)’ loop holding almost all of the project’s transaction logic. Each transaction invokes one or more delay functions, and the transactions execute serially: while the CPU is spinning in a delay function, all the other transactions have to wait. Much of the CPU time is therefore wasted in empty loops, which yields very poor concurrency.

Modularity

From a software-engineering perspective, the principle of high cohesion and low coupling is emphasized throughout the development process. However, modules in bare-metal software usually depend on each other heavily; designing loosely coupled software is inconvenient, which makes it difficult to develop large projects on bare-metal boards. For example:

  • As mentioned above, most functions are packed into one huge ‘while (1)’ loop, which is hard to divide into modules.
  • As another example, developers must be careful with delay functions when a watchdog timer is involved. If a delay is too long, the main loop gets no opportunity to feed the watchdog, and the watchdog resets the system mid-execution. So in bare-metal development there is a lot to consider even when invoking a simple delay function, and the more complex the project, the more care is needed.

Ecosystem

Many advanced software components must depend on the implementation of the lower-level operating system. For example:

  • I developed an open-source project based on ‘FreeModbus’ that I planned to port to various platforms, including bare-metal boards. But compared with the convenience of adapting it to different operating systems, some functions proved too complex to implement on every bare-metal board. Moreover, because bare-metal platforms lack a common abstraction, many pieces must be designed from scratch for each hardware platform, which is tedious and time-consuming work. To this day, my Modbus stack still cannot run on bare-metal boards.
  • Many WiFi software development kits (SDKs) provided by big companies such as Realtek, TI, and MediaTek can only run on an operating system. They don’t publish the firmware source code for users to modify, so you cannot use them in a bare-metal environment.

Real-time

Real-time capability is essential in some application fields, where critical steps of the software must be triggered at specific times. In industrial control, for example, mechanical devices must complete their actions in a predetermined order and with predetermined timing. If real-time behavior cannot be guaranteed, malfunctions occur, which may endanger workers’ lives. On a bare-metal platform, with all the functions jammed into one big ‘while (1)’ loop, real-time behavior is impossible to maintain.

Reusability

Reusability depends directly on modularity. I believe no one likes doing the same job over and over, especially writing the same code. But across hardware platforms with different chips, the same function has to be re-adapted to each one, because the implementation depends heavily on the low-level hardware. Reinventing the wheel is inevitable.

The advantage of operating systems

I first used an operating system around 2010, when the STM32 series of MCUs was becoming popular. With their powerful features, many people ran operating systems on them. I used the RT-Thread operating system, which has many ready-to-use components. I feel more comfortable with it than with other operating systems, and I have been developing on it for 10 years.

Based on my understanding, I’d like to discuss the advantages of operating systems:

Modularity

With an operating system, the software can be split into several tasks (known as threads), each with its own independent execution context. The threads are independent of one another, which improves modularity.

Concurrency

When a thread invokes a delay function, it automatically yields the CPU to other threads that need it, which improves the utilization of the CPU and, ultimately, the concurrency.

Real-time

An RTOS is designed for real-time capability: each thread is assigned a specific priority, with more important threads given higher priorities and less important ones lower priorities. In this way, the real-time performance of the whole application is guaranteed.

Development Efficiency

The operating system provides a unified layer of abstract interfaces, which facilitates the accumulation of reusable components and improves development efficiency.

An operating system is the product of the collective wisdom of many software engineers. Common software facilities, such as semaphores, event notification, mailboxes, ring buffers, and singly and doubly linked lists, are encapsulated and abstracted so they are ready to use.

Operating systems such as Linux and RT-Thread implement a standard set of hardware interfaces over fragmented hardware, known as a device-driver framework. Software engineers can therefore focus on application development without concerning themselves with the underlying hardware or reinventing the wheel.

Software Ecosystem

A rich ecosystem turns quantitative change into qualitative change.

The improved modularity and reusability that operating systems bring allow us to encapsulate OS-based, embedded-friendly reusable components, which can be used not only in our own projects but also shared with other embedded developers who need them, maximizing the value of the software.

I am an open-source geek and have open-sourced some embedded software on GitHub. Before creating open-source software, I rarely talked with others about my projects: since people use different chips and hardware platforms, my code could hardly run on their hardware. With operating systems, software reusability is greatly improved, and many experts, even from different countries, can collaborate on the same project. This is encouraging more and more people to share and discuss their projects.


Zhu Tianlong has over 10 years of real-time operating system (RTOS) programming experience and is committed to researching and developing cutting-edge technology.

6 thoughts on “Why a bare-metal developer moved to operating systems”

  1. Some of the bare-metal deficiencies named are easily overcome with good design. There are two requirements that would drive me to an RTOS: (1) third-party middleware requirements and (2) programming team size. For the first, the reasons are obvious. For the second, an agreed design model for an RTOS can facilitate independent code development with good testing along the way. I would not routinely decide to use an RTOS without a good reason but, in the real world, often prior work and schedule really drive the choices.

  2. Alan Kay once said, “If you’re serious about software, you make your own hardware.” At the heart of that wisdom is the belief that great software is married so closely to hardware that software people may even need to make their own hardware when necessary.

    The opposite of that is being so abstracted from the hardware that your software people can’t even really do much when a problem or performance bottleneck occurs.

    I would gently add that each project should assess their broad goals:

    1. How much hardware cost is the solution worth?
    2. How much power will each solution approach require?
    3. How likely are we to need to quickly migrate the software to another set of hardware?

    For instance, if the project is making a small, rechargeable consumer item it won’t make sense to run an expensive power-hungry processor that sucks down power and supports a full stack when a modern $3 microcontroller running custom code can do the (relatively simple) tasks required, using orders of magnitude less memory and power.

    Every microcontroller company has an ecosystem of networking and display drivers that bare metal coders can use, and there are coding techniques that allow superb control over both processor power management as well as deterministic and real-time completion of tasks using hardware interrupts. But why would anyone bother to learn them?

    Because: do not underestimate the power of having complete control over the software AND the hardware—and don’t ignore the corresponding helplessness of not knowing whether a problem lies in the team’s code or in the 80% of the stack that was purchased. Or in some dependency between the two.

    So many teams start with the assumption that they need to use a desktop programming model in an embedded design and trap themselves in a power-hungry, hardware-heavy platform. They prioritize “time to first working demo” and sub-optimize the final cost position, solution size, and battery life.

    In reality, it’s the time to final solution that matters—not how quickly the first version can be demoed.

    And power has become the shadow requirement that often gets lost in the more glamorous discussions of network connectivity and communications. But companies (I’m looking at you, Intel) ignore power requirements at their peril. Power requirements have an out-sized impact on final solutions and their convenience in everyday use. Ask any mobile phone or electric car company.

  3. If power, code/RAM size, and development tool costs are major issues, the bare iron vs RTOS tradeoff isn’t so simple. If you write an entire application inside one while(1) loop and block execution during a time delay, yes, you’re not going to achieve the performance or response time you may require. I’ve found that even small applications generally involve several functional “tasks.” Design each of those using a finite state machine with distinct sub-functions that can yield() to an event-driven monitor that calls back into those FSMs when an event (hardware interrupt, timer interrupt, user key press, …) indicates there’s something for one of those state machines to do.

    The top-level design task in this approach is to examine the number of these “tasks,” their complexity, their coherency, and the amount of coupling between them. There will be a point at which complexity indicates an RTOS-based design is the only rational approach.

    1. If your requirement could be satisfied in one unified while(1) loop, an embedded OS is a waste of time and resources. However, in my experience, most applications need to handle several tasks at the same time, and they should not affect each other. This is where an OS applies.

      Experienced developers may write a multi-task scheduler specific to their own project, but it still takes months or even years of optimization and adjustment to make it fit various environments. This is the proverbial ‘reinventing the wheel’.

      Spend a little time studying a well-designed OS and contribute your own code to make it work better. That is the rational approach I prefer.

