
Microkernel vs monolithic

Slashdot reopened the endless Linus Torvalds vs. Andy Tanenbaum debate about microkernels and monolithic operating systems. This is a silly discussion, akin to debating bicycles versus cars: both are forms of transportation that meet differing needs. Both can and should coexist in the people-moving ecology.

Tanenbaum, of course, is the leading light behind the as-yet-incomplete Minix operating system that was originally developed as a teaching tool. Minix is a microkernel whose kernel-mode component is less than 4,000 lines of C.

An open-source product released under a BSD-style license, it's targeted at both (larger) embedded systems and desktop applications. Currently only available in x86 versions, it is being ported to XScale and PowerPC processors. Minix is an ongoing research project that hasn't made it into the mainstream as yet.

Minix is a microkernel, defined more or less as a very small operating system that provides system calls to manage basic services like handling threads, address spaces, and inter-process communications. A microkernel relegates all other activities to "servers" that exist in user space.

A big monolithic OS (like Linux and Windows), on the other hand, provides far more services in the protected kernel space. Linux's kernel is over 2 million lines of code; Windows' is far bigger. Monolithic kernels have been tremendously successful and do a yeoman's job of running the world's desktops and many embedded systems.

Where most operating systems have complex and often conflicting design goals, plus the agony of support for legacy code, microkernels tout reliability as their primary feature. A bug in a device driver, for instance, only crashes that driver and not the entire system. Judicious use of a memory management unit ensures that non-kernel servers live in their own address spaces, independent of each other, and protected from each other.

If a server crashes, the kernel can restart that component rather than having the entire system die or fall into a seriously degraded mode. Advocates of the monolithic kernel note that the microkernel is far from a panacea and that malicious attacks can still cripple the system (for more on this, see Wikipedia).

Regardless of the debate, the philosophy behind microkernels fascinates me. See www.minix3.org/reliability.html for how Minix, for instance, is designed for reliable operation (which is, in my opinion, far more important than the desire to pile features into already bloated code).

Minix stresses a small kernel size. It's a whole lot easier to ensure that 4,000 lines of code are correct than 2 million. Because device drivers and other typically buggy features are installed in user space, most of the operating system simply cannot execute privileged instructions or access memory or I/O belonging to another process or resource. Infinite loops disappear, since the scheduler lowers the sick server's priority till it becomes the idle task.
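
As a sketch of what that separation looks like in practice, here is a minimal user-space driver loop in C. The message layout and the ipc_receive()/ipc_reply() calls are hypothetical stand-ins for whatever primitives a real microkernel (Minix, QNX, or another) provides; the point is simply that the driver is an ordinary process that does everything through requests and replies, never through privileged instructions.

#include <stdint.h>

/* Hypothetical message format and IPC calls; a real microkernel
 * defines its own equivalents. */
typedef struct {
    int      source;   /* endpoint that sent the request */
    int      type;     /* e.g. DEV_READ, DEV_WRITE       */
    uint32_t offset;
    uint32_t count;
    void    *buffer;   /* a grant or handle in a real system */
} message;

extern int ipc_receive(message *m);          /* block for the next request */
extern int ipc_reply(int dest, message *m);  /* send the result back       */

enum { DEV_READ = 1, DEV_WRITE = 2 };

int main(void)
{
    message m;

    /* The driver runs as an ordinary user process: if it crashes or
     * wedges, only this process is affected, never the kernel. */
    for (;;) {
        if (ipc_receive(&m) != 0)
            continue;              /* spurious wakeup: just retry */

        switch (m.type) {
        case DEV_READ:
            /* ... perform the read via kernel-mediated I/O ... */
            break;
        case DEV_WRITE:
            /* ... perform the write ... */
            break;
        default:
            m.type = -1;           /* unknown request */
            break;
        }
        ipc_reply(m.source, &m);
    }
}

If this process dies, its clients see a failed request; the kernel itself keeps running.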

A reincarnation server regularly pings each server; those that don't respond are killed off and restarted. That's pretty cool.
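
Conceptually the reincarnation server is just a supervisory loop. Here's a minimal sketch, assuming hypothetical ping_server(), kill_server(), and restart_server() helpers and a fixed table of supervised servers; the real Minix code is naturally more involved.

#include <stdbool.h>
#include <unistd.h>

#define NUM_SERVERS    4
#define PING_PERIOD_S  1

/* Hypothetical helpers standing in for the real system's IPC and
 * process-management calls. */
extern bool ping_server(int id);     /* true if the server replied    */
extern void kill_server(int id);     /* force-terminate the server    */
extern void restart_server(int id);  /* reload it from a clean image  */

int main(void)
{
    for (;;) {
        for (int id = 0; id < NUM_SERVERS; id++) {
            /* A server that is dead, wedged, or spinning will not
             * answer the ping and gets replaced with a fresh copy. */
            if (!ping_server(id)) {
                kill_server(id);
                restart_server(id);
            }
        }
        sleep(PING_PERIOD_S);
    }
}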

Every firmware engineer should read www.minix3.org/reliability.html and think deeply about Minix's philosophy of reliability. The idea that bugs in big systems are inevitable, but that we can build fault-tolerant code that survives attacks and defects, is important. It's worth thinking about whether you use a micro- or monolithic kernel, or even just a while(1) loop.

I believe the next great innovation in embedded processor design will be a very old idea: the memory management unit. An MMU in every CPU, coupled with code that isolates each operating-system component and task into distinct hardware-protected memory areas, can and will lead to much more reliable firmware.
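
To give a feel for what that hardware isolation takes on a small part, here is a sketch of programming a single region of an ARMv7-M (Cortex-M3/M4 class) MPU so that a task's 4 KB data block is read/write but not executable. The register addresses and bit fields below are my reading of the ARMv7-M documentation and should be checked against your part's reference manual; a real RTOS would also keep a background region for privileged code, issue the required memory barriers, and reprogram regions on every context switch.

#include <stdint.h>

/* ARMv7-M MPU registers (fixed system addresses; verify against your
 * part's documentation before relying on them). */
#define MPU_CTRL  (*(volatile uint32_t *)0xE000ED94u)
#define MPU_RNR   (*(volatile uint32_t *)0xE000ED98u)
#define MPU_RBAR  (*(volatile uint32_t *)0xE000ED9Cu)
#define MPU_RASR  (*(volatile uint32_t *)0xE000EDA0u)

/* RASR fields: ENABLE = bit 0, SIZE = bits 5:1, AP = bits 26:24, XN = bit 28. */
#define RASR_ENABLE        (1u << 0)
#define RASR_SIZE(log2sz)  (((log2sz) - 1u) << 1)  /* region = 2^log2sz bytes */
#define RASR_AP_RW         (3u << 24)              /* read/write, all modes   */
#define RASR_XN            (1u << 28)              /* no instruction fetches  */

/* Give a task read/write (but not execute) access to its own 4 KB data
 * block; accesses outside configured regions fault. */
void mpu_map_task_ram(uint32_t base_addr)
{
    MPU_RNR  = 1u;                   /* select region 1                     */
    MPU_RBAR = base_addr & ~0xFFFu;  /* base must be aligned to region size */
    MPU_RASR = RASR_SIZE(12) | RASR_AP_RW | RASR_XN | RASR_ENABLE;
    MPU_CTRL = 1u;                   /* enable the MPU (a real system would
                                        also set up a background region and
                                        add DSB/ISB barriers here)          */
}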

What do you think?

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .

Reader Response


I began my programming career working with a microkernel OS (QNX). When I started to develop with Linux I made up my mind which approach I preferred. The learning curve associated with using a microkernel OS is minuscule. You only need to concern yourself with the parts of the system that you are going to use, and all your code runs in the same type of environment, be it a driver or an application. (This additionally aids in debugging drivers.)

When we began developing our app with QNX we started with a very large version of the OS, and when it came time to deliver we were able to cut out of the OS everything we weren't using to fit the remaining OS and our code onto a very small flash disk.

– Chris Michael
Dallas, TX


The ARINC 653 operating system offered by vendors such as Wind River is in line with these future directions. This OS is currently used in many avionics systems.

– Glenn Edgar
San Diego, CA


It would be interesting to buy a suite of kernels that you can use depending on the type of application you want to design. ICs come in ASIC flavors; why not kernels? Kernel Specific OS (KSOS) for the embedded market.

priority based ==> priority specific krnl
msg based ==> msg specific krnl
polling based ==> polling specific krnl
time based ==> time specific krnl
etc ==> etc specific krnl

– Steve King
Tucson, AZ


(1) DEC was looking at converting to a micro-kernel with protected-mode device drivers, in 1973!

(2) The kernel is like one leg of a stool. The microprocessor needs to be a good architecture and the chip needs to have a good support set (MMU, privileged instructions, etc.).

(3) Make the micro-kernel too small and it may be moved into the HW.

(4) What level of MMU is needed for various apps? I know of five levels of MMUs.

– William Gustafson
HW/SW Engineer
Leviton
Tualatin, OR


I think the debate is not that they are vehicles but rather which one is a bicycle and which one is a car. It all started when people misinterpreted Tanenbaum when he declared 'Linux is obsolete'.

– Mohamad Yusof
Kuala Lumpur, Malaysia


Hello, Jack.

I always enjoy reading your articles and have a couple of your books. Thanks for sharing your knowledge and insights with the engineering community at large. I agree with your conclusion; the basic problem is that bugs in tasks can damage the integrity of the system and make it do something it was not supposed to. Inter-task protection is very important, and I agree that MMUs will become required for high-integrity systems; however, a lot of the complexity of the table-walking MMU can be avoided by using a simple MPU, which can give many of the same benefits. Of course, these too are rarely implemented and even more rarely supported.

Regards
– Ata Khan
San Jose, CA


Jack,

Otherwise a good article, but I can't believe you wrote so many words about microkernels and reliability yet didn't mention QNX. Moreover, you didn't mention that QNX's POSIX API makes it trivial to port open source software to this very robust realtime microkernel OS.

I work in large-system optical telecom R&D and I've come to really appreciate designing, implementing, and debugging a QNX-based system.

– Bob Barker
Hollywood, CA


I agree that it's essentially a comparison between apples and oranges. But my question is: why isn't any microkernel a big hit on the desktop market the way monolithic kernels are? Is stability a buzzword only for the embedded world? Desktop users also like their systems to be more stable.

– Himanshu Chauhan
Jaipur, India


Hi Jack,

Have enjoyed your columns for ages and learned a lot from them.

One micro-kernel that hasn't been mentioned as I write this is the Mach micro-kernel. It is what underlies Mac OS X. Now that we have OS X on Intel and the ability to test the same machine under OS X and Windows, the reason for the debate becomes clear. As usual, it's speed vs safety. I have heard that Maximum PC tested a MacIntel under both OSes with exactly the same (but native to each OS) applications. Windows was faster.

As I understand it, the speed problem is due to all the time-consuming transitions to/from protected mode that you need to get system work done. This is not fixable with a given architecture, but it IS possible to fix with a new processor or an architecturally revised one.

Safety does win on occasion over speed, especially when other benefits accrue. Take NTFS in Windows. It's slower than any version of FAT, but has the major advantage of being able to handle larger disk partitions, so even home users have gone to it, with safety and error recovery benefits.

In my opinion, it's time for a revolution in CPU architecture. Don't just measure what instructions get used; look at the problems and see if fast and safer hardware will make software better! Your MMU idea is one such possibility, but it's not new. It was simply underdeveloped by Intel in the 8086 family: yes, I mean that universally hated segmented memory!

– Bob Pegram
Bridport, VT

