Vintage multicore - Embedded.com

Vintage multicore

After many years of hype, multicore and multiple-CPU embedded systems are now becoming mainstream. There are numerous articles about multicore design that address different hardware architectures (homogeneous and heterogeneous multicore) and software architectures (AMP: asymmetrical multi-processing and SMP: symmetrical multi-processing). In this article the development of an AMP system is outlined, highlighting various challenges that were addressed. What is unusual is that the design was completed in 1981!

It often appears that there is nothing truly new in the world. With technology, it is often a question of the time being right. The programming languages that we use nowadays are mostly developments of 30-40 year old technology. The current enthusiasm for multicore designs is really nothing new either. Browsing the literature turns up titles like “Multi-core is Here Now” that have been appearing for at least 5 years.

But multicore goes back further still. I was working on a multi-core system in 1980 …

How it all started
It was my first job out of college and I worked for a company that made materials testing instruments – large machines and systems that stressed and broke things under controlled conditions. The use of computer or microprocessor control was new. Hitherto, the machines had been controlled by racks of analog electronics, with meters and chart recorders providing results. I worked in the division that provided the computer control. Initially, the approach was simply to link a mini-computer to a traditional console. The next – and, at the time, brave – step was to replace the entire console with a microprocessor where a keypad enabled input of parameters and selection of settings from menus on a screen. Of course, a mouse or touch screen might have been better, but that technology would not appear for some years.

The project to which I was assigned was to facilitate the “user programmability” of the new microprocessor-controlled machines – the “User Programmability Option” or “UPO”. It was decided that the best way to provide this capability would be to add an additional computer instead of potentially compromising the real-time behavior of the controlling microprocessor. This is exactly how I might advise a customer today who is designing a multicore system with real-time and non-real-time components.

The processors
The advanced console was built around a Texas Instruments 9900 microprocessor, which was one of the first true 16-bit, single-chip devices on the market. It had an advanced architecture, with some interesting pros and cons: it could intrinsically support multi-threading in a very simple way, with context saving accommodated in hardware; but its registers were mostly RAM based, which, at the time, was a significant performance limiter. The instruction set and addressing modes bore some similarity to the 68000. I recall that the documentation was confusing, as the bits were numbered backwards, with the most significant bit being #0. This part of the system was programmed in Forth. I have no idea why this design decision was made, but I found the language intriguing and my interest persists.
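The reversed bit numbering mentioned above can be captured in a small sketch. The helper name `ti_bit_mask` is my own invention for illustration: it maps the TI convention (bit #0 is the most significant bit of a 16-bit word) onto the conventional LSB-is-bit-0 masks used today.

```c
#include <stdint.h>

/* In the TI-9900 documentation, bits were numbered from the most
   significant end: bit #0 was the MSB of a 16-bit word. A mask for
   "TI bit n" therefore tests conventional bit (15 - n). */
static uint16_t ti_bit_mask(int ti_bit)
{
    return (uint16_t)(1u << (15 - ti_bit));
}
```

Reading the data manual with modern eyes, TI bit 0 corresponds to the mask 0x8000 and TI bit 15 to 0x0001.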

The UPO computer was an SBC-11. The “11” came from the internal processor, which was essentially a DEC PDP-11, a mini-computer which was familiar to us at the time. “SBC” was apparently short for “shoe box computer”, because that is what it looked like. I have a suspicion that this was a joke and it actually stood for “single board computer”, as it does today. We implemented user programmability using a variant of the BASIC language, with some extensions to access capabilities of the testing machine.

Interprocessor communications
Using multiple CPUs (or cores) presents a variety of challenges. One is the division of labor, which was reasonably straightforward in this case. Another is communication between the processors …

In designing the UPO, we considered a number of means by which the two CPUs might be connected. As they were separate boxes, serial and parallel connections were considered. But we were concerned about any possible compromise of the real-time performance of the console microprocessor. Also, we did not want the user to be faced with the UPO freezing while it waited for attention from the console. So, clearly a buffering mechanism was needed and shared memory seemed to be a good option.

A small memory board was designed. I have no idea of the hardware architecture, except that I seem to recall that the TI-9900 had priority over the SBC-11, as it could not afford to be delayed by slow memory access. If I remember correctly, the board was 2K (words, probably).

Protocol
It was down to us to define a protocol for communication, so we aimed to produce something that was simple and reliable. We divided the memory into two halves; one was a buffer for communication from the UPO to the console and the other for the opposite direction. The first word of each buffer was for a command/status code, which was simply a non-zero value. We did not use interrupts. The receiving CPU just polled the first word when appropriate, awaiting a non-zero value. When a command was found, any data could be copied and the command word cleared to zero, indicating that the processing was complete. So, the UPO sending a command to the console might go through a sequence like this:

  • Write data to the buffer [second word onwards].
  • Write a command to the first word.
  • Poll the word, waiting for it to become zero.

If it was expecting a response, it would then start monitoring the other buffer. Of course, there were other facilities to handle a situation where one CPU did not respond after a timeout period.
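The polled mailbox scheme described above can be sketched in C. This is a minimal illustration, not the original implementation: the structure name, function names, and buffer size are my own, and the real board held roughly 2K words rather than the 32 used here.

```c
#include <stdint.h>
#include <string.h>

#define BUF_WORDS 32  /* illustrative size; the real board was ~2K words */

/* One mailbox per direction. Word 0 is the command/status word:
   non-zero means a command is pending, zero means the buffer is free.
   The remaining words carry the payload. */
typedef struct {
    volatile uint16_t command;
    uint16_t          data[BUF_WORDS];
} mailbox_t;

/* Sender side: wait for the buffer to be free, copy the payload,
   then write the (non-zero) command word last to publish it. */
static void send_command(mailbox_t *box, uint16_t cmd,
                         const uint16_t *payload, int nwords)
{
    while (box->command != 0)
        ;  /* previous command not yet consumed */
    memcpy(box->data, payload, (size_t)nwords * sizeof(uint16_t));
    box->command = cmd;
}

/* Receiver side: poll the command word, copy out any data, then
   clear the word to zero to signal that processing is complete. */
static int poll_command(mailbox_t *box, uint16_t *payload, int nwords)
{
    uint16_t cmd = box->command;
    if (cmd == 0)
        return 0;  /* nothing pending */
    memcpy(payload, box->data, (size_t)nwords * sizeof(uint16_t));
    box->command = 0;  /* acknowledge completion */
    return cmd;
}
```

Writing the data before the command word is the essential ordering: the receiver never sees a non-zero command until the payload is already in place, which is what lets the scheme work without interrupts or locks.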

Nowadays, multicore and multi-chip systems have a variety of interconnection technologies, but shared memory is still common. A number of standardized protocols have been developed over the years, including derivatives of TCP/IP. In recent years, the Multicore Association produced the Multicore Communications API (MCAPI), which is rapidly gaining broad acceptance in multicore embedded system designs.

Challenges
When we hooked up the shared memory and started to send test messages between the processors, we hit a problem: they seemed to get scrambled. At first we assumed that there was a problem with the memory board, but it was checked by the hardware guys, who pronounced it healthy.
Then we spotted a pattern: the bytes of each 16-bit word were getting swapped. We thought that it was a wiring problem with the board, but studying the schematics and the board layout showed no error.

Of course, the reason for the problem was that the two CPUs were different architectures, from different manufacturers, each of whom had a different idea about which byte went where in the word. Now I would describe one as big-endian and the other as little-endian, but I did not have the vocabulary back then.

An adjustment to the board design could have made this problem go away. But of course it was too late for that. So we had to resort to the age-old method to rectify a problem found late in the design process: fix it in the software. I simply put a byte swap on one side of the interface and the other side was none the wiser.
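The software fix amounts to swapping the two bytes of every 16-bit word on one side of the interface. A sketch of such a swap (the function name is mine, not from the original code):

```c
#include <stdint.h>

/* The two processors disagreed on byte order within a 16-bit word
   (one big-endian, one little-endian), so every word crossing the
   shared memory was byte-swapped on one side of the interface. */
static uint16_t swap16(uint16_t w)
{
    return (uint16_t)((w << 8) | (w >> 8));
}
```

Because the swap is its own inverse, it does not matter which side performs it, as long as exactly one side does.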

How would I do it now?
In the end, we got the UPO working to our satisfaction and I think we even sold a few of them. It is interesting to consider how I might build such a system now, thirty-odd years later.

First off, I would design the console as a multicore system. There would probably be one core that would do non-real-time work (user interface, data storage, networking, etc.) and maybe two more to do real-time work like data sampling and control of the servo-hydraulics. The UPO would just be an app on a standard Windows PC with a USB interface to the console. I have no insight into how the company builds testing systems now, but it would be interesting to compare notes with my 2013 counterparts.

Colin Walls has over thirty years’ experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Embedded (the Mentor Graphics Embedded Software Division), and is based in the UK. His regular blog is located at mentor.com/colinwalls.
