Editor’s Note: In Part 2 of a series excerpted from the book Better Software. Faster!, Victor Reyes looks at the problems of hardware-dependent software development in an AUTOSAR environment and how virtual Hardware-in-the-Loop (vHIL) methods deal with the complexities.
In 2003, OEMs and Tier 1 automotive suppliers founded the AUTOSAR consortium to address the technical and business challenges they all faced with the increase in software development and test costs. The premise of that conversation is the principle that still guides the AUTOSAR alliance today: cooperate on standards, compete on implementation. The net goal of the consortium is to drive change in software development, moving from proprietary solutions to standardized, productized and predictable software products.
AUTOSAR focuses on three main areas: software architecture, methodology and application interfaces. In this document we will discuss only the software architecture.
The goal of the AUTOSAR software architecture is to provide a clear separation between the application (differentiation) domain and the infrastructure (commodity) domain. As shown in Figure 3, the AUTOSAR software stack is composed of three layers:
- Application Layer (AL). This layer is composed of several software components (SWCs). A software component encapsulates, for instance, the functions belonging to a certain control strategy. Software components communicate with each other through specific AUTOSAR interface types. The implementation of the software components is independent of the underlying hardware.
- Runtime Environment (RTE). This layer serves as the glue that connects the Application Layer to the Basic Software. This layer is statically generated for a certain mapping and configuration instance. The RTE handles the logical communication between SWCs, independently of whether the SWCs are mapped on the same MCU, ECU or subsystem.
- Basic Software (BSW). This layer contains all the “infrastructure” software required for the Application and RTE layers to perform in a hardware-independent fashion.
The Basic Software is in turn composed of four sub-layers:
- Services Layer, which provides basic services like the operating system (based on the OSEK OS), the communication stack (for CAN, LIN, FlexRay, etc.), the memory management (NVRAM), the diagnostic protocols and the state manager.
- ECU Abstraction Layer, which makes the higher software layers independent of the ECU components and electric properties. Through this layer the application software logical ports have access to the physical ECU I/O; not directly, but through the MCAL drivers.
- Microcontroller Abstraction Layer (MCAL), which provides access to the MCU hardware via specialized drivers. All accesses to the hardware registers are routed through this layer via a predefined API that makes the higher software layers independent of the MCU.
- Complex Drivers (CD), which offer the SWC logical ports direct access to the hardware. This is important for resource- and timing-critical applications (e.g. engine control). Complex drivers are also the Trojan horse for integrating legacy code and highly specialized functionality that OEMs/Tier 1s do not want to have standardized.
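The layering described above can be illustrated with a small toy model: an application-level software component writes to a logical signal, the call passes through an ECU-abstraction function that maps the signal to an MCU channel, and only the MCAL driver touches the (here simulated) memory-mapped register. All function names, the signal map and the register layout are invented for illustration; they are not the real AUTOSAR APIs.

```python
# Toy model of the AUTOSAR layering: the SWC is hardware independent, the
# ECU Abstraction Layer maps logical signals to channels, and the MCAL is
# the only layer that accesses the "hardware" register.

REGISTERS = {"PORT_OUT": 0x00}  # simulated memory-mapped output register

# --- MCAL: the only layer allowed to touch hardware registers ---
def mcal_dio_write_channel(channel_id: int, level: int) -> None:
    if level:
        REGISTERS["PORT_OUT"] |= (1 << channel_id)
    else:
        REGISTERS["PORT_OUT"] &= ~(1 << channel_id)

# --- ECU Abstraction: maps a board-level signal to an MCU channel ---
SIGNAL_TO_CHANNEL = {"HeadlightRelay": 3}

def io_hwab_set_signal(signal: str, level: int) -> None:
    mcal_dio_write_channel(SIGNAL_TO_CHANNEL[signal], level)

# --- Application SWC: talks only in logical signals ---
def swc_lighting_control(ambient_light: float) -> None:
    io_hwab_set_signal("HeadlightRelay", 1 if ambient_light < 0.2 else 0)

swc_lighting_control(0.1)          # dark: relay on, bit 3 set
print(hex(REGISTERS["PORT_OUT"]))  # 0x8
```

Because the SWC never names a register or channel, remapping the relay to another pin only touches the abstraction table, which is the point of the layering.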
As of version 4.0, AUTOSAR incorporates methodologies for multicore software development and the distribution of software execution across multiple cores. In this model, each core has access to the RTE and has its own copy of the operating system. Access to the AUTOSAR BSW is limited to just one core, which circumvents the need for the basic software to be multicore safe. Once single-core functions are re-mapped to multicore chips, the RTE glues everything together. Communication between the RTEs and BSW is facilitated by the Inter OS-Application Communicator (IOC) module, which is typically implemented via message passing or spinlocks.
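The spinlock flavor of cross-core communication can be sketched as follows. The real IOC is generated C code inside the AUTOSAR OS; in this toy model two threads stand in for two cores, a list stands in for shared memory, and a lock word stands in for the hardware test-and-set the cores would spin on.

```python
# Toy illustration of IOC-style cross-core communication guarded by a
# spinlock: "core 0" (a thread) produces into shared memory, "core 1"
# (the main thread) consumes, and both spin on the same lock.
import threading

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()   # stand-in for a test-and-set word
    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                        # spin, as a core would in shared RAM
    def release(self):
        self._flag.release()

lock = SpinLock()
shared_queue = []                       # "shared memory" between the cores

def core0_send(n):
    for i in range(n):
        lock.acquire()
        shared_queue.append(i)
        lock.release()

t = threading.Thread(target=core0_send, args=(1000,))
t.start()
received = []
while len(received) < 1000:             # "core 1" drains the queue
    lock.acquire()
    if shared_queue:
        received.append(shared_queue.pop(0))
    lock.release()
t.join()
print(len(received))                    # 1000
```

Without the lock around both ends, the append and pop could interleave mid-update; this is exactly the kind of synchronization bug that the multicore debugging discussed later must expose.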
Microcontroller Abstraction Layer and Complex Driver development
The MCAL is the lowest layer of the basic software in the AUTOSAR paradigm, comprising microcontroller drivers, memory drivers, communications drivers and I/O drivers. In the automotive supply chain, two types of players typically address MCAL development. The first is the semiconductor vendor or Tier 2 (e.g. Renesas, Freescale or Infineon). Today it’s expected that semis ship their products bundled with some enabling software. Typically, the chip vendor develops a “good enough” MCAL layer—sufficient to let OEMs/Tier 1s get started on software development.
The second is the software development team at the Tier 1 or OEM, which often re-implements or enhances the MCAL with proprietary features. Obviously, MCAL software cannot be developed until a first prototype of the microcontroller is available. This is where virtual prototypes come into play, as they allow automotive companies to start software development months before the silicon is ready. Moreover, the visibility and controllability characteristics of a virtual prototype are extremely useful for debugging this type of software, beyond what is possible with just a standard software debugger connected to a hardware prototype board.
Developing Complex Driver software is similar to developing MCAL software, but an order of magnitude more complex. Complex drivers provide a “back door” to the hardware that allows Tier 1 and OEM companies to implement their differentiating features and protect them from the scrutiny of competitors. Developing this proprietary software traditionally requires more than the MCU; it also requires the availability of a highly specialized Application Specific Integrated Circuit (ASIC) that sits between the MCU and the plant under control.
From a debugging point of view, both the MCAL and complex driver software implement the interface between the hardware and the software. Being able to correlate what the software intends to do while programming the peripherals with the response of the peripherals is extremely helpful. For instance, you can trace and analyze a scenario where a CAN controller is being programmed to send a CAN message.
Using a virtual prototype you can correlate the instructions executed in the software with the peripheral register accesses and values, the resulting peripheral side-band signal activity (e.g. interrupts), as well as the external I/O activity (e.g. the CAN bus). This can be observed in Figure 4. The function “TransmitMsg” is programming the CAN mailbox 0 (MB0) to send a message with ID 555 (0x22b in hexadecimal) and with the payload “Hello”. At the top of the screenshot, the source code and assembly views are shown. Clicking on an instruction will set the cursor on that location in the trace panel and vice versa. This is very useful to see the effect of the software on the hardware. The chart in Figure 4 shows, from top to bottom:
- Function and instruction traces
- State of the CAN controller (in this case in NORMAL mode)
- Trace of the memory mapped registers (notice that when the CAN message starts to be transmitted, the bit-field TXRX from the ESR status register is set to “1”)
- Content of the mailbox MB0 (highlighted in green from the moment that ID 0x22b is programmed on the CAN controller)
- The state of the CAN bus from the CAN controller side (first IDLE and then TX)
- The message over the CAN bus itself with all the different frame fields represented (SOF, Arbitration, Control and Data, CRC, ACK and EOF)
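The traced scenario can be condensed into a toy model: “TransmitMsg” programs mailbox MB0 with ID 0x22B and the payload “Hello”, the controller raises its TXRX status bit for the duration of the transfer, and the classic CAN frame fields appear on the bus. The register names mimic the article’s screenshots but the layout and bit position are assumptions for illustration only.

```python
# Toy model of the Figure 4 scenario: programming a CAN mailbox and
# observing the resulting status-register and bus activity.

TXRX = 1 << 0          # assumed position of the TXRX bit-field in ESR
regs = {"ESR": 0, "MB0_ID": 0, "MB0_DATA": b""}
bus_trace = []          # sequence of frame fields seen on the bus

def transmit_msg(can_id: int, payload: bytes) -> None:
    regs["MB0_ID"] = can_id          # program the mailbox...
    regs["MB0_DATA"] = payload
    regs["ESR"] |= TXRX              # ...controller starts transmitting
    bus_trace.extend(["SOF", f"Arbitration(ID=0x{can_id:03x})",
                      "Control", f"Data({payload!r})", "CRC", "ACK", "EOF"])
    regs["ESR"] &= ~TXRX             # transmission complete

transmit_msg(0x22B, b"Hello")
print(bus_trace[1])   # Arbitration(ID=0x22b)
```

On a virtual prototype each of these steps shows up on the same timeline, which is what makes the software-intent-versus-hardware-response correlation possible.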
Moreover, you can see the CAN state of the peripheral, whether it is active, disabled, transmitting a message, etc., and you can get information from the peripheral model describing what it is doing (debug logs), all in the same window and across the same timeline, as shown in Figure 5.
You can also create scenarios where stimuli are fed to the peripherals through their interfaces, without requiring any external tool or special equipment. For instance, you can feed a CAN message with a specific ID and payload to a CAN bus using built-in model commands.
Complete scenarios can be automated using the scripting interface. The scenarios can be fairly elaborate since the scripting interface allows adding control breakpoints to software events (e.g. function), time events and hardware events (e.g. access to a register or changing the value of a signal). These control breakpoints can trigger functionality that can, in turn, add new breakpoints or inject/modify values in any peripheral interface.
For instance, Figure 6 below shows a simple command that feeds a CAN message with ID 720 and 2 bytes of payload through CAN bus 1. It will do this exactly when the time reaches 12 sec 81 milliseconds and 818 microseconds.
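A minimal stand-in for that kind of scripted stimulus is a time-ordered event scheduler: register an injection action at an absolute simulation time and fire it when the clock reaches that point. The scheduler API below is invented for the sketch; it is not the actual scripting interface of any virtual prototyping tool.

```python
# Toy sketch of the Figure 6 stimulus: inject a CAN message (ID 720,
# 2-byte payload) on bus 1 exactly at 12 s, 81 ms and 818 us.
import heapq, itertools

events = []                     # (time_us, seq, action) min-heap
injected = []                   # record of injected messages
_seq = itertools.count()        # tie-breaker for same-time events

def at_time(t_us, action):
    heapq.heappush(events, (t_us, next(_seq), action))

def run_until(t_end_us):
    while events and events[0][0] <= t_end_us:
        t_us, _, action = heapq.heappop(events)
        action(t_us)

def inject_can(bus, can_id, payload):
    def action(t_us):
        injected.append((t_us, bus, can_id, payload))
    return action

# 12 s + 81 ms + 818 us, expressed in microseconds
at_time(12_081_818, inject_can(bus=1, can_id=720, payload=b"\x01\x02"))
run_until(20_000_000)
print(injected)   # [(12081818, 1, 720, b'\x01\x02')]
```

The same pattern extends naturally to the software and hardware breakpoints mentioned above: each breakpoint is just another action that can schedule further events or modify peripheral state.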
In case the software requires a more complex closed-loop response that may not be easy to capture within a script, the virtual prototype can be connected to external ASIC and plant models developed with 3rd party tools like Simulink or Saber, as well as a “restbus” simulation tool like Vector CANoe.
Software stack integration and bring up
Software integration and bring up are on the critical path to begin the testing phase. The bulk of the test creation and execution task can only start when a stable software stack is up and running on the target MCU. Hence, it is very important to start the software stack integration and bring up as soon as possible.
Software stack integration is now, thanks to AUTOSAR, a more structured process than it was before. Although the process is more structured, that does not mean it has become trivial. Bringing up a first instance of the software stack that executes as expected on the target MCU architecture is still a daunting task. Just the AUTOSAR configuration requires quite a number of iterations to solve all the dependencies, not to mention the actual testing of the configuration on the target hardware.
Software stack bring up is challenging due to the number of software layers (SWC, RTE, OS, services, MCAL) that an application needs to traverse before truly interacting with the underlying hardware. The same applies for a hardware event making its way up through the software layers until it reaches the actual application. Detecting a software bug in this process requires full visibility across the different software layers, as well as into the underlying hardware. In order to give better insight into a complex software stack like AUTOSAR, a virtual prototype can be extended with dedicated monitors.
These monitors detect and filter tasks and interrupt service routines (ISRs) out of a standard function call graph. This OS-specific awareness is very useful to present an overview of what the software is doing and whether aspects like scheduling, preemption, etc. are working as expected. These views can be extended to any level within the software stack, and the monitor can, for instance, log RTE events and service APIs. Doing so allows pruning the massive amount of information that is typically present in a function trace down to only specific points of interest.
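The pruning such a monitor performs can be sketched in a few lines: given a raw function trace, keep only the entries that matter at the OS level (task switches, ISRs and selected OS service calls). The trace entries and naming convention below are invented for the example; a real monitor would hook the OS kernel inside the virtual prototype rather than post-process a list.

```python
# Toy version of an OS-aware monitor: prune a raw function trace down to
# task activations, ISRs and the OS services of interest.

raw_trace = [
    (10, "Os_Schedule"), (12, "TASK:TASKRCV1"), (40, "Rte_Read_Msg"),
    (55, "memcpy"), (86, "Os_WaitEvent"), (90, "ISR:TimerIsr"),
    (92, "TASK:TASKSND1"), (120, "Rte_Send_Msg"),
]

def os_events(trace):
    """Keep only task switches, ISRs and selected OS service calls."""
    keep = ("TASK:", "ISR:", "Os_WaitEvent")
    return [(t, name) for t, name in trace if name.startswith(keep)]

for t, name in os_events(raw_trace):
    print(f"{t:4d} us  {name}")
```

Extending the `keep` filter to RTE events and service APIs gives exactly the layered views described above, at whatever level of the stack is under suspicion.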
As shown at the top of Figure 7 below, the periodic activation of a set of AUTOSAR tasks can be observed. Since it would be too lengthy to discuss all the interactions for the 200 milliseconds of the complete trace, let’s focus on the first millisecond of execution. Zooming in on that area, it can be observed how the first task to execute is TASKRCV1. Using the AUTOSAR-specific logging we can see that around 86 microseconds this task waits for an event, suspending itself. Zooming in another level, we observe how the next task to execute is TASKSND1.
This task is triggered after a timer fires its interrupt and the interrupt service routine (ISR) handles the context switch. Moreover, we see (using the RTE logging) that during its execution TASKSND1 sends a message and gets preempted by TASKRCV1, which had been waiting for the event associated with the message. More details in the AUTOSAR logging show how TASKRCV1 receives the message and waits again for the next event.
Debugging hardware and software interactions in the real-time and safety-critical world of automotive under-the-hood applications is very complex. Unfortunately, the complexity is only getting worse as multicore architectures make their way into future generations of MCUs. Understanding synchronization and “freedom from interference” issues is extremely important with multicore hardware.
Fortunately, virtual prototypes allow synchronous debugging of multiple cores and complex hardware peripherals. This ensures that when the simulation is paused, e.g. due to a breakpoint in one core, the whole system is paused. All the cores, hardware peripherals, connected plant models, etc. will stop their execution at that exact same moment. This allows observation of every state and thus enables debugging of the complete system.
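The idea of synchronous whole-system pause can be sketched with a toy simulation kernel that advances every component in lockstep: when a breakpoint fires, no component has advanced past the break time, so the entire system state is consistent. Component names and the step granularity are illustrative only.

```python
# Toy sketch of synchronous whole-system debugging: a kernel steps every
# component together, so a breakpoint freezes all of them at the same
# simulated instant.

class Component:
    def __init__(self, name):
        self.name, self.steps = name, 0
    def step(self):
        self.steps += 1

def simulate(components, breakpoint_at, max_steps):
    """Advance all components in lockstep; pause the WHOLE system on a breakpoint."""
    for t in range(max_steps):
        if t == breakpoint_at:
            return t              # everything stops at the same time
        for c in components:
            c.step()
    return max_steps

system = [Component("core0"), Component("core1"),
          Component("can_ctrl"), Component("plant_model")]
paused_at = simulate(system, breakpoint_at=42, max_steps=100)
print(paused_at, {c.name: c.steps for c in system})
```

Contrast this with a hardware board, where halting one core does not instantly halt the peripherals or the plant, so the observed state after the halt is already inconsistent.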
Victor Reyes, Technical Marketing Manager, Synopsys Inc., is the author of a chapter in Better Software. Faster!, from which this series of articles was excerpted. Edited by Tom DeSchutter, the book was published by Synopsys Press and can be downloaded as a free eBook.
For a detailed discussion of how to align virtual prototypes and Simulink models to implement vHIL, download “Virtual Hardware in the Loop: Earlier Testing for Automotive Applications.”