Optimizing video safety systems using model-based design

Passive safety devices, such as airbags and roof reinforcement, have helped save the lives of many crash victims. Today, leading automobile manufacturers and suppliers have added active safety features that help prevent crashes and rollovers from occurring in the first place.

According to the U.S. Department of Transportation, more than 18,000 deaths per year, or 40% of all automobile-related fatalities in the United States, are caused by unintended lane departures. To address this problem, several companies and academic institutions are designing video-based active safety systems that monitor lane markings and send the driver an audio or tactile warning when the car is about to leave the road or deviate from its lane.

Video-based active safety systems designed for commercial use must be evaluated and optimized under a wide range of conditions. Evaluating even one set of conditions using text-based specifications and hand coding takes so long that it may become impractical to proceed with the project.

Model-based design substantially reduces development and optimization time by putting a system model at the center of the design process. The model is used to define specifications, evaluate design and system performance, automatically generate code, perform hardware-in-the-loop testing, and generate a test harness for testing production hardware. This article explains how model-based design can reduce the time required to develop and optimize a video system that monitors unintended lane departure.

Development challenges
Lane-departure systems are challenging to develop because wear patterns in the road, shadows, occlusion by other vehicles, changes in the road surface, and other features make identifying lane markers difficult. Another problem is that there are many different ways of marking roads in the United States, not to mention the myriad methods used around the world.

Until now, active safety system development has been slowed by traditional development methods, which typically involve text-based specifications and hand coding. Text-based specifications are difficult to manage over frequent design iterations and are often ambiguous, resulting in more meetings between system and implementation teams and, potentially, products that don't meet their requirements. With hand coding, the design must be re-coded at each iteration and the code recompiled, rerun, and debugged. Because C code doesn't provide built-in video viewers, the design can't be tested against actual video footage until it is built. If the designer discovers at that point that the basic methodology doesn't work, the design must begin again from scratch.

Hierarchical method
Model-based design enables a hierarchical approach in which designers can quickly build a graphical model using prebuilt primitives and advanced algorithms, incorporating their own C code only when required. This approach makes it possible to provide a working prototype that processes actual video footage in weeks instead of months. The model becomes an executable specification that designers can rapidly modify and evaluate by simulating it on development hardware and immediately viewing the results. The model can later be used to generate C code that can be downloaded to an embedded hardware system to evaluate the prototype in real time. Because the model is developed independently of an embedded hardware target, it can easily be retargeted to different platforms and re-used in future systems.

The lane-detection system I describe in this article is designed to detect one edge of the lane through a change in color from dark to light and the other edge through a change from light to dark. I'll only describe the portion of the system that handles straight sections of road while detecting both right- and left-side lane markers using the Hough transform algorithm. This basic example addresses the issue of lane drift. A complete system would probably require other modules to handle curves in the road, perhaps using steerable filters and Kalman filtering. To improve reliability, the complete system could also include other systems, such as global positioning and adaptive cruise control radar.

Implementing the design
The design of the lane-detection system is implemented using the Simulink environment and the Video and Image Processing Blockset from The MathWorks. The implementation begins with a high-level block diagram, shown in Figure 1, to which detail is added as the design progresses. The block on the left reads a video signal from a video file. (As an alternative, the Video Input block from the Image Acquisition Toolbox can be used to read live video from a camera located on a test vehicle.) The Lane Marker Detection and Tracking subsystem determines the position of the lane markers on the road. The Track Lane subsystem superimposes the lane markers onto the original video signal and outputs the resulting signal to a video display. In a production system, additional blocks would compare the position of the car to the lane markers and provide feedback to the driver when the car approaches them, for example by shaking the driver's seat or issuing an audio warning.

Figure 1: High-level block diagram of lane detection and tracking system

The subsystem, shown in Figure 2, is developed by dragging and dropping prebuilt video- and image-processing blocks that provide basic primitives and advanced algorithms for designing embedded imaging systems. The Confine Field of View subsystem saves computational time by reducing the field of view to the relevant road surface. The RGB-to-Intensity block converts the color video signal to a less computationally intensive grayscale image used by the lane-detection algorithm. To reduce noise, the median filter compares each pixel with its neighbors and eliminates outliers. The In Lane Edge block applies histogram-based thresholding and edge detection to identify areas where pixels change from light to dark or dark to light, which might indicate lane edges.

Figure 2: Expanded view of the lane marker detection and tracking subsystem
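To make the front-end processing concrete, the following C sketch shows roughly what the RGB-to-Intensity and median filter blocks compute for 8-bit video. It is illustrative only, not the blockset implementation: the luma weights are the common ITU-R BT.601 values, and the function names are ours.

    #include <stdint.h>
    #include <stdlib.h>

    /* Grayscale conversion for one pixel using the common ITU-R BT.601
       luma weights (0.299*R + 0.587*G + 0.114*B), integer-scaled by 2^8. */
    static uint8_t rgb_to_intensity(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint8_t)((77u * r + 150u * g + 29u * b) >> 8);
    }

    static int cmp_u8(const void *a, const void *b)
    {
        return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
    }

    /* 3x3 median for one interior pixel: sort the nine neighbors and take
       the middle value, which suppresses single-pixel outliers. */
    static uint8_t median3x3(const uint8_t *img, int width, int x, int y)
    {
        uint8_t win[9];
        int i = 0, dx, dy;
        for (dy = -1; dy <= 1; dy++)
            for (dx = -1; dx <= 1; dx++)
                win[i++] = img[(y + dy) * width + (x + dx)];
        qsort(win, 9, sizeof win[0], cmp_u8);
        return win[4];
    }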

The Lane Detection Using Hough Transform Block is the heart of the lane-detection system. Hough transformation maps points in the Cartesian image space to sinusoids in the polar space (Hough parameter space) using the following equation:

ρ = x × cos(θ) + y × sin(θ)

When viewed in Hough parameter space, points that are collinear in the Cartesian image space become readily apparent as they yield curves that intersect at a common (ρ,θ) point, as shown in Figure 3.

Figure 3: Parametric representation of a straight line

The transform is implemented by quantizing the Hough parameter space into finite intervals, or accumulator cells. As the algorithm runs, each (xᵢ, yᵢ) is transformed into a discretized (ρ,θ) curve, and the accumulator cells that lie along this curve are incremented. Peaks in the accumulator array strongly suggest that a corresponding straight line exists in the image.
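The C sketch below illustrates the voting step for a single "on" pixel. It is a simplified stand-in for the generated code shown later in Listing 1; the bin counts and quantization parameters are assumptions chosen for illustration.

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N_THETA 90     /* assumed bins; theta spans -90 to 0 degrees */
    #define N_RHO   400    /* assumed bins for the quantized rho axis    */

    /* Cast one pixel's votes: for every quantized theta, compute rho and
       increment the accumulator cell its (rho, theta) curve passes through. */
    static void hough_vote(unsigned acc[N_THETA][N_RHO],
                           int x, int y, float rhoMin, float rhoStep)
    {
        int t;
        for (t = 0; t < N_THETA; t++) {
            float theta = (-90.0f + t) * (float)(M_PI / 180.0);
            float rho   = x * cosf(theta) + y * sinf(theta);
            /* nearest bin; rho - rhoMin is nonnegative by construction */
            int r = (int)((rho - rhoMin) / rhoStep + 0.5f);
            if (r >= 0 && r < N_RHO)
                acc[t][r]++;   /* peaks in acc indicate likely lines */
        }
    }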

In the model, the Hough Transform block outputs a parameter space matrix whose rows and columns correspond to the ρ and θ values in the equation, respectively, as shown in Figure 4. Peak values in this matrix represent potential lines in the input image. Operational parameters can be modified by entering values in a dialog box. For example, the theta resolution parameter specifies the spacing in radians of the Hough transform bins along the θ axis, and the rho resolution parameter specifies the spacing of the bins along the ρ axis. The BW port is used for inputting a matrix that represents a binary image using a Boolean data type. The block outputs the Hough parameter space matrix and has optional theta and rho ports that output vectors of θ and ρ values. The Hough, theta, and rho ports may be set to double-precision floating-point, single-precision floating-point, or fixed-point data types.

Figure 4: Expanded view of lane detection using Hough transform subsystem

If the lane position given by the Hough Transform block changes abruptly with respect to its values in the previous frame, the Rho Theta Correction subsystem discards the current position and keeps the previous position. The maximum allowable variations in ρ and θ with respect to their previous values are:

Abs( ρ(current) - ρ(previous) ) ≤ 30 pixels
Abs( θ(current) - θ(previous) ) ≤ 10 degrees
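A minimal C sketch of this correction logic, using the thresholds above (the function and variable names are ours, not the subsystem's):

    #include <math.h>

    #define MAX_RHO_JUMP   30.0f   /* pixels, per the thresholds above */
    #define MAX_THETA_JUMP 10.0f   /* degrees                          */

    /* Keep the previous frame's lane estimate when the new one jumps too far. */
    static void rho_theta_correct(float *rho, float *theta,
                                  float prevRho, float prevTheta)
    {
        if (fabsf(*rho - prevRho) > MAX_RHO_JUMP ||
            fabsf(*theta - prevTheta) > MAX_THETA_JUMP) {
            *rho   = prevRho;      /* discard the abrupt change */
            *theta = prevTheta;
        }
    }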

Once we have the lane's parameters in the Hough parameter space, we can map them back to Cartesian image space and find the points where the line intersects the image boundaries using the Hough Lines block. The Line Drawing and Image Construction block overlays these coordinates onto the reduced video image output by the In Lane Edge block.
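For a non-vertical line, the back-mapping amounts to solving the line equation for y at the left and right image borders. The following C sketch shows the idea; it is a hedged simplification, since the actual Hough Lines block also handles vertical lines and clips against all four image boundaries.

    #include <math.h>

    /* Solve rho = x*cos(theta) + y*sin(theta) for y at the left and right
       image borders to get two drawable endpoints. Handles only the
       non-vertical case (sin(theta) != 0). */
    static int line_endpoints(float rho, float theta, int width,
                              int *x0, int *y0, int *x1, int *y1)
    {
        float s = sinf(theta), c = cosf(theta);
        if (fabsf(s) < 1e-6f)
            return 0;   /* vertical line: not handled in this sketch */
        *x0 = 0;
        *y0 = (int)(rho / s + 0.5f);
        *x1 = width - 1;
        *y1 = (int)((rho - (width - 1) * c) / s + 0.5f);
        return 1;
    }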

Refining the design concept
Our lane-tracking example shows how quickly developers can create executable specifications to prove concepts and validate and optimize algorithms in the early stages of design. You can run simulations at any point to evaluate performance, for example by viewing the video stream in real time. You can then add, remove, or move blocks or change parameters and immediately assess the impact of these changes.

In particular, you can refine the model by adjusting parameters and viewing the effect on system performance in the video monitor. For example, increasing the theta and rho resolution parameters improves line-detection capability along the θ and ρ axes, respectively, at the cost of an increase in response time. The tradeoff between line-detection accuracy and system performance can be optimized in a matter of minutes: the designer simply opens the Hough Transform block dialog (shown in Figure 5), changes the parameter values, and views the performance on the video monitors and frame counters.

Figure 5: Hough transform block mask

From floating to fixed point
Our lane-detection system was designed in floating point. From the beginning, the intention was to implement the design in fixed point to reduce hardware requirements and power consumption. When programming in C, the change from floating-point to fixed-point arithmetic can consume 25% to 50% of the total design time. With model-based design, this change simply involves changing parameters in various blocks. Design time is saved because each block inherits its data type from the preceding block, and a block's data type automatically updates when the data type of the preceding block changes. We therefore started by changing the output of the video source block to unsigned 8-bit integers.
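At the C level, this retargeting amounts to replacing floating-point arithmetic with scaled integer arithmetic. The sketch below shows the idea for the ρ computation using an assumed Q15 sine/cosine table; the scaling the generated code actually uses may differ.

    #include <stdint.h>

    /* Q15 representation: a float in [-1, 1) scaled by 2^15. */
    #define Q15(x) ((int16_t)((x) * 32768.0f))   /* e.g. Q15(0.5f) == 16384 */

    /* The same rho = x*cos(theta) + y*sin(theta), computed with Q15 table
       entries instead of floats. The 32-bit accumulator keeps the full
       precision of the 16x16-bit products; shifting right by 15 returns
       the result to pixel units. */
    static int32_t rho_q15(int x, int y, int16_t cosQ15, int16_t sinQ15)
    {
        return ((int32_t)x * cosQ15 + (int32_t)y * sinQ15) >> 15;
    }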

Once the system is fully defined, we generate C code, as shown in Listing 1, and run the design on the Texas Instruments TMS320DM642 EVM Video/Imaging Fixed-Point digital signal processor (DSP), an evaluation platform for prototyping video systems. An advantage of using this board during the prototype phase is that it contains needed peripherals, such as analog-to-digital converters. This means, for example, that a video input can simply be plugged into the board. To target the DM642, the design engineer simply changes drivers to optimize the model for TI's Real-Time Data Exchange (RTDX) application. RTDX provides bidirectional communication between the host-side Code Composer Studio and the target application. Embedded Target for TI 'C6000 Platform generates a C-language real-time implementation of the Simulink model that supports on-chip and onboard peripherals.

Listing 1: Automatically generated C code for the Hough transform

    /* Compute the Hough Transform */
    for (n = 0; n < inCols; n++) {
        int_T saved_idx;
        for (m = 0; m < inRows; m++) {
            if (uBW[n*inRows + m]) {            /* if pixel is 1 (on) */
                saved_idx = numThetaBins - 1;
                for (thetaIdx = 0; thetaIdx < numThetaBins; thetaIdx++) {
                    /* theta varies from -90 to 0 */
                    /* x*cos(theta) + y*sin(theta) = rho */
                    real32_T myRho = n*(-sineTablePtr[saved_idx-thetaIdx]) +
                                     m*sineTablePtr[thetaIdx];
                    real32_T tmpRhoIdx = slope*(myRho - firstRho); /* convert to bin index */
                    rhoIdx = (tmpRhoIdx > 0) ? (int_T)(tmpRhoIdx + 0.5)
                                             : (int_T)(tmpRhoIdx - 0.5);
                    yH[thetaIdx*rhoLen + rhoIdx]++; /* increment counter */
                }
                ...

Link for Code Composer Studio is used to transfer data between Simulink and TI's Code Composer Studio IDE; it controls Code Composer Studio during testing and debugging and enables real-time data exchange while the target application is running. Using Real-Time Workshop, we automatically generate embeddable ANSI/ISO C code from the lane-detection model.
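On the target side, streaming data such as the detected (ρ, θ) values back to the host goes through the RTDX library. The fragment below is a sketch of that pattern using RTDX's target-side API; it assumes the output channel has been enabled (from the host or via RTDX_enableOutput), the function name is ours, and error handling is omitted.

    #include <rtdx.h>                    /* TI RTDX target-side API */

    RTDX_CreateOutputChannel(ochan);     /* channel object visible to the host */

    void send_lane_params(float rho, float theta)
    {
        float msg[2];
        msg[0] = rho;
        msg[1] = theta;
        /* Skip this frame if the previous write is still draining to the host */
        if (!RTDX_channelBusy(&ochan))
            RTDX_write(&ochan, msg, sizeof(msg));
    }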

After validating the prototype performance, we use real hardware to provide more realistic measurements of response time by plugging a video camera into the board and connecting the board output to a video monitor. With model-based design this is accomplished by simply going back to the model and changing a few drivers. You can use the model to validate the production hardware by generating a test harness to compare results from the model with results from the physical prototype.

Model-based design streamlines the design of high-performance, embedded video-based active safety systems. You can quickly generate working proof-of-concept designs and conduct rapid design iterations and parameter optimization through a unified design, simulation, and test environment. Model-based design maintains an executable specification that easily manages changes and enables a hierarchical understanding of the system. Automatic code generation eliminates the time and errors involved in hand coding and simplifies the process of targeting new hardware or moving from floating-point to fixed-point designs.

Dave Jackson is product marketing manager for video and signal processing at The MathWorks, Inc. His professional interests include new-product development and launches and technology-adoption issues.
