
Using model-based design to optimize video-based vehicle safety systems

Airbags and roof reinforcement are passive safety devices that have helped save the lives of many crash victims. Today, leading automobile manufacturers and suppliers have added active safety features that help prevent crashes and rollovers from occurring in the first place.

According to the U.S. Department of Transportation, more than 18,000 automobile-related deaths are caused by unintended lane departures. To address this problem, several companies and academic institutions are designing video-based active safety systems. These systems monitor lane markings and send the driver an audio or tactile warning when the car is about to leave the road or deviate from its lane.

Video-based active safety systems must be evaluated and optimized under different conditions. Using text-based specifications and hand coding to evaluate a set of conditions takes a lot of time and may therefore prove impractical.

Model-based design substantially reduces development and optimization time by putting a system model at the center of the design process. The model is used to define specifications, evaluate design and system performance, automatically generate code, perform hardware-in-the-loop testing, and generate a test harness for production hardware.

This article explains how model-based design can reduce the time required to develop and optimize a video system that monitors unintended lane departure.

Developing lane-departure systems is challenging. Wear patterns in the road, shadows, occlusion by other vehicles, changes in the road surface and other features make the identification of lane markers difficult. Another problem is that there are many different ways of marking roads in the world.

Until now, active safety system development has been slowed by traditional development methods, which typically involve text-based specifications and hand coding.

Text-based specifications are difficult to manage over frequent design iterations and are often ambiguous. This results in more meetings between system and implementation teams and, potentially, products that don't meet their requirements. With hand coding, the design must be re-coded at each iteration and the code recompiled, rerun and debugged.

Because C code doesn't provide built-in video viewers, it can't be tested against actual video footage until the design is built. If the designer discovers at that point that the basic methodology doesn't work, the design must start again from scratch.

Hierarchical approach
Model-based design enables a hierarchical approach in which designers can quickly build a graphical model using pre-built primitives and advanced algorithms, incorporating their own C code only when required.

This approach makes it possible to provide a working prototype that processes actual video footage in weeks instead of months. The model becomes an executable specification that designers can rapidly modify and evaluate by simulating it on development hardware and immediately viewing the results.

The model can later be used to generate C code that can be downloaded to an embedded hardware system to evaluate the prototype in real time.

Because the model is developed independently of an embedded hardware target, it can easily be retargeted to different platforms and re-used in future systems.

The lane-detection system described in this article is designed to detect one edge of the lane through a change in color from dark to light, and the other edge through a change from light to dark.

This article describes only the portion of the system that handles straight sections of the road while detecting both right- and left-side lane markers using the Hough transform algorithm.

This basic example addresses the issue of lane drift. A complete system would probably require other modules to handle curves in the road (steerable filters and Kalman filtering) and to improve reliability (global positioning).

The lane-detection system's design is implemented using the Simulink environment and the Video and Image Processing Blockset.

The implementation begins with a high-level block diagram to which detail is added as the design progresses (Figure 1, below).

Figure 1: Lane-detection system is implemented using the Simulink environment and the Video and Image Processing Blockset.

The block on the left reads a video signal from a video file. As an alternative, the Video Input block from the Image Acquisition Toolbox can be used to read live video from a camera located on a test vehicle.

The Lane Marker Detection and Tracking subsystem (Figure 2, below) determines the position of the lane markers on the road. The Track Lane subsystem superimposes the lane markers onto the original video signal and outputs the resulting signal to a video display.

Figure 2: The subsystem is developed by dragging and dropping pre-built video- and image-processing blocks that provide basic primitives and advanced algorithms for designing embedded imaging systems.

In a production system, additional blocks would be used to compare the car's position to the lane markers and send feedback when the car approaches the lane markers. Examples of feedback include shaking the driver's seat and issuing an audio warning.

Dragging and dropping
The subsystem is developed by dragging and dropping pre-built video- and image-processing blocks that provide basic primitives and advanced algorithms for designing embedded imaging systems.

The Confine Field of View subsystem saves computational time by reducing the field of view to the relevant road surface. The RGB-to-Intensity block reduces the color signal from the video source to a less computationally intensive grayscale image used by the lane-detection algorithm.
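As a rough illustration of what the RGB-to-intensity conversion amounts to at the pixel level, the following C sketch uses the common ITU-R BT.601 luma weights; the block's exact coefficients are an assumption here, not taken from the product documentation.

/* Convert an 8-bit RGB pixel to a grayscale intensity.
   Integer approximation of Y = 0.299R + 0.587G + 0.114B
   (77 + 150 + 29 = 256, so the >> 8 rescales exactly). */
static unsigned char rgb_to_intensity(unsigned char r,
                                      unsigned char g,
                                      unsigned char b)
{
    return (unsigned char)((77 * r + 150 * g + 29 * b) >> 8);
}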

To reduce noise, the median filter compares each pixel with its neighbors and eliminates outliers. The In Lane Edge block uses histogram-based thresholding edge-detection methods to identify areas where pixels change from light to dark or dark to light, which might indicate lane edges.
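To show the median-filter idea in code, here is a minimal C sketch of a 3x3 median filter over an 8-bit grayscale image; it is a reference version for clarity, not the optimized implementation used by the block.

#include <stdlib.h>

/* qsort comparator for 8-bit pixels. */
static int cmp_u8(const void *a, const void *b)
{
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

/* Replace each interior pixel with the median of its 3x3
   neighborhood, which discards single-pixel outliers (noise). */
void median3x3(const unsigned char *in, unsigned char *out,
               int width, int height)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            unsigned char win[9];
            int k = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    win[k++] = in[(y + dy) * width + (x + dx)];
            qsort(win, 9, 1, cmp_u8);
            out[y * width + x] = win[4];  /* middle of 9 sorted values */
        }
    }
}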

The Lane Detection Using Hough Transform block is the heart of the lane-detection system. The Hough transform maps points in the Cartesian image space to sinusoids in the polar space (Hough parameter space) using the equation:

Rho = x cos(Theta) + y sin(Theta)

When viewed in Hough parameter space, collinear points in the Cartesian image space become readily apparent because they yield curves that intersect at a common (Rho, Theta) point (see Figure 3, below).

Figure 3: When viewed in Hough parameter space, collinear points in the Cartesian image space become readily apparent because they yield curves that intersect at a common (Rho, Theta) point.

The transform is implemented by quantizing the Hough parameter space into finite intervals, or accumulator cells. As the algorithm runs, each (x_i, y_i) is transformed into a discretized (Rho, Theta) curve, and the accumulator cells that lie along this curve are incremented.
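The voting step can be sketched in a few lines of C. The bin counts below are illustrative assumptions (the blockset lets you set the Rho and Theta resolutions, as described next), and the accumulator must be zeroed by the caller.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N_THETA 180   /* assumed 1-degree theta bins */
#define N_RHO   400   /* assumed number of rho bins  */

/* Vote every edge pixel (x, y) into the accumulator along the
   curve Rho = x cos(Theta) + y sin(Theta). */
void hough_accumulate(const unsigned char *edges, int width, int height,
                      unsigned int acc[N_RHO][N_THETA])
{
    double rho_max = sqrt((double)width * width + (double)height * height);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (!edges[y * width + x])
                continue;  /* only edge pixels vote */
            for (int t = 0; t < N_THETA; t++) {
                double theta = t * M_PI / N_THETA;
                double rho = x * cos(theta) + y * sin(theta);
                /* Map rho from [-rho_max, rho_max] to a bin index. */
                int r = (int)((rho + rho_max) * (N_RHO - 1) / (2.0 * rho_max));
                acc[r][t]++;
            }
        }
    }
}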

Peaks in the accumulator array strongly suggest that a corresponding straight line exists in the image. In the model, the Hough Transform block outputs a parameter space matrix whose rows and columns correspond to the Rho and Theta values in the equation, respectively (Figure 4, below).

Figure 4: The Hough Transform block outputs a parameter space matrix whose rows and columns correspond to the Rho and Theta values in the equation, respectively.

Peak values in this matrix represent potential lines in the input image. Operational parameters can be modified by entering values in a dialog box. For example, the Theta resolution parameter specifies the spacing in radians of the Hough transform bins along the Theta axis.

The Rho resolution parameter specifies the spacing of the Hough transform bins along the Rho axis. The BW port is used for inputting a matrix that represents a binary image using a Boolean data type.

The block outputs a Hough parameter space matrix and has optional Theta and Rho ports that output vectors of Theta and Rho values. The Hough, Theta and Rho ports may be set to double-precision floating-point, single-precision floating-point, or fixed-point data types.
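To make the peak-picking step concrete, here is a minimal C sketch that scans such a parameter-space matrix for its strongest cell, reusing the accumulator dimensions assumed in the earlier sketch. A production version would typically also apply a threshold and suppress cells adjacent to a detected peak.

/* Find the strongest accumulator cell; its indices identify the
   (Rho, Theta) bin of the most prominent line candidate. */
void find_peak(const unsigned int acc[N_RHO][N_THETA],
               int *best_r, int *best_t)
{
    unsigned int best = 0;
    *best_r = *best_t = 0;
    for (int r = 0; r < N_RHO; r++) {
        for (int t = 0; t < N_THETA; t++) {
            if (acc[r][t] > best) {
                best = acc[r][t];
                *best_r = r;
                *best_t = t;
            }
        }
    }
}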

If the lane position given by the Hough Transform block changes abruptly with respect to its values in the previous frame, the Rho Theta Correction subsystem discards the current position and keeps the previous position. The maximum allowable variations in Rho and Theta with respect to their previous values are:

Abs(Rho(current) – Rho(previous)) = 30 pixels
Abs(Theta(current) – Theta(previous)) = 10°
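In C, this validation rule reduces to a simple threshold check. The limits come from the article; the function and variable names are illustrative.

#include <math.h>

#define MAX_DRHO   30.0   /* pixels  */
#define MAX_DTHETA 10.0   /* degrees */

/* Keep the new (Rho, Theta) estimate only if it is close to the
   previous frame's values; otherwise fall back to the previous
   estimate. Theta is assumed to be in degrees here to match the
   10-degree limit. */
void rho_theta_correct(double *rho, double *theta,
                       double prev_rho, double prev_theta)
{
    if (fabs(*rho - prev_rho) > MAX_DRHO ||
        fabs(*theta - prev_theta) > MAX_DTHETA) {
        *rho = prev_rho;
        *theta = prev_theta;
    }
}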

Once we have the lane's parameters in the Hough parameter space, we can map them back to the Cartesian image space. We can also find the line's points of intersection with the image boundary lines using the Hough Lines block. The Line Drawing and Image Construction block overlays these coordinates onto the reduced video image output from the In Lane Edge block.
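As an illustration of the mapping the Hough Lines block performs, this sketch solves the line equation for y at the left and right image borders to obtain two drawable endpoints. It assumes Theta is in radians and sin(Theta) is not near zero (i.e., the line is not vertical), which holds for typical lane markers viewed from a forward-facing camera.

#include <math.h>

/* Given Rho = x cos(Theta) + y sin(Theta), solve for y at x = 0
   and x = width - 1 to get two endpoints for line drawing. */
void hough_to_endpoints(double rho, double theta, int width,
                        int *y_left, int *y_right)
{
    double s = sin(theta);
    double c = cos(theta);
    *y_left  = (int)(rho / s);                      /* at x = 0         */
    *y_right = (int)((rho - (width - 1) * c) / s);  /* at x = width - 1 */
}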

Refining concept
The lane-tracking example shows how developers can quickly create executable specifications to prove concepts and validate and optimize algorithms in the early stages of design. You can run simulations at any point to evaluate performance by viewing the video stream in real time. You can then add, subtract, or move blocks or change parameters and immediately assess the impact of these changes.

In particular, you can refine the model by adjusting parameters and viewing the effect on system performance on the video monitor. The trade-off between line-detection accuracy and system performance can be optimized in a matter of minutes.

To adjust the desired parameter, the designer simply opens the Hough transform block, changes the threshold values and views the performance on the video monitors and frame counters (Figure 5, below).

Figure 5: To adjust the desired parameter, the designer simply opens the Hough transform block, changes the threshold values and views the performance on the video monitors and frame counters.

From floating to fixed point
The lane-detection system was designed in floating point. When programming in C, the change from floating-point to fixed-point arithmetic can take 25 to 50 percent of the total design time.

Using model-based design, this change simply involves changing parameters in various blocks. Design time is saved because each block inherits its data type from the preceding block, and a block's data type automatically updates if the preceding block's data type changes.
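To show what the change means at the code level, here is a minimal sketch of Q15 fixed-point arithmetic, one common 16-bit format on fixed-point DSPs. The Q15 scaling is an assumption for illustration; the generated code's actual word lengths and scalings follow the block parameters.

#include <stdint.h>

/* Convert a float in [-1.0, 1.0) to Q15 (16 bits, 15 fractional). */
static int16_t float_to_q15(float f)
{
    return (int16_t)(f * 32768.0f);
}

/* Multiply two Q15 values: the 32-bit intermediate preserves
   precision, and >> 15 restores the Q15 scaling. */
static int16_t q15_mul(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b) >> 15);
}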

So we started by changing the video source block to unsigned 8-bit integers. Once the system is fully defined, we generate C code (see code listing below) and run the design on the Texas Instruments TMS320 DM642 EVM Video/Imaging Fixed-Point DSP, an evaluation platform for prototyping video systems.

This board contains the needed peripherals, such as ADCs, an advantage when used during the prototype phase. This means that a video input can simply be plugged into the board.

Listing 1: Once the system is fully defined, we generate this C code and run the design on the TMS320 DM642 EVM Video/Imaging Fixed-Point DSP.

To target the DM642, the design engineer simply changes drivers to optimize the model for TI's Real-Time Data Exchange (RTDX) application. RTDX provides bidirectional communication between the host-side Code Composer Studio and the target application. Embedded Target for TI's C6000 Platform generates a C-language real-time implementation of the Simulink model that supports on-chip and onboard peripherals.

The Link for Code Composer Studio is used to transfer data between Simulink and TI's Code Composer Studio IDE. It controls Code Composer Studio during testing and debugging, and enables real-time exchange while the target application is running. Using Real-Time Workshop, we automatically generate embeddable ANSI/ISO C code from the lane-detection model.

After validating the prototype's performance, real hardware is used to provide more realistic measurements of response time by plugging a video camera into the board and connecting the board's output to a monitor. With model-based design, this is accomplished by simply going back to the model and changing a few drivers. You can use the model to validate the production hardware by generating a test harness to compare results from the model with results from the physical prototype.
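The comparison at the heart of such a test harness can be as simple as the following C sketch, which checks a frame captured from the hardware against the corresponding simulated frame; the tolerance value is illustrative.

#include <stdlib.h>

/* Count pixels where the hardware output deviates from the
   simulated output by more than a small tolerance. */
int count_mismatches(const unsigned char *sim, const unsigned char *hw,
                     int n_pixels, int tolerance)
{
    int mismatches = 0;
    for (int i = 0; i < n_pixels; i++) {
        if (abs((int)sim[i] - (int)hw[i]) > tolerance)
            mismatches++;
    }
    return mismatches;
}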

Model-based design streamlines the design of high-performance, embedded video-based active safety systems. You can quickly generate working proof-of-concept designs. Moreover, you can conduct rapid design iterations and parameter optimization through a unified design, simulation and test environment.

It maintains an executable specification that easily manages changes and enables a hierarchical understanding of the system. Automatic code generation eliminates the time and errors involved in hand coding and simplifies the process of targeting new hardware or moving from floating- to fixed-point designs.

David Jackson is Product Marketing Manager for Video and Signal Processing at The MathWorks Inc. He can be contacted at dave.jackson@mathworks.com.

