Vision-based parking guidance
Parking a vehicle in a parking space or garage is an essential skill for drivers, yet it remains difficult for some. Maneuvering a vehicle into a space with limited dimensions, or with other vehicles or obstacles around it, is hard for most drivers. Even an experienced driver can find parking in a small parallel parking space unpleasant.
In the past few decades, cameras and the associated embedded systems have become widely available and affordable, and have therefore been used for vehicle driving assistance functions such as lane departure warning, forward collision warning, and blind spot detection.
In current vehicles, the parking problem can be addressed by an around-view monitor system or a rear guidance monitor and detection system, both of which require only cameras and an embedded system.
We instead propose a vision-based parking guidance system that helps drivers park their cars using only embedded hardware and a wide-angle camera to capture images for analysis, without a steering sensor. It is designed for low-cost implementation and is suitable for used cars and after-market installation. The proposed image-based parking guidance (IPG) system estimates the direction of the front wheels directly from images; the hardware consists of an ARM-based embedded system and a wide-angle camera.
The camera is mounted on the rear of the vehicle to capture sequential images of the ground area just behind it. On the software side, an offline process and an online process are constructed in sequence: the offline process calibrates the 3D position of the camera with respect to the ground coordinate system, and the online process estimates the vehicle trajectory in that coordinate system.
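One common way to realise such an offline ground calibration is to estimate a homography from image points to metric ground coordinates via the direct linear transform (DLT). The sketch below assumes four marked ground points with known metric positions; the point values and function name are illustrative, not from the paper.

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT method.

    src_pts, dst_pts: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # The homography is the null-space direction of A: the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Illustrative correspondences: image corners of a known ground rectangle
# mapped to metric ground coordinates (metres).
img_pts = np.array([[120.0, 400.0], [520.0, 400.0],
                    [200.0, 250.0], [440.0, 250.0]])
gnd_pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 3.0], [2.0, 3.0]])
H = homography_dlt(img_pts, gnd_pts)
```

With H estimated once offline, every online frame can be warped into the top view by the same matrix.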
In the system, input images are first transformed into top-view images by a homography transformation. Corner feature points are then extracted from two consecutive images and matched against each other.
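A minimal sketch of the corner-matching step, assuming corners have already been detected in two consecutive top-view frames and are matched by comparing small intensity patches with a sum-of-squared-differences (SSD) score; the function name and patch size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def match_corners(corners_a, corners_b, frame_a, frame_b, patch=4):
    """Match corners between two consecutive top-view frames by SSD
    over small intensity patches.

    corners_*: (N, 2) integer (row, col) corner locations, away from borders.
    frame_*: 2-D grayscale images. Returns a list of (idx_a, idx_b) pairs.
    """
    def patch_at(img, r, c):
        return img[r - patch:r + patch + 1, c - patch:c + patch + 1].astype(float)

    pairs = []
    for i, (ra, ca) in enumerate(corners_a):
        best_j, best_ssd = -1, np.inf
        for j, (rb, cb) in enumerate(corners_b):
            d = patch_at(frame_a, ra, ca) - patch_at(frame_b, rb, cb)
            ssd = float((d * d).sum())
            if ssd < best_ssd:
                best_j, best_ssd = j, ssd
        pairs.append((i, best_j))
    return pairs
```

A brute-force nearest-patch search like this is adequate for the small corner counts typical of a ground patch; the least-squares pruning described next then discards pairs inconsistent with the dominant motion.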
The feature-point pairs are further pruned by a least-squares error metric. The remaining pairs are then used to estimate the vehicle motion parameters, with an isometric transformation model based on the Ackermann steering geometry describing the vehicle motion. Finally, the vehicle trajectory is estimated from these motion parameters, and the parking guidance lines are drawn along the estimated trajectory.
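Such a motion estimate can be sketched as a 2-D rigid (isometric) fit between the matched ground points, followed by the Ackermann relation between the turning radius and the front-wheel angle. The function names and the wheelbase value below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def rigid_motion_2d(p, q):
    """Least-squares isometric transform (rotation R, translation t) with
    q ~= R @ p + t, from matched ground-plane points (2-D Kabsch/Procrustes).

    p, q: (N, 2) arrays of matched points in the two frames.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)              # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t

def front_wheel_angle(theta, arc_len, wheelbase=2.6):
    """Ackermann model: turning radius = arc_len / theta and steering
    angle phi = atan(wheelbase / radius). The wheelbase (metres) is an
    assumed example value."""
    if abs(theta) < 1e-9:
        return 0.0                          # driving straight
    radius = arc_len / theta
    return float(np.arctan(wheelbase / radius))
```

Accumulating (R, t) frame by frame yields the vehicle trajectory in the ground coordinate system, and the recovered rotation angle together with the travelled arc length gives the front-wheel direction without a steering sensor.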