Self-driving vehicles - are we nearly there yet? - Embedded.com

Google's self-driving cars have accomplished close to two million miles of motoring with a human “driver” merely supervising. And these are not the only self-driving vehicles mixing with ordinary road users on streets across the world. Volvo has trialed vehicles with an autopilot in its hometown of Gothenburg in Sweden, and plans to extend the program to the UK. Another company, nuTonomy, has been testing robotic taxis in Singapore since April 2016, and also has projects ongoing in the US and the UK.

As the ultimate destination for autonomous driving technologies, self-driving vehicles can potentially reduce road traffic accidents. They also promise greater access to independent mobility for groups like non-car owners and the elderly. In fact, these technologies are already permeating an increasing number of cars in the market today, supporting driver-assistance systems that are rapidly becoming expected features. These include parking assistance, autonomous braking, intersection assistance, collision avoidance, lane departure warning, and pedestrian detection.

A major enabler of this trend towards end-to-end automation via increasingly assisted driving is the continued Moore's-law advancement of semiconductor integration. This has not only made available the computing power needed to run the complex algorithms at the heart of these autonomous systems; highly integrated automotive ICs also hold the key to delivering systems that are small enough for use in today's vehicles, meet the high reliability demanded by the automotive industry, and do so at an economically viable price. The latest devices on the market significantly reduce the number of components needed to build the subsystems that are essential for autonomous driving, including navigation, motion sensing, and situational awareness using a combination of technologies such as video, LIDAR, and radar.

Navigation
Navigation and guidance typically rely on a Global Navigation Satellite System (GNSS) such as GPS (the Global Positioning System). Present position, speed, and heading are computed from signals received from at least four satellites of the GPS constellation, which comprises roughly 30 operational satellites in medium Earth orbit. Location accuracy can be on the order of one meter, depending on factors such as atmospheric conditions and multipath effects. Although a human driver can hop in a car and start driving almost immediately, a self-driving vehicle may need to wait until the GPS receiver has computed its first fix before departing; establishing an initial position typically takes 30 to 60 seconds from a warm start, and can take longer from a cold start.
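
The position fix itself is essentially a trilateration problem: given range estimates to four or more satellites at known positions, the receiver solves for its own coordinates. The sketch below illustrates the idea with hypothetical satellite coordinates and synthetic ranges, and omits the receiver clock-bias term that a real receiver must also solve for.

```python
import numpy as np

# Simplified GNSS trilateration sketch. Satellite positions and the "true"
# receiver position are hypothetical; a real receiver also solves for its
# clock bias (a fourth unknown) and works with pseudoranges.

sats = np.array([            # satellite positions in km (illustrative)
    [15600.0,  7540.0, 20140.0],
    [18760.0,  2750.0, 18610.0],
    [17610.0, 14630.0, 13480.0],
    [19170.0,   610.0, 18390.0],
])
truth = np.array([6378.0, 0.0, 0.0])            # receiver on the equator, km
ranges = np.linalg.norm(sats - truth, axis=1)   # synthetic measured ranges

pos = np.zeros(3)                    # initial guess: Earth's center
for _ in range(10):                  # Gauss-Newton iterations
    diff = pos - sats
    est = np.linalg.norm(diff, axis=1)          # predicted ranges
    J = diff / est[:, None]                     # Jacobian d(range)/d(pos)
    delta, *_ = np.linalg.lstsq(J, ranges - est, rcond=None)
    pos = pos + delta

print(np.round(pos, 3))              # should converge to the true position
```

A production receiver solves for four unknowns (x, y, z, and clock bias) from pseudoranges, typically inside a filter rather than a bare Gauss-Newton loop.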

GPS subsystems are now available as sophisticated System-on-Chip (SoC) ICs or multi-chip chipsets, which require only power and an antenna and include an embedded, application-specific compute engine to perform the intensive calculations. Although many of these ICs have an internal RF preamplifier for the 1.575GHz GPS L1 signal, designers may opt to put the antenna on the roof with a co-located low-noise amplifier (LNA) RF preamplifier, and then locate the GPS circuitry in a more convenient location within the vehicle. The antenna must have right-hand circular polarization (RHCP) characteristics to match the polarization of the GPS signals, and can be a ceramic-chip unit, a small wound stub design, or some other configuration.

An example of a GPS module is the RXM-GPS-F4-T from Linx Technologies. This 18mm × 13mm × 2.2mm surface-mount unit requires a single 1.8V supply at 33mA, and can acquire and track up to 48 satellites simultaneously (more channels allow the receiver to track more satellites at once, yielding better results with fewer dropouts).

While GPS is an essential function for autonomous vehicles, it's not sufficient by itself. The GPS signal is vulnerable to radio interference and can also be blocked by urban canyons and tunnels. Outages can last for a few seconds or many minutes, sometimes even longer. However, accurate and continuous navigation information has become essential to today's road users, not only to ensure the best possible driving experiences but also to maintain delivery schedules, keep track of assets in the field, and aid emergency services.

To supplement the GPS, the autonomous vehicle can use inertial guidance, which requires no external signal of any type. Automotive inertial measurement has been realized thanks to advances in MEMS (Micro-Electro-Mechanical Systems) technology, which enables the 3-axis accelerometer, gyroscope, and magnetometer needed for displacement, rotation, and attitude sensing to be produced as tiny discrete devices or as part of an integrated Inertial Measurement Unit (IMU) that can be housed in a standard IC package.

The IMU cannot detect absolute position, only the motion of the vehicle. In the event of blocking or degradation of the GPS signal, the vehicle's position is calculated by dead-reckoning using the last known position coupled with IMU data. Firmware running on the GPS receiver is responsible for fusing the GPS and sensor data to continuously calculate the vehicle's location. Recently there have been important developments in three-dimensional dead reckoning, which allows the navigation system to maintain accuracy on multi-level highways and when entering and leaving parking garages at venues such as airports and shopping malls.
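As a rough illustration of dead reckoning, the sketch below (all names and values hypothetical) propagates a last known 2-D position through wheel-speed and gyro yaw-rate samples; a real system fuses these measurements with GPS in a Kalman filter rather than integrating them open-loop.

```python
import math

# Minimal 2-D dead-reckoning sketch (illustrative assumptions throughout):
# when the GPS signal drops out, propagate the last known fix using
# wheel-speed and gyro yaw-rate samples taken every dt seconds.

def dead_reckon(x, y, heading, samples, dt=0.1):
    """Propagate (x, y, heading) through (speed m/s, yaw-rate rad/s) samples."""
    for speed, yaw_rate in samples:
        heading += yaw_rate * dt             # integrate gyro yaw rate
        x += speed * dt * math.cos(heading)  # advance along current heading
        y += speed * dt * math.sin(heading)
    return x, y, heading

# Drive straight east at 20 m/s for 5 s after losing the GPS fix:
samples = [(20.0, 0.0)] * 50
print(dead_reckon(0.0, 0.0, 0.0, samples))  # ends ~100 m east of the fix
```

Because small gyro and speed errors accumulate with every step, dead-reckoned position drifts over time, which is why the GPS fix is re-applied as soon as it returns.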

Multi-sensor situational awareness
Equally important to increasingly advanced autonomous-driving features — and, ultimately, to self-driving vehicles — is awareness of other road users and pedestrians, road furniture, markings, junctions, signage, and any objects that may require a response from the vehicle, such as an obstruction that calls for avoiding action.

Several technologies, including video, radar, and LIDAR (Light Detection and Ranging), are viable. Vehicles on the road today use a combination of sensing techniques to detect hazards at long range or close to the vehicle and inform the driver while — at the same time — anticipating any potentially necessary responses such as braking so as to minimize the vehicle's reaction time. Combining data from multiple sensors also helps avoid dependence on any single subsystem or technology so as to offer the best possible performance and reliability.

As far as video is concerned, effective visual imaging places great demands on processing complexity and throughput. Heavy image processing is needed to make sense of captured scenes, and depth perception is required in addition to basic imaging. Moreover, lighting conditions, shadows, and other factors can challenge the system's ability to interpret captured images correctly. Figure 1 shows how an automotive vision system can be reduced to a small number of components by combining TI's high-performance DaVinci digital media processor with high-speed SerDes devices and minimal additional components.


Figure 1. Automotive vision system developed and
AEC-Q100 certified by TI (Source: Mouser)

The Google self-driving prototypes use cameras for near vision and for reading road signs and traffic lights, plus a rooftop-mounted multi-laser LIDAR system that can construct 3D images and calculate the range to distant objects. Laser pulses of short duration help to maximize depth resolution, and the detected reflections are used to create a 3D “point cloud.” Subsequent processing performs object identification, motion-vector determination, and collision prediction, and calculates avoidance strategies. The LIDAR unit is well suited to “big picture” imaging and provides the needed 360-degree view. Google has reported on the work accomplished to “teach” its vehicles how to detect vulnerable road users such as cyclists, including recognizing cycles of many different types, responding appropriately to hand signals, and anticipating cyclists' actions, such as pulling out to avoid a parked vehicle.
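The range measurement behind each point in the cloud is simple time-of-flight arithmetic: the pulse travels to the target and back at the speed of light, so range = c × t / 2. A minimal sketch with an illustrative round-trip time:

```python
# Time-of-flight ranging, the principle behind pulsed LIDAR (and pulsed
# radar): distance = speed of light * round-trip time / 2.

C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Distance to a reflecting target from the echo's round-trip time."""
    return C * round_trip_s / 2.0

# A pulse echo arriving after 667 ns implies a target roughly 100 m away:
print(round(tof_range_m(667e-9), 1))
```

The short pulse duration matters because two surfaces can only be resolved in depth if their echoes do not overlap in time.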

Radar completes the picture
Front- and rear-facing radar sensors built into the fenders provide the best information when the action is close to the vehicle, such as when parking, changing lanes, or driving in dense traffic. 24GHz radar is widely used for short- and mid-range driver-assistance systems such as collision avoidance, blind-spot detection, self-parking, and lane-departure warning. Intensive integration helps realize such systems within the tight space constraints of in-fender mounting, relying on highly featured ICs such as the Analog Devices AD8283, a 6-channel radar receive-path Analog Front End (AFE) IC with on-chip filtering and ADC. Figure 2 shows a simplified block diagram of a single channel of this device. Techniques such as using printed-circuit tracks to form part of the antenna are also used to minimize overall solution size.


Figure 2. One of the six channels of the AD8283 radar
AFE for handling reflected pulse signals (Source: Mouser)
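Alongside range, radar can measure closing speed directly from the Doppler shift of the returned signal, f_d = 2·v·f_c/c. A minimal sketch at the 24GHz carrier (values illustrative):

```python
# Doppler relation used by automotive radar (illustrative sketch): a target
# closing at radial speed v shifts the carrier by f_d = 2 * v * f_c / c.

C = 299_792_458.0    # speed of light, m/s
F_CARRIER = 24.0e9   # 24 GHz short/mid-range automotive band

def radial_speed_mps(doppler_hz):
    """Closing speed implied by a measured Doppler shift."""
    return doppler_hz * C / (2.0 * F_CARRIER)

# A 4.8 kHz Doppler shift at 24 GHz corresponds to roughly 30 m/s (~108 km/h):
print(round(radial_speed_mps(4800.0), 2))
```

This direct velocity measurement is one reason radar complements video and LIDAR, which must infer motion by differencing successive frames.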

Going forward, automakers are adopting 77GHz radar for systems such as Adaptive Cruise Control (ACC), which demands longer range. The 77GHz band permits higher transmit power, and a 77GHz radar can achieve a given beamwidth with a much smaller antenna; it can also support the features that formerly employed 24GHz radar. Early in 2016, STMicroelectronics announced a multi-channel automotive-radar IC that integrates three 77GHz transmitters and four receivers on a single chip to deliver enhanced object recognition and resolution.
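The antenna-size advantage follows directly from the wavelength, λ = c/f: roughly tripling the carrier frequency shrinks λ, and hence the antenna aperture needed for a given beamwidth, by about the same factor. A quick comparison:

```python
# Wavelength comparison for the two automotive radar bands. Antenna
# dimensions scale with wavelength, so 77 GHz allows a much smaller
# antenna than 24 GHz for comparable beam characteristics.

C = 299_792_458.0  # speed of light, m/s

for f_ghz in (24.0, 77.0):
    wavelength_mm = C / (f_ghz * 1e9) * 1000.0
    print(f"{f_ghz:g} GHz -> wavelength {wavelength_mm:.1f} mm")
```

At 24GHz the wavelength is about 12.5mm, while at 77GHz it is about 3.9mm, which is what makes compact in-fender and behind-bumper mounting practical.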

Conclusion
The increasingly sophisticated driver-assistance systems that feature in today's new vehicles are paving the way towards fully autonomous motoring that will require no human intervention from the beginning to the end of the journey. Systems on the market today tend to augment the driver's awareness, to ease the burdens of driving, and to improve safety, while some vehicles are capable of taking complete control for short periods. Ongoing trials of self-driving vehicles are underway on public roads around the world. Favorable results will help to build public confidence and ensure end-user acceptance of this new form of travel. Greater safety, increased mobility, and environmental sustainability can all be achieved.

Driving the revolution, highly integrated semiconductors and sensors deliver important answers to the automotive industry's demands for extremely small size, high reliability, and low cost.

Rudy Ramos is the project manager for the Technical Content Marketing team at Mouser Electronics. Rudy holds an MBA from Keller Graduate School of Management. He has over 30 years of professional, technical, and managerial experience managing complex, time-critical projects and programs in various industries, including semiconductor, marketing, manufacturing, and the military. Previously, Rudy worked for National Semiconductor and Texas Instruments, and ran his own silk-screening business.
