Smart connectivity solutions enable seamless and immersive AR/VR experiences
Suppose you are the technical lead for a new, cutting-edge project at your company. The vision is clear, the idea is validated, and your schedule looks good, but there are two key components you need to connect together. The problem is that they have different I/O interfaces that can't communicate with each other. As a result, schedules are delayed, deadlines are missed, and significant resources have to be diverted to make these components work together, resulting in cost overruns and a lot of headaches.
Unfortunately, this is an all too common problem for engineers working on cutting-edge technologies. When you are inventing the latest and greatest product, the last thing you want to do is be held up, slowed down, or fail completely because your components don't work together.
The consumer electronics market is constantly evolving, with entirely new categories of devices regularly being created. The last decade alone has seen the introduction of the smartphone, the tablet, and the e-reader, and this trend is set to continue. The latest example is the rapid emergence of the Augmented Reality (AR) and Virtual Reality (VR) segments. In particular, the Head Mounted Display (HMD) has attracted intense interest and participation from a broad range of companies, from traditional hardware manufacturers to social networking companies.
In the battle to win these hypercompetitive new markets, the requirements for size, speed, cost, power consumption, and time-to-market are constantly being pushed. Tools that help bridge different standards and protocols to connect disparate components together free engineers to select the best components for the task at hand, rather than letting interfaces drive design.
The consumer electronics market is constantly innovating, with entire new categories of devices being regularly created (Source: Lattice Semiconductor)
Many technologies have to come together to create a compelling VR/AR experience. From a hardware perspective, much of the innovation in the VR space happens in the Head Mounted Display and controllers that a user wears to experience VR. Advances in high-performance, low-power FPGAs and ASSPs bring a much-needed degree of design flexibility, with significant savings in time, cost, and resources over developing purpose-built chips for each permutation in this developing space. These benefits apply to several areas of the VR/AR experience, including sensors, video display, and connectivity.
Virtual Reality (VR)
As its name suggests, virtual reality tries to replace your perception of reality with a manufactured one. To do this, VR primarily targets a user's vision and hearing (tactile feedback is also an emerging area of development). For VR to be immersive, the experience must target the senses effectively. Therefore, a helmet or sealed goggles paired with earphones are typically used. These devices are commonly called Head Mounted Displays, or HMDs.
Despite their name, HMDs are not only displays; they also contain large numbers of sophisticated sensors. Most of the VR hardware technology development centers around these devices, and there are various types depending on which processor or device is driving the HMD and how the HMD is being powered. A brief summary of the different HMD types is as follows:
- PC/Console Tethered: With a PC/console tethered display, the HMD essentially acts as a monitor replacement, with the PC/console providing all of the processing power. Examples include the Oculus Rift, the HTC Vive, and the Sony PlayStation VR.
- Mobile VR Headset: In the case of a mobile VR headset, a smartphone is inserted into the headset to act as the display and processing engine. The controller is either built into the unit or shipped with the headset. Because of its low cost, simplicity, and heavy promotion, the mobile headset is currently the most popular VR HMD. The Samsung Gear VR and Google's Daydream View are two examples of mobile VR headsets.
- Mobile AIO (All-In-One): With a mobile AIO headset, the HMD has an integrated processor, display, and battery, making it a standalone, all-in-one VR device. Since it is battery-powered and has an onboard processor, it does not need to be tethered to any other device. An example of this HMD type is the Deepoon AIO VR, and many companies are showing prototypes, such as Oculus' Santa Cruz and Intel's Project Alloy.
- Mobile Tethered: In the case of a phone-tethered HMD, both processing and power are provided by a mobile phone, driving a display in the HMD. An example of this type of device is the LG 360 VR, which connects over USB Type-C.
- PC Untethered: While PC/console-based HMDs currently offer the best VR experience because of their higher performance, the cabling is a nuisance, if not a safety hazard. Mobile HMDs free the user from the cable, but their processors' limited performance restricts content to simple games and casual 360-degree videos. TPCAST, a vendor from Beijing, China, has announced a wireless upgrade kit for the HTC Vive that offers the best of both worlds. Wireless VR demands low-latency, high-bandwidth video transmission; TPCAST's solution uses FPGAs and ASSPs from Lattice to deliver near-zero latency, robust non-line-of-sight (NLOS) performance, and wireless transmission of the Vive's 2160 x 1200 display at 90 Hz.
The variety of Head Mounted Display types illustrates one of the challenges of a new market. Many models are being attempted and industry best practices are still being determined. The flexibility of FPGAs is critical to the success of these evolving hardware markets. In the VR/AR space specifically, FPGAs have proven essential in the areas of sensor aggregation, video display bridging, and connectivity.
Gesture and positional tracking sensors
One of the biggest problems in creating a VR experience is vertigo. A lifetime of real-world interaction has turned our bodies into finely tuned reality detectors. Vertigo occurs when those expectations are challenged. Standing on a glass bridge and looking down is a good example, because your sense of touch and balance is telling you there is solid ground under you, but your sense of sight is telling you there is nothing there.
Since VR is trying to create an alternate reality for the user, any delay or mismatch between the user's movements and the virtual projection can contribute to a sense of "VR vertigo." Because of this, VR engines need to track the user's head movements quickly and accurately and send that data back to the processor to generate the appropriate video. A variety of gesture and positional tracking sensors are used to track head, hand, and body movements. Each solution has tradeoffs in portability, accuracy, and cost:
- Accelerometers: The simplest way to track movement is with accelerometers. These can be embedded into the HMD, similar to the technology in mobile phones today. This system is cheap, but not accurate enough to create a truly compelling VR experience.
- Infrared Sensors: Infrared sensors detect pulses from wall-mounted lasers, either directly or after they bounce off reflective dots on the user's body, controller, or HMD. The HTC Vive, which uses Valve's SteamVR tracking, is an example of such a sensor-array tracking system. This approach is significantly more accurate than accelerometers, but can only track where there is a sensor or dot, which generally limits tracking to the HMD and the controller held by the user.
- Multi-Camera Rigs: These are "Kinect" or "Leap Motion"-type stereo or multi-camera arrays that are constantly capturing the user's position. A Kinect-type system is designed to capture the full body, while a Leap Motion-type device is designed to capture a user's gestures. The advantage of camera sensor arrays is that they can track the full body without a custom motion capture suit or reflective dots. However, these systems require significantly more bandwidth and processing horsepower to analyze the data.
Some use cases are as follows:
- Concurrent sensor sampling: To track motion with precision, a sensor array is often needed. Most MCUs, however, do not have enough I/Os and lack the architecture to capture data from many sensors concurrently. FPGAs such as Lattice's iCE40 family, on the other hand, are optimized for low-power, small-form-factor, low-cost operation. In addition to concurrent data capture, designers can choose to perform spatial processing directly in the FPGA (which may require a larger FPGA), or have the FPGA time-stamp the captured data and pass it to an MCU for further processing.
- Multi-Camera sampling: These systems produce significantly more data than an accelerometer- or infrared-based system. A more powerful FPGA, such as the new CrossLink™ programmable ASSP (pASSP) device, is well suited to aggregating the data from multiple cameras. For visual tracking, pASSPs can interface with multiple cameras, and concurrently sample and aggregate multiple video streams for video processors that have limited or incompatible camera interfaces.
- Gesture Tracking: In some use cases, a more integrated gesture tracking solution is needed. High-performance, low-power FPGAs are ideal for aggregating multiple cameras and implementing low-latency gesture tracking algorithms, such as chroma extraction, depth mapping, and spatial calculation.
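The division of labor described above — concurrent capture with hardware timestamps in the FPGA, followed by fusion on an MCU — can be sketched in software. The snippet below is a hypothetical illustration only (the sensor names, sample format, and `capture_concurrent` function are assumptions for this sketch, not a Lattice API): every reading in a capture cycle is tagged with one shared timestamp, so the downstream processor can fuse readings that were taken at the same instant.

```python
import time

def capture_concurrent(sensors):
    """Emulate concurrent capture: tag every sensor reading with one
    shared timestamp, the way an FPGA sensor hub stamps samples in
    hardware before forwarding them to an MCU."""
    t = time.monotonic()  # a single capture instant for the whole array
    return [{"sensor": name, "timestamp": t, "value": read()}
            for name, read in sensors]

# Hypothetical read functions standing in for real accelerometer and
# gyroscope drivers.
sensors = [
    ("accel_x", lambda: 0.02),
    ("gyro_z",  lambda: 1.15),
]

samples = capture_concurrent(sensors)
# All samples share one timestamp, so the MCU can fuse them without
# guessing which readings belong together.
assert samples[0]["timestamp"] == samples[1]["timestamp"]
```

An MCU polling each sensor in turn would instead produce slightly staggered timestamps, which is exactly the skew that concurrent hardware capture avoids.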
An array of sensors is used to track head position and movement in order to create a flawless experience for the user (Source: Lattice Semiconductor)
Gesture tracking with iCE40 Concurrent Sensor Hub (Source: Lattice Semiconductor)
Video display
In VR/AR, the standard models for displaying content on a screen have been upended, and new methods are constantly being tested to deliver the most realistic experience. High resolution and low latency are always desirable traits, but when the display screens are 2-3 inches across and sit just a couple of inches from the user's eyes, high resolution becomes critical. High refresh rates and low latency are likewise essential for combating VR vertigo: if a user turns his or her head and there is a lag before the video reflects that movement, the body registers the discrepancy and is confused. Lowering latency and increasing refresh rates lets head movements be reflected in the video a user sees almost instantly, without jerkiness or stuttering.
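To put rough numbers on the refresh-rate requirement, the arithmetic below computes the per-frame time budget at common VR refresh rates. The ~20 ms motion-to-photon target in the comment is a widely cited industry rule of thumb, not a figure from this article:

```python
# Per-frame time budget at common VR refresh rates.
def frame_budget_ms(refresh_hz):
    """Milliseconds available to produce each frame at a given refresh rate."""
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")

# At 90 Hz, a new frame must be delivered roughly every 11.1 ms, and the
# full sense-render-transmit loop must fit inside a motion-to-photon
# budget commonly cited at around 20 ms to keep the experience comfortable.
```

This is why a wireless link for a 90 Hz headset must add near-zero latency: even a few milliseconds of extra transmission delay consumes a large fraction of the frame budget.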
At the core of creating a realistic VR experience is the ability to display different images to each eye to create the illusion of 3D. Display bridging in VR often involves taking a single video stream and extracting the video for each eye, or conversely, taking the data from two cameras and combining them into a single stream.
Some use cases are as follows:
- Left/Right Video: Mobile application processors (APs) are often used in VR headsets. Some legacy application processors are limited to a single MIPI DSI output. In order to drive separate left eye / right eye displays, a single MIPI DSI output from the AP can be input into an FPGA video bridge, which then splits the video for each display.
- Non-Mobile Display/Micro-Displays: As the VR/AR market is still in its infancy, vendors are actively searching for the optimal display solution. Some AR systems may use high-pixel-per-inch EVF (electronic viewfinder) microdisplays developed for high-end DSLR cameras. These displays often use interfaces such as RGB or LVDS, which are not compatible with the MIPI DSI output found on most application processors. An FPGA video bridge, such as CrossLink, can connect the two incompatible interfaces.
- External Input Bridging: Most application processors used in the VR space lack an interface to accept common external video sources, such as HDMI. In these situations, a video bridge can convert one of these popular inputs to MIPI CSI-2.
- Camera Output: The increased demand for content has sparked the popularity of 360-degree image and video solutions, moving from proprietary professional equipment to low-cost accessories for mobile phones. This promises to provide an explosion of content in which VR users can immerse themselves. Existing application processors may not have enough MIPI CSI-2 inputs or the right interfaces to support the multiple cameras needed in this application.
- Programmable application-specific standard product (pASSP) video bridges can help to merge MIPI CSI-2 streams from multiple cameras into a single MIPI CSI-2 stream to be input into the application processor.
- The same pASSP video bridges can also convert other camera interfaces, including LVDS, SLVS, and RGB, to MIPI CSI-2.
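As a toy model of the left-eye/right-eye split described above, the sketch below divides a side-by-side stereo frame (represented here as rows of pixels) into two per-eye images, the way a video bridge routes half of a single DSI stream to each display. This illustrates the concept only; it is not how a CrossLink bridge is actually programmed, and the frame representation is an assumption of this sketch.

```python
def split_side_by_side(frame):
    """Split a side-by-side stereo frame into left- and right-eye images.
    `frame` is a list of rows; each row is a list of pixels, with the
    left-eye image in the left half and the right-eye image in the right."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

# A tiny 2x4 "frame": the left half holds left-eye pixels, the right
# half holds right-eye pixels.
frame = [["L0", "L1", "R0", "R1"],
         ["L2", "L3", "R2", "R3"]]
left, right = split_side_by_side(frame)
# left  -> [["L0", "L1"], ["L2", "L3"]]
# right -> [["R0", "R1"], ["R2", "R3"]]
```

The merge direction (two camera streams combined into one CSI-2 stream) is the same operation in reverse: concatenating corresponding rows from two sources into a single wider frame.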
Augmented reality/specialty HMD (Source: Lattice Semiconductor)
Connectivity
Many VR displays are connected to an external device; today, the processing power of a PC is still required to create a truly immersive experience. Video connections such as HDMI are well suited to this task, as they support high audio/video bandwidth at very low latency over a long cable (3-5 meters). In some cases, however, particularly for mobile devices, a connection over USB Type-C, which carries video, data, and power in a single cable, can be more convenient.
Going forward, wireless video offers a compelling alternative to the existing market. Without cables, the user has more freedom to move, which can significantly enhance the VR experience.
Some use cases are as follows:
- Video Connectivity: HDMI is the default video-only connection in the VR space. Its speed, high bandwidth, ultra-low latency, and broad market acceptance make it an ideal choice.
- Video + Data + Power: While HDMI is the most common video connectivity option, the standard HDMI cable can be bulky. The HMD often requires power and may also have sensor data that needs to be sent back to the main processing unit. In these cases, USB Type-C solutions that support MHL or DisplayPort to provide video, data, and power in one thin, flexible cable may be appropriate.
- Wireless: Paired with a battery-powered HMD, wireless video allows the greatest freedom of movement for the user. WirelessHD is a compelling technology in this case, providing high-speed digital video at HDMI bandwidths and latencies. For example, the TPCAST Wireless VR accessory for the HTC Vive uses WirelessHD technology.
Time and again, FPGAs have proven to be an essential part of product development in fast-evolving new markets. As manufacturers race to stake their ground in the VR space, high-performance, low-power FPGAs and ASSPs play a key role in developing and connecting parts of the VR toolkit: aggregating and analyzing sensor data, seamlessly displaying 3D video, and connecting the HMD to the computer. As the VR market continues to develop and mature, FPGAs will continue to give designers the flexibility they need to create the best possible VR experience for their users.
Ying Jen Chen is the Senior Business Development Manager at Lattice Semiconductor focusing on emerging consumer applications such as VR/AR, drone and Consumer IoT. Mr. Chen has 18 years of experience in the FPGA industry. Prior to Lattice, he managed the China and Taiwan channel sales for Altera, focusing on market segments ranging from networking and computing to industrial and consumer. Mr. Chen held both technical and business roles during his 15-year tenure at Altera in the U.S. and Taiwan. He received his Bachelor's degrees in Electrical Engineering & Computer Science (EECS) and Materials Sciences & Engineering from the University of California, Berkeley.