Will EDA-style verification methods enhance autonomous vehicle safety? - Embedded.com

Complex systems — whether a system-on-chip or autonomous vehicles — can frustrate design engineers who, after months of painstaking work, have to go back and verify that the system they just designed actually performs the way they intended.

SoCs and autonomous vehicles (AVs) are both built in a “black box,” which by nature makes it hard to find bugs “hiding in places that you don’t think about,” said Ziv Binyamini, CEO and co-founder of a Tel Aviv-based startup called Foretellix.

In testing and verifying an SoC, two measures are deemed essential: “code coverage,” which tells how thoroughly the stimulus has exercised the code, and “functional coverage,” which lets the user write instrumentation logic that monitors how well the stimulus is covering various functions.
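To make the functional-coverage idea concrete, here is a toy Python sketch in the spirit of a SystemVerilog covergroup. The `Coverpoint` class, the `opcode` coverpoint, and the stimulus are all hypothetical illustrations, not anything from Foretellix's tools: each named bin is a condition the stimulus should hit at least once, and the coverage score reports what fraction of bins were ever hit.

```python
# Toy functional-coverage model in the spirit of a SystemVerilog
# covergroup: each "coverpoint" holds named bins that the stimulus
# should hit at least once. (Illustrative sketch only.)
class Coverpoint:
    def __init__(self, name, bins):
        self.name = name
        self.bins = bins                      # bin name -> predicate over a sample
        self.hits = {b: 0 for b in bins}

    def sample(self, value):
        # Record which bins the current stimulus value falls into.
        for bin_name, predicate in self.bins.items():
            if predicate(value):
                self.hits[bin_name] += 1

    def coverage(self):
        # Fraction of bins hit at least once.
        hit = sum(1 for count in self.hits.values() if count > 0)
        return hit / len(self.bins)

# Hypothetical coverpoint: opcodes driven into a design under test.
opcode_cp = Coverpoint("opcode", {
    "add": lambda op: op == "add",
    "sub": lambda op: op == "sub",
    "mul": lambda op: op == "mul",
})

for op in ["add", "sub", "add", "sub", "add"]:   # stimulus never sends "mul"
    opcode_cp.sample(op)

print(f"opcode coverage: {opcode_cp.coverage():.0%}")  # → opcode coverage: 67%
```

The point of the exercise: raw test volume (five samples) says nothing by itself; the coverage model reveals that the “mul” bin was never exercised at all.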

Foretellix believes that similar coverage-driven disciplines should apply to AVs when car OEMs test safety.

Today, vehicles from tech companies and OEMs are racking up millions of test miles in simulation, on test tracks and on public roads. Last month, Waymo, for example, announced that it has driven more than 10 million street miles and some 10 billion simulated miles.

But here’s the rub:

Does anyone know what, exactly, companies like Waymo, Uber, Cruise and Argo AI are testing? How do they measure test results? What testing scenarios have their AVs experienced?

As Foretellix’s Binyamini sees it, today’s mileage-driven race among AV companies looking to prove the safety of their products lacks “a quantifiable way to measure how much of the scenarios required to prove the safety of an autonomous vehicle have been exercised (covered).”

Moreover, they lack tools that could “provide a rigorous & automated way to uncover unknown risk scenarios and turn them into known,” he noted.

This is where Foretellix sees its opportunity. Founded by a team of verification experts who grew up in the EDA industry, Foretellix is migrating that expertise to the AV world.

For example, just as the EDA industry decades ago developed a high-level hardware description and hardware verification language called SystemVerilog for SoC designers, Binyamini told EE Times that Foretellix is developing Measurable Scenario Description Language (M-SDL) for AV system designers.

M-SDL is currently being “tried out” by a few car OEMs in the United States and Europe, according to Foretellix. After integrating the industry's feedback into the language, the current plan is to release it after the summer, said Binyamini. He also stressed that M-SDL is not proprietary. “This will be made open on GitHub.”

Foretellix is promising that M-SDL will offer “unified metrics” of test results, whether gathered in simulation, on test tracks or on the road. “We are also injecting random testing to see what scenarios still need to be tested.”
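The “inject random testing until the coverage holes close” idea can be sketched in a few lines of Python. This is a toy model, not M-SDL: the scenario parameters (weather, actor, maneuver) are invented for illustration, and the scenario space is the cross product of their values. Random generation keeps running until every combination has been exercised at least once.

```python
import itertools
import random

# Toy coverage-driven scenario generation: the scenario space is the
# cross product of a few hypothetical parameters, and we generate
# random scenarios until every combination ("bin") has been hit.
WEATHER  = ["clear", "rain", "fog"]
ACTOR    = ["pedestrian", "cyclist", "vehicle"]
MANEUVER = ["cut_in", "hard_brake", "run_red_light"]

all_bins = set(itertools.product(WEATHER, ACTOR, MANEUVER))
covered = set()

random.seed(42)                       # reproducible run
runs = 0
while covered != all_bins:
    scenario = (random.choice(WEATHER),
                random.choice(ACTOR),
                random.choice(MANEUVER))
    covered.add(scenario)             # in practice: run the simulation here
    runs += 1

print(f"{len(all_bins)} scenario bins covered in {runs} random runs")
```

Even this tiny example shows why coverage metrics matter: purely random driving revisits common scenarios many times before stumbling onto the last rare combination, so the run count is always well above the 27 bins. Coverage feedback tells you when you are done, and which holes remain if you stop early.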

Coverage-driven verification (Source: Foretellix)

Nexus of EDA and Automotive worlds

Mike Demler, a senior analyst at The Linley Group, cautioned that Foretellix is not building a verification tool for AV system design. Rather, it is proposing “a coverage analysis tool and coverage-driven verification” for AVs, he noted.

While acknowledging that the very idea of “coverage-driven verification” comes from EDA, Demler stressed that “coverage is a tool to check your verification plan, but it’s not a verification tool itself. A coverage tool checks that your test benches cover all the possible faults, or a sufficient number to satisfy a particular signoff criterion.”

So, in Demler’s view, Foretellix’s comparison of M-SDL to SystemVerilog is “a big stretch.” This looks more like “a test plan checker,” he said.

Nonetheless, the founders’ background makes clear that Foretellix is trying to bring technology with deep roots in the semiconductor industry to the automotive industry.

Pentium Pro

For anyone who has lived through the era of growing complexity in chip designs, the designs emerging in autonomous vehicles look almost familiar. Binyamini observed, “These are problems the chip industry already experienced in the 1990s.”

When Intel was developing the Pentium Pro, Binyamini was a design automation engineer on the P6 project. Because the P6 was the first x86 superpipelined, out-of-order, speculative-execution machine, the processor was “extremely complex. It required new verification solutions to deal with that complexity.”

Before the P6 launch, Intel faced the “Pentium bug” crisis, a floating-point division defect in early Pentium processors. The bug, discovered by a professor at Lynchburg College in 1994, was reported by EE Times. By December 1994, Intel had recalled the defective processors, at a cost of almost half a billion dollars. The incident made the electronics industry aware of the near impossibility of finding all of the bugs and problems inside a complex processor.

In 1997, Binyamini joined Verisity, a startup founded in 1995 by Yoav Hollander, a leading expert in VLSI verification. Verisity was billed as one of the world’s first verification companies, tasked with delivering a tool suite for VLSI verification based on a coverage-driven methodology.

Verisity told the semiconductor industry that coverage-driven verification “is the only way to deal with the complexity of chip designs.” At Verisity, Hollander created the “e” verification language, which later became a standard (IEEE 1647).
