Benchmarking with CoreMark

Jack Ganssle, January 22, 2013

Benchmarking is like statistics: it's easy to fiddle with the results. CoreMark strives to make the numbers harder to game. How fast is your CPU?

That, of course, is rather a meaningless question. The amount of work a processor can get done in a period of time depends on many factors, including the compiler (and its optimization level), wait states, background activity such as direct memory access that can steal cycles, and much more. Yet plenty of folks have tried to establish benchmarks to make some level of comparison possible. Principal among these is Dhrystone.

But Dhrystone has problems. Compiler writers target the benchmark with optimizations that may not help developers much but do produce better scores. Much of the execution time is spent in libraries, which can vary wildly between compilers. And neither the source code nor the reporting methods are standardized.

A few years ago the EEMBC people addressed these and other issues with their CoreMark benchmark, which is targeted at evaluating just the processor core. It's small, about 16 KB of code, with little I/O. All of the computations are made at run time, so the compiler can't cleverly solve parts of the problem at build time. CoreMark focuses primarily on integer operations, the sort of control problems embedded systems address.
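To see why run-time computation matters, consider this minimal C sketch (my illustration, not CoreMark's actual source). Because the seed arrives only when the program runs, the compiler can't fold the loop into a precomputed constant:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch, not CoreMark source: the seed comes from the
 * command line at run time, so the compiler cannot constant-fold the
 * loop below into a precomputed answer at build time. */
int main(int argc, char *argv[])
{
    unsigned seed = (argc > 1) ? (unsigned)atoi(argv[1]) : 1u;
    unsigned acc = seed;

    for (int i = 0; i < 10000; i++)
        acc = acc * 1103515245u + 12345u;  /* simple LCG-style update */

    printf("result: %u\n", acc);  /* printing defeats dead-code elimination */
    return 0;
}
```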

The four workloads tested are matrix manipulation, linked-list processing, state machines, and CRC calculation. The output of each stage is input to the next to thwart over-eager compiler writers, as the sketch below illustrates.
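Here's a rough sketch of that chaining idea, assuming a CRC-16/CCITT-style update purely for illustration; CoreMark's actual workloads and polynomial may differ. Each stage consumes the previous stage's result, so the optimizer can't precompute or discard any stage in isolation:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of output chaining, assuming a CRC-16/CCITT-style update;
 * the real benchmark's stages and polynomial may differ. */
static uint16_t crc16_update(uint16_t crc, uint8_t byte)
{
    crc ^= (uint16_t)byte << 8;
    for (int i = 0; i < 8; i++)
        crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                             : (uint16_t)(crc << 1);
    return crc;
}

int main(void)
{
    uint16_t crc = 0xFFFF;
    uint8_t data = 0x42;  /* stand-in for a workload stage's output */

    /* Each "stage" consumes the previous stage's CRC, so the results
     * form a dependency chain the optimizer cannot break apart. */
    for (int stage = 0; stage < 4; stage++) {
        data ^= (uint8_t)(crc & 0xFF);  /* feed previous result forward */
        crc = crc16_update(crc, data);
    }
    printf("final crc: 0x%04X\n", (unsigned)crc);
    return 0;
}
```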

One rule is that every reported result must include the name and version of the compiler used, as well as the compiler flags. Full disclosure, no hiding behind games.

The result has been good news for us. Some of the compiler vendors have taken on CoreMark as the new battleground, publishing their scores and improving their tools to ace the competition. IAR and Green Hills are examples.
