What to Run on Day One of Emulation
- January 14, 2015
- Posted by: Lauro Rizzatti
- Category: 2015
Many of you are probably familiar with Lauro Rizzatti, who has written countless articles on the value of emulation for verifying system-on-chip (SoC) designs and has been an occasional guest blogger here on The Breker Trekker. Lauro recently published an article in Electronic Engineering Times that really caught our attention. We could not possibly agree more with its title, “A Great Match: SoC Verification & Hardware Emulation,” and, as we read through the article, we were very pleased with the points he made.
Emulation involves mapping the RTL chip design into a platform that runs much like an actual chip, albeit considerably more slowly. The industry is not always consistent in its terminology, but generally, if the platform is connected to a software simulator, it is being used as a simulation accelerator. In that case, the design’s inputs and outputs are connected to the simulation testbench much as they would be when running a software simulation of the RTL. In pure emulation, there is no simulator or testbench, so the question becomes what to run on the design.
The goal is usually to run using in-circuit emulation (ICE) mode, in which the inputs and outputs of the design are connected to the target system for the SoC. At this stage, the design is running production software, typically an operating system and applications. This verifies the hardware and software together in a manner that emulates as closely as possible the operation of the actual SoC in the target system. It is expensive to buy enough emulation capacity for large chips, but few would dispute the high value of hardware-software co-verification.
In real life, ICE is not simple. Since the design runs slower than the chip will, either the entire target system must be slowed down or buffers must be provided between the fast system and the slow emulator in order for them to communicate. Booting the operating system is a major step that may take weeks or even months. Lingering bugs in the hardware design must be detected, diagnosed, and fixed. Debug using production software is fiendishly difficult since operating systems and applications are designed to perform user tasks, not verify the design.
The best solution is to have tests specifically crafted to run on the emulator platform as soon as the design is mapped. These tests must be easy to debug so the hardware errors can be quickly fixed in the RTL. These tests must run in “bare metal” mode, without requiring any sort of operating system. Once this set of tests is running without error, the bring-up of the production software and the transition into ICE mode will be faster and much less painful. But project engineers are already swamped, so who will write these tests?
The answer is simple: they must be automatically generated and they must exercise the design much more thoroughly than hand-written tests ever could. To quote from Lauro’s article:
I read that one EDA company was introducing software to eliminate the need to hand-craft different tests for different verification platforms. I knew I needed to learn more. Testbench automation or — in this particular parlance — SoC verification seems like a viable solution. Especially when it can automatically generate multi-threaded, multi-processor, self-verifying C test cases that run on the SoC’s embedded processors on in-circuit emulation (ICE) platforms, FPGA prototypes, and production silicon.
We believe that Lauro had Breker in mind. Our Trek family of products generates test cases that run on every software and hardware platform. These test cases reflect how the chip is used in production, for example, stringing together multiple IP blocks into an end-user scenario. This method is very effective at incrementally picking off the remaining RTL design bugs so that production software will be easier to bring up. Further, these test cases provide real-time debug information making it easy to track down and fix the hardware errors.
Lauro’s quote also touches on the important issue of test portability, a topic that’s risen to the forefront of the industry with an ongoing standardization effort within Accellera. It’s inefficient to write new tests and hard to adapt existing tests for each new platform. From the same graph-based scenario model, our products generate test cases tuned for each platform, from simulation and acceleration through ICE, prototypes, and silicon. We recommend that our test cases be run on Day One. As Lauro said, “SoC verification and hardware emulation are a great match.”
The truth is out there … sometimes it’s in a blog.