Editor’s note: Two years ago, Lauro Rizzatti talked with Thomas Delaye, product engineering director at Siemens EDA, about applying machine learning to hardware emulation. (See: Can AI Help Manage the Data Needed for SoC Verification?) Rizzatti recently asked Delaye for an update. What follows is a condensed version of their conversation.
Lauro Rizzatti: Fast forward, what has happened over the past two years?
Thomas Delaye: Since we spoke last, the situation has evolved rapidly, and what was uncertain two years ago has become obvious today. Verification is a complex and endless task. Engineers develop tests, run them, fix the design under test [DUT], and run tests again. With this cycle, the hardware emulator generates massive amounts of data. Access to that data can uncover ways to improve the productivity of emulation. As a provider of hardware-assisted verification solutions, our aim is to help engineers interpret the data to drive better emulation results. In our previous interview, I pointed out that machine-learning [ML] algorithms can be used to investigate the DUT based on verification results or design behavior.
Rizzatti: What are the results?
Delaye: I’ll use the Veloce Strato+ platform from Siemens EDA as an example. We introduced updates and new features, including a program for handling data analytics. Capabilities like this are the entry point into a new era in SoC verification. Massive amounts of data generated by the emulator, combined with data generated by the engineering team, improve overall verification productivity and efficiency.
In the context of emulation, a knowledge database can be built from compilations and emulation runs, and the emulator learns from repeating hundreds of those runs. The collected information can be processed after each session and recommendations generated for the user.
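The loop Delaye describes, record statistics from each run, then mine the history for recommendations after a session, can be sketched in a few lines. This is a deliberately simplified illustration, not the Veloce data-analytics program; every class name, field, and threshold here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class RunRecord:
    # Hypothetical per-run statistics an emulator might log.
    design: str
    compile_minutes: float
    partitions: int        # chips the design was split across
    emulation_mhz: float   # achieved emulation clock rate

@dataclass
class KnowledgeBase:
    runs: list = field(default_factory=list)

    def record(self, run: RunRecord) -> None:
        self.runs.append(run)

    def recommendations(self, design: str) -> list:
        """Compare the latest run against the design's history and
        flag regressions worth a user's attention."""
        history = [r for r in self.runs if r.design == design]
        tips = []
        if len(history) >= 2:
            prev, last = history[-2], history[-1]
            if last.partitions > prev.partitions:
                tips.append("Partition count grew; review logic placement.")
            if last.emulation_mhz < 0.9 * prev.emulation_mhz:
                tips.append("Emulation clock dropped >10%; inspect critical paths.")
        return tips

kb = KnowledgeBase()
kb.record(RunRecord("soc_top", 42.0, 8, 2.5))
kb.record(RunRecord("soc_top", 40.0, 9, 2.1))
print(kb.recommendations("soc_top"))  # flags both regressions
```

The point of the sketch is the shape of the system: learning stays local to one customer's run history, and the output is advice to the engineer rather than an opaque automated action.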
The data-analytics functionality is just the start. The program will be enhanced with experience and knowledge, and the entire infrastructure has the potential to improve greatly over time.
Rizzatti: Can you give us an example of an ML application in emulation?
Delaye: When engineers deploy an emulator, they proceed through three steps: compilation, execution or runtime, and debug. Each of these steps has productivity inefficiencies that should be minimized.
The easiest is compilation. In the first iteration of data analytics, the tool looked at the characteristics of the design, not its functionality but rather its structure: the number of flops, the interconnection network among the flops, and how they are laid out in the design. All this information is fed to a data-analytics program to improve the partitioning of the logic across the array of emulator-on-chip devices in the emulator.
This is accomplished because we [Siemens EDA] manage the entire compilation flow. The process advances in subsequent passes, improving the partitioning after each pass. That’s the first stepping stone of data-analytics improvement. For example, during compilation, an engineer may find that a block of fast logic was split across two chips instead of one, and see how that split impacts the emulator’s runtime performance.
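The partitioning goal described above, keeping tightly interconnected flops on the same chip so fast logic avoids slow chip-to-chip hops, can be illustrated with a toy greedy algorithm. This is a sketch for intuition only; a real emulator compiler uses far more sophisticated, timing-aware partitioners, and all names below are hypothetical:

```python
from collections import defaultdict, deque

def partition(flops, edges, capacity):
    """Greedy BFS partitioning: flood-fill connected flops into a chip
    until it reaches capacity, then start the next chip. Connected
    clusters therefore tend to land on one chip."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    unassigned = set(flops)
    chips = []
    while unassigned:
        chip = []
        queue = deque([next(iter(unassigned))])  # arbitrary seed flop
        while queue and len(chip) < capacity:
            f = queue.popleft()
            if f not in unassigned:
                continue
            unassigned.remove(f)
            chip.append(f)
            queue.extend(n for n in adj[f] if n in unassigned)
        chips.append(chip)
    return chips

# One tightly connected chain of four flops plus a loosely coupled pair:
flops = ["f0", "f1", "f2", "f3", "f4", "f5"]
edges = [("f0", "f1"), ("f1", "f2"), ("f2", "f3"), ("f4", "f5")]
print(partition(flops, edges, capacity=4))
```

With a per-chip capacity of four, the four-flop chain always ends up together on one chip and the pair on another; a naive split of that chain across two chips is exactly the kind of inefficiency the data-analytics pass would flag.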
Rizzatti: You mentioned three phases in deploying an emulator: compilation, runtime, and debug. Can you comment on the other two phases?
Delaye: Runtime is tricky because it depends on how the design is exercised, which is under user control. That means the emulator has to analyze behavior across a collection of designs, a longer and more difficult process than the one used for compilation. Debugging is even more complex because the emulator needs to gain a better understanding of what is required and important for debug.
Rizzatti: In our previous conversation, you expressed concern that when you push the envelope too far into deep learning, you end up reverse-engineering a design, a no-no for all designers. What is your position today?
Delaye: This is still a concern today. A hardware emulation vendor like Siemens EDA clearly needs to preserve the user’s intellectual property; this is at the core of our thought process. Learnings derived from a customer’s data are specific to that customer and never leave the customer’s network environment. Data from a customer’s designs needs to be applied exclusively to future activities of that customer. What is important is a close collaboration with engineers, a partnership to understand their needs. Our objective is to provide them with the means to do a better job.
Rizzatti: In conclusion, what does the future hold?
Delaye: So much is left to be done in automation. For example, offering visualization to understand what is happening in the DUT is going to be a huge plus.
From a system standpoint, a benefit would be the ability to spot issues or inefficiencies, whether they are power-, performance-, security-, safety-, or even functionality-related. Through data visualization, for example, engineers may pinpoint a problem somewhere in the DUT that appears as a spike, something the tool cannot interpret but the engineers can.
I believe that all emulation vendors are working on this. The name of the game is going to be knowing what is meaningful for the user.
We are entering an era of machine-learning–driven emulation, a time when the emulator becomes smarter by learning from previous activities. That learning then influences future deployment with a smooth, efficient, and cost-effective mechanism. We’re still learning, but we are off on the right foot and have a clear path ahead of us.