Monday, January 19, 2009

Accelerate your verification

Why does my regression testing take so long? How can I reduce the run time of my regressions? Verification engineers usually ask these questions of EDA vendors, and even push them to speed up the simulator as much as possible. Yes, it is possible to tune the simulator engine and gain some performance. But that gain alone can't reduce week-long regressions to hours. After all, your simulator is software executing everything sequentially on the processor.

Suppose you have pushed the simulator's speed to its limit and are running on a load-free LSF farm. If the simulation still feels dead slow, you need to rethink your verification methodology and analyze your simulation process.

Several other factors, discussed below, affect the performance of your simulator.

Design Abstraction
- Behavioral models do not work at the signal level, so they run much faster than RTL and gate-level netlists.
- Well-proven RTL blocks/IPs can be replaced by their functional models at the system level.
- Verilog gate-level netlists simulate faster than VHDL netlists using VITAL.
- Memory modeling: huge memories can be modeled efficiently as sparse, dynamically allocated structures (sketched below). A memory declared as a plain HDL array allocates host RAM for the entire address range, whether the test touches it or not.
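To make the memory point concrete, here is a minimal sketch of a sparse memory model built on a SystemVerilog associative array; the module and its names are my own, not from any particular library. Storage is allocated only for addresses a test actually writes, so even a huge address space costs very little host RAM:

    module sparse_mem #(parameter ADDR_W = 32, DATA_W = 32);

      // Associative array indexed by address: entries are created on
      // demand, instead of allocating the full 2**ADDR_W range up front.
      logic [DATA_W-1:0] mem [bit [ADDR_W-1:0]];

      task automatic write(input bit [ADDR_W-1:0] addr,
                           input logic [DATA_W-1:0] data);
        mem[addr] = data;
      endtask

      // Reads of never-written locations return X, like uninitialized RAM.
      function automatic logic [DATA_W-1:0] read(input bit [ADDR_W-1:0] addr);
        return mem.exists(addr) ? mem[addr] : 'x;
      endfunction

    endmodule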

Testing mode
- Are you running simulations in performance mode or debug mode? Refer to my earlier post, "Slow and Fast simulators".
- Assertions are good for white-box verification, but they slow down simulation.
Assertions at the subsystem/module level can be disabled for SoC verification, especially for regression testing (see the first sketch after this list).
Assertions that verify the interfaces, port connections and system protocols are sufficient to verify the system.
Based on testcase failures, one can rerun a simulation in debug mode with the assertions of the buggy modules enabled.
- Avoid using simulator TCL commands to generate stimuli.
TCL commands like 'force' need read and write access to the design, which again reduces performance; a testbench-driven alternative is sketched after this list.
Enable read/write access selectively, based on need. For example, PLI access to a particular design instance does not require read/write permission over the entire system.
- Avoid dumping log files and post-processing them.
The testbench should be self-checking, flagging mismatches while the simulation runs (see the checker sketch after this list).
- Avoid recompiling the DUT for every testcase. Compile once and select the test at run time (see the last sketch after this list).
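To show how assertion control can work in practice, here is a sketch using the standard SystemVerilog $assertoff/$asserton system tasks; the plusarg name and the instance path tb_top.u_soc.u_sub are hypothetical:

    module assertion_control;
      initial begin
        // In regression mode, silence assertions under the subsystem;
        // interface and protocol checkers elsewhere stay active.
        if ($test$plusargs("REGRESSION"))
          $assertoff(0, tb_top.u_soc.u_sub);
      end
      // When debugging a failure, rerun without +REGRESSION, or
      // selectively re-enable: $asserton(0, tb_top.u_soc.u_sub);
    endmodule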
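For the TCL point, the alternative is to drive the design from a testbench task, which needs no debug read/write access at all; the signals and task below are illustrative, with the DUT instantiation left as a comment:

    module tb;
      logic       clk = 0;
      logic       valid;
      logic [7:0] data;

      always #5 clk = ~clk;

      // dut u_dut (.clk(clk), .valid(valid), .data(data));  // hypothetical DUT

      // Drive the pins from a task instead of 'force' at the TCL prompt.
      task automatic send_byte(input logic [7:0] b);
        @(posedge clk);
        valid <= 1'b1;
        data  <= b;
        @(posedge clk);
        valid <= 1'b0;
      endtask

      initial begin
        valid = 0;
        send_byte(8'hA5);
        send_byte(8'h3C);
        #20 $finish;
      end
    endmodule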
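For self-checking, one minimal pattern is to queue expected responses and compare them on the fly instead of diffing logs afterwards. This checker sketch assumes a simple valid/data response interface; the stimulus side pushes expectations into expected_q via a hierarchical reference:

    module resp_checker (input logic       clk,
                         input logic       rsp_valid,
                         input logic [7:0] rsp_data);

      logic [7:0] expected_q [$];   // filled by the stimulus generator

      always @(posedge clk) begin : chk
        logic [7:0] exp;
        if (rsp_valid) begin
          if (expected_q.size() == 0)
            $error("Unexpected response %0h at %0t", rsp_data, $time);
          else begin
            exp = expected_q.pop_front();
            if (rsp_data !== exp)
              $error("Got %0h, expected %0h at %0t", rsp_data, exp, $time);
          end
        end
      end

    endmodule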
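And for single compilation, a common approach is to pick the test at run time with a plusarg, so one compiled image serves the entire regression; the test names here are placeholders:

    module test_select;
      string testname;

      initial begin
        // One compile, many runs: the test is chosen on the command line.
        if (!$value$plusargs("TESTNAME=%s", testname))
          testname = "smoke_test";              // default test
        case (testname)
          "smoke_test":  run_smoke();
          "random_test": run_random();
          default: $fatal(1, "Unknown test: %s", testname);
        endcase
      end

      task automatic run_smoke();  /* test body */ endtask
      task automatic run_random(); /* test body */ endtask
    endmodule

Each regression run then just passes, for example, +TESTNAME=random_test on the simulator command line.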

Verification Methodologies
- Avoid using HDLs to implement complex testbenches.
Ex: a testbench that needs to generate transactions such as frames, packets, etc.
- Avoid ad-hoc and traditional methodologies.
Ex: C/C++-based protocol checkers and testcases, PLIs, random stimuli created with C functions, etc.
- Use a standard HVL like SystemVerilog, which works seamlessly with HDLs without PLIs. It provides DPI, OOP, assertions, constrained-random and coverage-driven verification (CDV), almost everything you need for verification; a transaction-class sketch follows this list.
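As a taste of what an HVL buys you, here is a sketch of a constrained-random packet transaction in SystemVerilog, the kind of stimulus people used to hand-roll with C functions; the class, its fields and the constraint ranges are illustrative:

    class eth_packet;
      rand bit [47:0] dst_addr;
      rand bit [47:0] src_addr;
      rand bit [15:0] length;
      rand bit [7:0]  payload [];

      // Keep frames in a legal size range and match payload to length.
      constraint legal_len {
        length inside {[64:1518]};
        payload.size() == length;
      }
    endclass

    module gen_demo;
      initial begin
        eth_packet pkt = new();
        repeat (3) begin
          if (!pkt.randomize())
            $error("randomize() failed");
          $display("len=%0d dst=%h src=%h", pkt.length, pkt.dst_addr, pkt.src_addr);
        end
      end
    endmodule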

Most of the time we have to make use of legacy testbenches, and I agree that you can't easily move away from existing verification collateral. But if the project is a long-term one, I urge you to seriously consider re-architecting the testbench around the latest verification technologies. One needs to plan the introduction of new methodologies meticulously; the best approach is to try these technologies out on existing IPs and introduce them step by step.