Saturday, November 7, 2009

Are you scared of Lay-Offs?

If you say "Of course, everybody is scared of layoffs", then I would say "You are probably wrong". You get this insecure feeling only when you work on outdated technologies and keep doing the same thing you were doing 10 years ago. This happens to young folks too, when they do not spend time updating their knowledge of emerging technologies.

People who learn continuously and try new technologies are treated as STARs in their organisations. They do not worry about recessions and layoffs because they are in huge demand in the industry. These people are SMART. They always think about how they can improve their market value.

If you are working in the VLSI domain, especially in functional verification, you should know about the latest verification methodologies and technologies. Most engineers run regressions and spend most of their time analysing coverage reports. They wrongly assume that they are verifying the chips; actually, they are managing the regressions and reporting the bugs to the designers.

To help you understand how much you know about verification, I would like to ask you a few questions:

[1] Have you ever created a verification plan?
You can't do anything without a plan, whether it is about designing the chip or verifying it. During the planning process we identify things like the key features of the DUV, the beta features, how many assertions are needed, how to validate the DUV protocols, etc.

[2] Have you ever architected a testbench?
Verification engineers mostly use HVLs to implement testbenches. Usually the testbench is composed of various verification components like generators, monitors, scoreboards, receivers, etc.

The architecture of the testbench completely depends on the design and the kind of verification you do.
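As an illustration, here is a minimal sketch of such a class-based testbench skeleton. The transaction type (packet), the component names and the mailbox hookup are my own assumptions for the example, not a prescribed architecture; the driver and monitor sides are implied.

// A minimal sketch of a class-based testbench skeleton. The transaction
// type (packet) and the component names are hypothetical.
class packet;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
endclass

class generator;
  mailbox #(packet) gen2drv;          // hands transactions to the driver

  function new(mailbox #(packet) mbx);
    gen2drv = mbx;
  endfunction

  task run(int num_pkts);
    repeat (num_pkts) begin
      packet p = new();
      void'(p.randomize());           // constrained-random stimulus
      gen2drv.put(p);
    end
  endtask
endclass

class scoreboard;
  mailbox #(packet) mon2scb;          // receives observed transactions from the monitor

  function new(mailbox #(packet) mbx);
    mon2scb = mbx;
  endfunction

  task run();
    packet p;
    forever begin
      mon2scb.get(p);
      // compare p against a reference model here
    end
  endtask
endclass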


[3] Have you created a coverage model?
One can measure the quality of verification by looking at the functional coverage values. Achieving 100% coverage does not guarantee high-quality verification. The quality of your verification completely depends on the completeness of your coverage model.
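To make this concrete, here is a minimal coverage-model sketch. The transaction fields, bins and ranges are illustrative assumptions, not from any particular DUV; the point is that the bins and crosses you define determine what "100%" actually means.

// A minimal coverage-model sketch for a hypothetical bus transaction.
class bus_txn;
  rand bit [1:0] kind;                // 0: READ, 1: WRITE, 2: BURST
  rand bit [7:0] addr;
endclass

class bus_coverage;
  bus_txn txn;

  covergroup cg;
    cp_kind : coverpoint txn.kind {
      bins read  = {0};
      bins write = {1};
      bins burst = {2};
    }
    cp_addr : coverpoint txn.addr {
      bins low    = {[8'h00:8'h0F]};  // corner: lowest addresses
      bins high   = {[8'hF0:8'hFF]};  // corner: highest addresses
      bins middle = default;
    }
    kind_x_addr : cross cp_kind, cp_addr;   // every kind in every address range
  endgroup

  function new();
    cg = new();                       // embedded covergroups are built in the constructor
  endfunction

  function void sample(bus_txn t);
    txn = t;
    cg.sample();
  endfunction
endclass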


[4] Have you defined assertions to validate the DUV protocol?
You cannot verify everything through the data-integrity checks that you do in the scoreboard. You have to define assertions to validate the control-oriented behaviours. One can easily do white-box verification using ABV (assertion-based verification), especially for the critical blocks in the chip.

The biggest challenge of chip-level simulation is identifying the reason for a testcase failure. We spend too much time identifying the cause, i.e. which logic has the bug...
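Here is a small sketch of what such protocol assertions look like, for a hypothetical req/ack handshake; the signal names and the 8-cycle timing are assumptions for illustration only.

// A minimal sketch of protocol assertions for a hypothetical req/ack handshake.
module handshake_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // Once asserted, req must stay high until ack arrives.
  property p_req_held_until_ack;
    @(posedge clk) disable iff (!rst_n)
      req && !ack |=> req;
  endproperty

  // Every request must be acknowledged within 1 to 8 cycles.
  property p_ack_within_8;
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> ##[1:8] ack;
  endproperty

  a_req_held : assert property (p_req_held_until_ack)
    else $error("req dropped before ack");
  a_ack_time : assert property (p_ack_within_8)
    else $error("ack not received within 8 cycles of req");
endmodule

// Such a checker can be bound to the DUV without modifying it, e.g.:
// bind my_duv handshake_checker u_chk (.clk(clk), .rst_n(rst_n), .req(req), .ack(ack));

When an assertion fires at the point of failure, you spend far less time tracing a chip-level testcase failure back to the buggy block.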


[5] Have you created a regression testsuite?
This is more about defining the testcases. One can define different kinds of testcases, such as random tests, corner-case testcases and directed testcases. We create these testcases by changing the seeds, generating different kinds of transactions/scenarios and passing in directed values.
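The sketch below shows how random, corner-case and directed testcases can share one stimulus class. The class, field and seed-option names are hypothetical, and the simulator seed switch is tool specific.

// A sketch of random, corner-case and directed stimulus from one class.
class my_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
endclass

// Corner-case test: extend and constrain the transaction to the address boundaries.
class corner_txn extends my_txn;
  constraint c_corner { addr inside {8'h00, 8'hFF}; }
endclass

module test_top;
  initial begin
    my_txn     rand_t   = new();
    corner_txn corner_t = new();

    // Random test: rerunning with a different seed (the simulator's seed
    // option, e.g. -svseed or +ntb_random_seed, is tool specific) gives
    // different stimulus from the same testcase.
    repeat (100) begin
      void'(rand_t.randomize());
      // drive rand_t into the DUV here
    end

    // Corner-case transactions come from the constrained subclass.
    repeat (10) begin
      void'(corner_t.randomize());
      // drive corner_t into the DUV here
    end

    // Directed test: pass explicit values through an inline constraint.
    void'(rand_t.randomize() with { addr == 8'h00; data == 32'hDEAD_BEEF; });
  end
endmodule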

If you feel that you haven't done any of the things I mentioned here, you really need to think about learning the functional verification process and SystemVerilog, the industry-preferred, IEEE-standard hardware verification language.





Thursday, February 5, 2009

How to live with Legacy BFMs?

Every time I talk about class-based verification environments, most of my customers curiously ask questions about using HDL-based legacy BFMs in SystemVerilog-based testbenches. Many of my customers have even approached me to convert their legacy BFMs into SV-based transactors. From my experience, I would say you can't easily exclude the legacy VIPs/BFMs while architecting SV-based TBs.

An SV-based TB is completely based on object-oriented programming and uses *classes*. Though class-based testbenches are complex in themselves, the real challenge is building them using legacy BFMs. One needs to understand that Verilog modules cannot be instantiated directly inside a class-based verification environment, because modules are static constructs while classes are dynamic.

Usually a chip will have different kinds of standard interfaces that are driven by third-party VIPs and internally developed BFMs. The VIPs from external vendors are typically encrypted. The challenge here is that if they are HDL-based VIPs, you can't use them directly as transactors in your SV TB. You also can't rewrite them as transactors, because you only have access to the user interface. In this case, the only practical way is to develop an SV wrapper on top of the VIP and convert it into a transactor.

If your chip uses some internally developed BFMs, you can re-architect them as transactors. In some cases, writing an SV wrapper is tougher and more time-consuming than rewriting the BFM as a transactor from scratch. If you are sure that your BFM will be used by most of your other long-term projects, then you may want to consider re-architecting it as a transactor. A sketch of the wrapper approach follows.
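The sketch below shows one way (not taken from the VMM itself) to expose a legacy Verilog BFM to a class-based environment: a wrapper module instantiates the static BFM and publishes a class-based proxy that forwards calls into it. The module, task and package names (legacy_bfm, do_write, bfm_pkg) are assumptions for illustration.

// The legacy HDL BFM: a static module with a bus-functional task.
module legacy_bfm (input wire clk, output reg [7:0] addr, output reg [31:0] data);
  task do_write(input [7:0] a, input [31:0] d);
    @(posedge clk);
    addr <= a;
    data <= d;
  endtask
endmodule

package bfm_pkg;
  // Abstract API the class-based environment programs against.
  virtual class bfm_api;
    pure virtual task write(bit [7:0] addr, bit [31:0] data);
  endclass
  bfm_api bfm_handle;                  // filled in by the wrapper below
endpackage

// Wrapper module: instantiates the static BFM and publishes a class-based proxy.
module bfm_wrapper (input wire clk, output wire [7:0] addr, output wire [31:0] data);
  import bfm_pkg::*;

  legacy_bfm u_bfm (.clk(clk), .addr(addr), .data(data));

  class bfm_proxy extends bfm_api;
    task write(bit [7:0] addr, bit [31:0] data);
      u_bfm.do_write(addr, data);      // hierarchical call into the static BFM
    endtask
  endclass

  initial begin
    bfm_proxy proxy = new();
    bfm_pkg::bfm_handle = proxy;       // the SV environment now uses it like any transactor
  end
endmodule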

To understand how to convert module-based BFMs into SV-based transactors, please refer to:

Verification Methodology Manual
Chapter 4: Testbench Infrastructure
- Ad-Hoc Testbenches
- Legacy Bus-Functional Models

Monday, January 19, 2009

Accelerate your verification

Why does my regression testing consume so much time? How can I reduce the run time of my regressions? Verification engineers usually ask these questions of EDA vendors. They even push the EDA vendors to increase the speed of the simulator as much as possible. Yes, it is possible to tune the simulator engine and gain some performance, but that gain cannot reduce week-long regressions to hours. After all, your simulator is software executing everything sequentially on a processor.

You have increased the speed of the simulator to its maximum limit. You are keeping the LSF load-free and running the simulation. If your simulation is still dead slow, then you need to rethink your verification methodology and analyse your simulation process.

There are various other factors, listed below, that impact the performance of your simulator.

Design Abstraction
- Behavioral models do not work at the signal level. They run much faster than RTL and netlists.
- Well-proven RTL blocks/IPs can be replaced at the system level by their functional models.
- Verilog netlists simulate faster than VITAL.
- Memory modeling - huge memories can be modeled efficiently using dynamic data structures; memories modeled as static HDL arrays occupy host RAM regardless of how much of the memory is actually used (see the sketch after this list).
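Here is a small sketch of the memory-modeling point, using a SystemVerilog associative array so that only the locations actually written consume host RAM; the module and port names are illustrative.

// Sparse memory model: storage is allocated only for addresses that are written.
module sparse_mem #(parameter AW = 32, parameter DW = 32) (
  input  logic          clk,
  input  logic          we,
  input  logic [AW-1:0] addr,
  input  logic [DW-1:0] wdata,
  output logic [DW-1:0] rdata
);
  logic [DW-1:0] mem [bit [AW-1:0]];      // associative array keyed by address

  always @(posedge clk) begin
    if (we)
      mem[addr] = wdata;                  // allocate storage only on write
    rdata <= mem.exists(addr) ? mem[addr] : '0;   // unwritten locations read as 0
  end
endmodule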

Testing mode
- Are you running the simulation in performance mode or debug mode? Refer to my blog post "Slow and Fast simulators".
- Assertions are good for white-box verification, but they slow down the simulation.
Assertions at the subsystem/module level can be disabled for SoC verification, especially for regression testing.
Assertions that verify the interfaces, port connections and system protocols are sufficient to verify the system.
Based on the testcase failures, one can rerun the simulation in debug mode with the assertions of the buggy modules enabled (see the sketch after this list).
- Avoid using simulator TCL commands to generate stimuli.
TCL commands like 'force' need read and write access permissions, which again reduce performance.
Enable read/write access permissions selectively, based on need. Ex: PLI access to a particular design instance does not need read/write permission for the entire design.
- Avoid dumping log files and doing post-processing.
The testbench should be self-checking.
- Avoid recompiling the DUT for every testcase.
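The sketch below shows one way to switch assertions off for regression runs and back on for a debug rerun, using the standard $assertoff/$asserton system tasks. The hierarchy names and plusargs (tb_top.dut.u_core, +REGRESSION, +DEBUG_CORE) are hypothetical.

// Assertion control for regression (performance) vs. debug reruns.
module assertion_ctrl;
  initial begin
    if ($test$plusargs("REGRESSION")) begin
      // Performance mode: disable block-level assertions, keep only the
      // interface/system-level checkers running.
      $assertoff(0, tb_top.dut.u_core);
      $assertoff(0, tb_top.dut.u_dsp);
    end
    if ($test$plusargs("DEBUG_CORE")) begin
      // Debug rerun: re-enable the assertions of the suspect block only.
      $asserton(0, tb_top.dut.u_core);
    end
  end
endmodule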

Verification Methodologies
- Avoid using HDLs for implementing complex testbenches.
Ex: a testbench that needs to generate transactions like frames, packets, etc.
- Avoid ad-hoc and traditional methodologies.
Ex: C/C++-based protocol checkers/testcases, PLIs, creating random stimuli using C functions, etc.
- Use a standard HVL like SystemVerilog that works seamlessly with HDLs, without using PLIs. It provides DPI, OOP, assertions, CDV and more: almost everything that you need for your verification (a small DPI sketch follows this list).
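As a small illustration of the DPI point, the sketch below calls a C reference model directly from SystemVerilog without any PLI glue code. The C function name (golden_crc) is hypothetical; the C side would be compiled and linked with the simulator.

// C side, compiled and linked with the simulator:
//   int golden_crc(int data) { /* reference computation */ }
import "DPI-C" function int golden_crc(input int data);

module dpi_demo;
  initial begin
    int expected;
    expected = golden_crc(32'hA5A5_5A5A);   // plain function call, no PLI glue code
    $display("expected CRC = %0h", expected);
  end
endmodule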

Most of the time we need to make use of legacy testbenches, and I agree that you can't easily move away from the existing verification infrastructure. But if the project is a long-term one, I urge you to seriously consider re-architecting the testbench using the latest verification technologies. One needs to plan meticulously when introducing new methodologies. The best approach is to try out these technologies on existing IPs and introduce them step by step.