Wednesday, December 31, 2008

Happy New Year

Another Day, another Month, another Year, another Smile, another Tear, another Winter, A Summer too, But there will never be Another You!

As we get into the year 2009, I offer my most heartfelt and respectful wishes to you, your team at work and your family.

Thank you for your great support and faith in me.

Cheers
Siva

Thursday, December 18, 2008

Verification Sign-Off


Your project manager wants to know how much more time you require to complete the simulation. His management wants to know when he can sign off the verification. Marketing folks are very keen to know the status of the product. The common objective of all these stakeholders is to release the product on time and meet the TTM [time-to-market] target. So everybody needs some information to track the status of the product.

In the verification world, engineers usually begin their learning with the term *COVERAGE* and explore Coverage Driven Verification [CDV] more deeply as they grow into seasoned verification engineers. Coverage information is mainly used to track the functional verification process. There are different kinds of coverage information, such as functional coverage and code coverage.

Functional coverage information indicates how well the functional features of the design have been verified, while code coverage measures the quality of the stimulus. You need to define the coverage models and assertions manually to generate functional coverage, whereas code coverage is generated automatically by the simulator.

Instead of dumping all the coverage metric definitions on you, I would like to show how we make use of coverage information to sign off the verification.

Let us take a small but powerful example, a *Synchronous Counter*, and explore CDV. Let us assume that we are verifying a 32-bit counter. We need to make sure that the counter counts through all 2 to the power 32 [2^32 = 4,294,967,296] possible values. One would need to spend billions of clock cycles to verify this design exhaustively. Instead of running the counter through all possible values, why don't we load the counter with random values and verify its functionality?

Let us use a four-bit counter and explore how this concept really works.
---------------------------
3-2-1-0 --- Bit position
---------------------------
0000
0001
0010
0011
0100
......
0111
1000
......
1111
0000
------------------------
When the LSB [0th bit] is '1', the 1st bit toggles on the next active clock edge. Similarly, when both the 0th and 1st bits are '1', the 2nd bit toggles, and so on. If you look at this sequence carefully, you can see that the counter can be verified easily by making each bit toggle.

Now let us go back to the 32-bit counter. As the counter has billions of possible states, load the counter with random values and run. Every time you load the counter, run it for a clock cycle and check how the bits toggle. Random values are very effective at catching bugs quickly, especially when the design is very complex.
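Here is a minimal SystemVerilog sketch of this approach. The DUT port names [clk, load, d_in, count] and the module name 'counter' are my assumptions for illustration, not part of any real design:

---------------------------------------------------------------
// Minimal sketch: load a 32-bit counter with random values and
// check that it increments by one. Port names are assumptions.
module counter_rand_tb;
  logic        clk = 0;
  logic        load;
  logic [31:0] d_in;
  logic [31:0] count;

  counter dut (.clk(clk), .load(load), .d_in(d_in), .count(count));

  always #5 clk = ~clk;            // free-running clock

  initial begin
    load = 1'b0;
    repeat (1000) begin
      @(negedge clk);
      d_in = $urandom();           // random load value
      load = 1'b1;
      @(negedge clk);              // counter loads d_in at the posedge in between
      load = 1'b0;
      @(negedge clk);              // counter has incremented once
      if (count !== d_in + 1)
        $error("loaded %0h, expected %0h, got %0h", d_in, d_in + 1, count);
    end
    $finish;
  end
endmodule
---------------------------------------------------------------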

To track the functional features of the counter, generate functional coverage by creating a coverage model with different bins, as shown below:

------------------------------------------------------------------------------
BIN     VALUE(S)           FEEDBACK INFO
------------------------------------------------------------------------------
MIN     [0]                Whether the counter works properly in the zero state
MID1    [1 - 1000]         Whether the counter has gone through at least one of these values
MID2    [1001 - 10000]     Whether the counter has gone through at least one of these values
...     ...                [Create as many bins as required]
MAX     [4294967295]       Whether the counter has reached its maximum value
------------------------------------------------------------------------------
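A minimal SystemVerilog sketch of such a coverage model, declared inside the testbench where clk and count are visible; the covergroup name is illustrative and the bins simply mirror the table above:

---------------------------------------------------------------
// Coverage model sketch for the 32-bit counter.
covergroup counter_cg @(posedge clk);
  count_cp : coverpoint count {
    bins MIN  = {0};
    bins MID1 = {[1:1000]};
    bins MID2 = {[1001:10000]};
    // ... create as many bins as required ...
    bins MAX  = {32'hFFFF_FFFF};   // 4294967295, the maximum value
  }
endgroup

counter_cg cg = new();             // samples on every posedge of clk
---------------------------------------------------------------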

These bins count when the random values generated by the simulator fall within the ranges of their definitions. If all of the bins are hit at least once, the functional coverage becomes 100%. But that does not mean you have verified the counter completely. You also need to check whether all 32 bits toggled during simulation. When you generate random stimulus, there may be a lot of repetition, so you need to analyze how well it is exercising the design.

When code coverage metrics are enabled, especially toggle coverage, the simulator checks that each bit toggles both from 0->1 and from 1->0. If all 32 bits toggle, the toggle coverage becomes 100%. This coverage clearly indicates the quality of the random stimulus.
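Toggle coverage itself is enabled through simulator options, but the idea behind it can be sketched manually inside the same testbench module. This is only a rough illustration of what the metric tracks, not a replacement for the simulator's report; the signal names rose, fell and prev_count are my own:

---------------------------------------------------------------
// Rough sketch of the toggle-coverage idea: record, per bit of
// 'count', whether a 0->1 and a 1->0 transition has been seen.
logic [31:0] prev_count = '0;
logic [31:0] rose = '0, fell = '0;   // sticky per-bit flags

always @(posedge clk) begin
  rose       <= rose | (~prev_count &  count);   // bits that went 0->1
  fell       <= fell | ( prev_count & ~count);   // bits that went 1->0
  prev_count <= count;
end

final begin
  $display("Toggle check: %0d of 32 bits saw both transitions",
           $countones(rose & fell));
end
---------------------------------------------------------------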

Functional coverage is mainly for tracking the functional features of the design, whereas code coverage is mainly for checking the effectiveness of the testcases. So you need to look at both kinds of coverage information to sign off the verification process. Whether you expect 100% coverage or something less depends on the design features, the complexity of the coverage models and metrics, and, more importantly, the time you can spend on verification.

Obviously your project manager will be happy when you report 100% coverage, but he will be even more excited when he releases the product on time, without re-spins.

Thursday, December 11, 2008

Testbench Methodology

Let us have a look at testbench [TB] methodology in this posting. We all know that reusable verification IPs [VIPs] are the key to reducing verification effort and time, especially for SoCs. One has to make sure that these VIPs are created according to a good TB methodology. Generally, EDA vendors provide these methodologies to enable their customers to create powerful TBs easily.


Why do you need a testbench methodology?

A TB methodology defines the architecture of the verification environment, suggests appropriate language constructs, defines coding guidelines, and adds convenience through debugging utilities and application packages on top of a base class library. So a TB methodology helps you quickly realize a powerful, highly reusable testbench and verify your design thoroughly.


As a non-EDA guy, I would say a TB methodology is mainly about making your TB highly reusable by providing a cleaner architecture and interfaces. It should focus on suggesting powerful HVL constructs and coding guidelines for creating the verification environment.


Verification engineers have complete freedom to choose the HVL and methodology for their TBs. One can also create his own base class library and define a methodology that suits his design and organizational needs. For example, if you are selling your IPs to customers who use different simulators, you need to make sure that your VIP/TB runs on all of them, especially when you choose an HVL like SV. In this case it is better to create a base class library that runs on all, or at least on the required, simulators.
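To give a feel for what such a home-grown base class library might look like, here is a purely illustrative SystemVerilog sketch. The class names, phases and logging utility are my own inventions, not taken from any existing methodology:

---------------------------------------------------------------
// Purely illustrative root of a home-grown base class library.
virtual class tb_component;
  string name;

  function new(string name);
    this.name = name;
  endfunction

  // Phases this example gives every component
  virtual function void build();  endfunction
  virtual task          run();    endtask
  virtual function void report(); endfunction

  // Shared logging utility
  function void log(string msg);
    $display("[%0t] %s: %s", $time, name, msg);
  endfunction
endclass

// A driver derived from the base component
class counter_driver extends tb_component;
  function new(string name);
    super.new(name);
  endfunction

  virtual task run();
    log("driving random load values...");
    // drive the DUT here
  endtask
endclass
---------------------------------------------------------------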



But creating a base class library and defining your own methodology is not an easy job. It should be planned very carefully, considering the business value and demand. It requires a lot of effort, your precious time, valuable resources and a huge investment. In addition to all these burdens, you need to provide support for your base class library. Only rarely do big organizations, especially product developers, create their own base classes and methodology to meet their long-term business demands.


If you are working in a start-up or a services organization and your product TTM is critical, you need to plan a close working partnership with a reliable EDA vendor that provides a mature TB methodology and good verification consultancy services that can guide you technically to achieve your verification goals.

Slow and fast simulators

I am not talking about the benchmarks you run on different simulators to classify them as slow or fast. I am talking about running a simulator in debug mode versus high-performance mode.

Play with your simulator switches and understand how you can run it at different speeds. It's like driving your car at different speeds using different gears: you change gear based on power and speed requirements. Similarly, you need to change the simulator options based on your verification requirements.

Why do you need debug and performance modes?
Generally, all simulators run in high-performance mode by default. In this mode the simulator does not have to instrument the code or log many details, so it can run at its highest speed. But in this mode you cannot debug your design through line stepping, breakpoint setting, delta-cycle analysis, waveform dumping, etc. Always run your regression tests in high-performance mode, then rerun any failing testcases in debug mode.


I have seen many of my customers run their regressions with the simulator in debug mode, which is normally about 2X slower than performance mode. Sometimes they enable waveform dumping too, which slows the simulator down even further. Usually verification engineers use scripts written by Perl or shell scripting experts, and only a few take a look at the simulator options used in the script before running the regressions.
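One simple, simulator-independent way to keep waveform dumping out of your regressions is to gate it behind a plusarg, so it is turned on only when you rerun a failing test in debug mode. A small sketch; the plusarg name and the hierarchy path are arbitrary choices of mine:

---------------------------------------------------------------
// Dump waves only when +dumpwaves is passed on the simulator
// command line, e.g. for a failing-test rerun in debug mode.
initial begin
  if ($test$plusargs("dumpwaves")) begin
    $dumpfile("waves.vcd");
    $dumpvars(0, counter_rand_tb);   // hierarchy name is an assumption
  end
end
---------------------------------------------------------------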

As a smart engineer, you should always scan your scripts before using them. Talk to the CAD team, your EDA contact and the script owner, and make sure that you understand the simulator options and the script very well. Even if you are very busy with your project's verification, you still have to spend some time analyzing the scripts.