Wednesday, December 31, 2008

Happy New Year

Another Day, another Month, another Year, another Smile, another Tear, another Winter, A Summer too, But there will never be Another You!

As we get into the year 2009, I offer my most heartfelt and respectful wishes to you, your team at work and your family.

Thank you for your great support and faith in me.

Cheers
Siva

Thursday, December 18, 2008

Verification Sign-Off


Your project manager wants to know how much more time you require to complete the simulation. His management wants to know when he can sign off the verification. The marketing folks are keen to know the status of the product. The common objective of all these stakeholders is to release the product on time and meet the TTM. So everybody needs some information to track the status of the product.

In the verification world, engineers usually begin their learning with the term *COVERAGE* and explore Coverage Driven Verification [CDV] more deeply as they grow into seasoned verification engineers. Coverage information is mainly used to track the functional verification process. There are different kinds of coverage information, such as functional coverage and code coverage.

Functional coverage information indicates how well the functional features of the design are verified, while code coverage measures the quality of the stimulus. One needs to define the coverage models and assertions manually to generate functional coverage, whereas code coverage is generated automatically by the simulator.

Instead of dumping the definitions of all the coverage metrics on you, I would like to show how we make use of the coverage information to sign off the verification.

Let us take a small but powerful example, a *Synchronous Counter*, and explore CDV. Let us assume that we are verifying a 32-bit counter. We need to make sure that the counter counts through all 2^32 [4,294,967,296] possible values, which would take billions of clock cycles. Instead of running the counter through all possible values, why don't we load the counter with random values and verify its functionality from there?

Let us use a four-bit counter and explore how this concept really works.
---------------------------
3-2-1-0 --- Bit positions
---------------------------
0000
0001
0010
0011
0100
......
0111
1000
......
1111
0000
------------------------
When the LSB [bit 0] is '1', bit 1 toggles from 0 to 1 on the active clock edge. Similarly, when bits 0 and 1 are both '1', bit 2 toggles from 0 to 1, and so on. If you look at this sequence carefully, you can see that the counter can be verified easily by making each bit toggle.

Now let us go back to the 32-bit counter. As the counter has billions of possible states, load it with random values and run. Every time you load the counter, run it for a clock cycle and check how the bits toggle. Random values are very effective at catching bugs quickly, especially when the design is very complex.
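To make this concrete, here is a minimal SystemVerilog sketch of such a check, written as a small checker module. The port names (clk, rst, load, load_val, count) and the synchronous-load behaviour are assumptions made for this illustration; adapt them to your counter's actual interface.

// Hedged sketch of a checker for the 32-bit counter. The port names
// (clk, rst, load, load_val, count) are assumptions for illustration only.
module counter_checker (
  input logic        clk,
  input logic        rst,
  input logic        load,      // synchronous load enable (assumed)
  input logic [31:0] load_val,  // random value supplied by the testbench (assumed)
  input logic [31:0] count      // counter output being checked
);
  // After a load, the counter must hold the loaded value on the next cycle.
  property p_load;
    @(posedge clk) disable iff (rst) load |=> (count == $past(load_val));
  endproperty

  // When not loading, the counter must increment by one every clock
  // (the 32-bit addition wraps naturally at the maximum value).
  property p_increment;
    @(posedge clk) disable iff (rst) !load |=> (count == $past(count) + 32'd1);
  endproperty

  a_load      : assert property (p_load);
  a_increment : assert property (p_increment);
endmodule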

To track the functional features of the counter, generate functional coverage by creating a coverage model with different bins, as shown below (a SystemVerilog sketch of this coverage model follows the table):

----------------------------------------------------------------------------------
BINS  | VALUE          | FEEDBACK INFO
----------------------------------------------------------------------------------
MIN   | [0]            | Whether the counter works properly in the zero state
MID1  | [1-1000]       | Whether the counter has gone through at least one of these values
MID2  | [1001-10000]   | Whether the counter has gone through at least one of these values
...   | ...            | [Create as many bins as required]
MAX   | [4294967295]   | Whether the counter has reached its maximum value
----------------------------------------------------------------------------------
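In SystemVerilog, such a coverage model might be captured roughly as shown below. The signal name count and the exact bin boundaries are assumptions taken from the table above; add as many mid-range bins as your verification plan needs.

// Illustrative coverage model for the 32-bit counter value.
// 'count' and the bin ranges are assumptions for this sketch.
covergroup counter_cg @(posedge clk);
  cp_count : coverpoint count {
    bins MIN  = {32'd0};                // counter observed in the zero state
    bins MID1 = {[1    : 1000]};        // at least one hit in this range
    bins MID2 = {[1001 : 10000]};       // at least one hit in this range
    // ... create as many bins as required ...
    bins MAX  = {32'hFFFF_FFFF};        // counter reached its maximum value
  }
endgroup

counter_cg cg = new();  // instantiate inside the testbench module or interface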

These bins count when the random values generated by the simulator fall within the ranges they define. If all of the bins have been hit at least once, the functional coverage becomes 100%. But that does not mean you have verified the counter completely. You also need to check whether all 32 bits toggled during simulation. When you generate random stimulus, there may be a lot of repetition, so you need to analyze how well it actually exercises the design.

When code coverage metrics are enabled, especially toggle coverage, the simulator checks that each bit toggles from 0->1 and from 1->0. If all 32 bits toggle, the toggle coverage becomes 100%. This coverage clearly indicates the quality of the random stimulus.

Functional coverage is mainly for tracking the functional features of the design, whereas code coverage is mainly for checking the effectiveness of the testcases. So one needs to look at both kinds of coverage information to sign off the verification process. Whether you expect 100% coverage or something less depends on the design features, the complexity of the coverage models and metrics and, more importantly, the time you can spend on verification.

Obviously your project manager will be happy when you report 100% coverage, but he will be even happier when he releases the product on time, without re-spins.

Thursday, December 11, 2008

Testbench Methodology

Let us have a look at testbench [TB] methodology in this posting. We all know that reusable verification IPs [VIPs] are the key to reducing verification effort and time, especially for SoCs. One has to make sure that these VIPs are created as per a good TB methodology. Generally EDA vendors provide these methodologies to enable their customers to create powerful TBs easily.


Why do you need a testbench methodology?

A TB methodology defines the architecture of the verification environment, suggests appropriate language constructs, defines coding guidelines and adds comfort through debugging utilities and application packages on top of a base class library. So the TB methodology helps you quickly build a powerful, highly reusable testbench and verify your design thoroughly.


As a non-EDA guy, I would say a TB methodology is mainly about making your TB highly reusable by providing a cleaner architecture and interface. It should focus on suggesting powerful HVL constructs and coding guidelines for creating the verification environment.


Verification engineers have complete freedom to choose the HVL and methodology for their TBs. One can also create one's own base class library and define a methodology that suits one's design and organizational needs. For example, if you are selling your IPs to different customers who use multiple simulators, you need to make sure that your VIP/TB runs on all the simulators, especially when you choose an HVL like SV. In this case it is better to create a base class library that runs on all, or at least on the required, simulators.
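As a rough illustration of what the starting point of such a home-grown library could look like (the class and method names below are my own assumptions, not any vendor's API), consider:

// Minimal home-grown base class sketch written in plain SystemVerilog so that
// it has a fair chance of compiling on any SV-capable simulator.
// Names (tb_component, build/run) are illustrative assumptions, not a standard.
virtual class tb_component;
  string name;

  function new(string name);
    this.name = name;
  endfunction

  // Derived drivers, monitors and scoreboards override these hooks.
  virtual function void build();
  endfunction

  virtual task run();
  endtask
endclass

class my_driver extends tb_component;
  function new(string name);
    super.new(name);
  endfunction

  virtual task run();
    $display("[%s] driving stimulus...", name);
  endtask
endclass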



But creating a base class library and defining your own methodology is not an easy job. It should be planned very carefully, considering the business value and demand. It requires a lot of effort, your precious time, valuable resources and a huge investment. In addition to all these burdens, you need to provide support for your base class library. Very rarely, some big organizations, especially product developers, create their own base class library and methodology to meet their long-term business demands.


If you are working in a start-up or a services organization and your product TTM is critical, plan for a close working partnership with a reliable EDA vendor that provides a mature TB methodology and good verification consultancy services that can guide you technically towards your verification goals.

Slow and fast simulators

I am not talking about the benchmarks you run on different simulators to classify them as slow or fast. I am talking about running the same simulator in debug and high-performance modes.

Play with your simulator switches and understand how you can run it at different speeds. It's like driving your car at different speeds using different gears: you change gear based on power and speed requirements. Similarly, you change the simulator options based on verification requirements.

Why do you need these debug and performance modes?
Generally all simulators run in high-performance mode by default. In this mode the simulator does not have to instrument the code or log many details, so it can run at its highest speed. But in this mode you cannot debug your design through line stepping, breakpoint setting, delta cycle analysis, waveform dumping etc. Always run your regression tests in high-performance mode, and then rerun your failing testcases in debug mode.


I have seen many of my customers run the simulator in debug mode, which is normally 2X slower than performance mode, for their regressions. Sometimes they enable waveform dumping too, which slows the simulator down even further. Usually verification engineers use scripts written by some PERL or shell script experts, and only a few of them look at the simulator options used in the script before running the regressions.

As a smart engineer, you should always scan your script before using it. Talk to the CAD team, the EDA vendor and the script owner, and make sure you understand the simulator options and the script very well. Even if you are busy with your project verification, you still have to spend some time analyzing the scripts.

Sunday, November 30, 2008

Hardware and Software configurations

Before measuring the efficiency of a simulator, one needs to check whether he has chosen the right hardware and software configuration.


The general guidelines for choosing a HW & SW configuration that can increase simulator performance are:
[1] The faster the processor, the better the performance. Multi-core processors can give more.
[2] L2 cache matters a lot for run-time
[3] Linux is better
[4] Opteron processors have a built-in memory controller, which helps simulators
[5] High-speed disk drives, like 10000 rpm drives, are very useful
[6] Fast DDR memory is critical
[7] Make sure you have enough RAM so that the process does not swap - swapping kills all performance
[8] A large cache helps

Let us discuss the various other factors that affect simulator speed in my next article.

SIMULATORS - Are they really fast enough?

Week-long regressions are the main concern of CAD and verification engineers when they have to deliver the product on time and meet TTM. The CAD team will always look for high-speed simulators and demand that EDA vendors tune their software engines to meet run-time requirements.


Looking at the technology changes in processors, memories and operating systems, I would say EDA vendors should really come up with new, innovative methods and technologies to make their simulators powerful enough to exploit the features of the latest hardware and software. For example, if the simulator is not capable of utilizing all the cores of a processor to execute parallel processes and reduce run time, then there is no benefit in moving from a single-core processor to a dual-core one. If the technology of the simulator does not change, it may well treat your big servers with high-end processors no better than your old desktop PC.


At the same time, one should also understand that updating the hardware and software of the simulation farm is necessary to get more out of the simulators. The CAD team has to work closely with the EDA vendors and understand their technology roadmap. They have to guide the project teams to use the right versions of the EDA tools and to understand the flows and methodologies. This will really help the design and verification engineers use the EDA tools to the fullest extent.


Do you know how much you are paying for the EDA tools? Are you utilizing the EDA tools efficiently?


I am really surprised by the fact that many semiconductor companies do not even have proper CAD teams. Especially in India, we think that a CAD team is needed just for managing licensing issues. In many organizations, the IT guys do the license management. But an IT team cannot replace a CAD team; CAD engineers can do much more, such as evaluating EDA tools and methodologies, integrating various point tools and creating the design flow. They can actually guide their management team in choosing the preferred EDA vendor.


In my next article, I am going to explain how you can tune your simulator engine and run it at its maximum speed.

Tuesday, November 25, 2008

Reusable Verification IPs [VIP]

SoC designs are very complex and are generally composed of various pre-verified IPs. Most of the time the IP testbenches become useless at the full-chip level; these IP TBs work only in the stand-alone IP verification environment.

Why can't verification engineers build the SoC TB from the IP TBs, especially when the design engineers can easily realize the SoC from the IPs?

Consider a mobile chip. It has a processor core IP, an IP that implements the wireless communication, and IPs for audio, video and entertainment applications. All these IPs have already been verified thoroughly using IP testbenches. One can easily plug all these IP TBs [VIPs] together and create the TB for the complete chip.

At the chip level, a top-level environment is usually created. It includes all these VIPs and drives them as required. During simulation some of the VIPs are active and some are reactive. The top-level env controls the active VIPs and triggers them in a particular order: which one has to generate stimuli first, which one next, and so on. The monitors in the reactive VIPs still monitor the activity at the IP level.

I hope you can now visualize how scenarios can be generated for mobile chip verification using this technique. A typical scenario could be “Receiving calls while listening to music”.
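A hedged SystemVerilog sketch of such a top-level environment is shown below. The VIP classes (audio_vip, modem_vip), their start() tasks and the delay value are hypothetical stand-ins for real IP-level VIPs, shown only to illustrate the idea of reusing and sequencing them from a top-level env.

// Hypothetical VIP stubs standing in for fully featured IP-level VIPs.
class audio_vip;
  task start(); $display("audio VIP: streaming music"); endtask
endclass

class modem_vip;
  task start(); $display("modem VIP: incoming call received"); endtask
endclass

// Top-level environment that reuses the IP-level VIPs at the chip level.
class soc_env;
  audio_vip audio = new();
  modem_vip modem = new();

  // Scenario: "Receiving calls while listening to music"
  task run_scenario();
    fork
      audio.start();          // music streaming starts first
      begin
        #100ns;               // some time later, while audio is still active...
        modem.start();        // ...an incoming call is triggered
      end
    join
  endtask
endclass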

Thursday, November 20, 2008

SystemVerilog [SV]

What is SystemVerilog?

Let us first understand what SV is. SV is not a brand new hardware verification language; it is built on top of Verilog-2001. All the Verilog language constructs work seamlessly with SV and vice versa. In layman's terms, one can say SystemVerilog is the latest version of the Verilog HDL.
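As a toy illustration of my own (not from any standard tutorial), the fragment below keeps a plain Verilog-2001 module unchanged and adds SV-only verification constructs next to it:

// Plain Verilog-2001 module: still legal, unchanged, under SystemVerilog.
module dff (input wire clk, input wire [7:0] din, output reg [7:0] dout);
  always @(posedge clk) dout <= din;
endmodule

// SystemVerilog-only verification constructs layered on top.
class packet;
  rand bit [7:0] data;                            // constrained-random stimulus
  constraint c_range { data inside {[0:100]}; }   // not expressible in Verilog-2001
endclass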

Why do we need this language?

Basically, HDLs are mainly for capturing the RTL description of the design; they are not meant for verification. Some engineers wrongly assume that Verilog is good for verification and VHDL is good for RTL. I have even seen some companies use VHDL only for the TB. Actually, both HDLs lack many constructs that you need for verifying complex designs.

Industries really need smart and powerful verification techniques to ship high-quality chips and meet the TTM. Most designs are of the System-on-Chip [SoC] kind and are very complex. Think of your mobile chip: it has to transmit and receive calls, play audio and video, and support email, games, the internet and so on. All these features have to be verified thoroughly at the IP level but not necessarily at the full-chip level.

At the SoC level, we do not want to spend more time building the TB from scratch. Here reusability is the key. How can we test the complete SoC using the existing TBs of the IPs? How are we going to track the verification process? The project manager should be able to update his management with details like how much is done, how much more time is required, and whether more resources need to be allocated.

I will talk about some prominent verification techniques in my next blog. I would like to take this mobile chip as an example and explore how these verification techniques/methodologies/technologies really help accelerate the verification process and achieve high-quality verification.

Wednesday, November 19, 2008

Change in the Verification World

In the VLSI industry, everybody talks about SystemVerilog [SV]. We also find a lot of job opportunities for verification engineers who have working knowledge of SV. Students look out for VLSI design courses that focus more on verification and SV, and they strongly believe that SV knowledge is essential to get into the industry.

Why so much noise about SystemVerilog? What is happening in the verification world?

I still remember that a few years back everybody was talking about the hardware verification language 'e' and Specman. Companies used to search for the strings "e" and "Specman" in resumes. Cadence also acquired Verisity to increase its market share in verification. But now the VLSI industry is moving towards SV and migrating its legacy testbenches from proprietary HVLs and HDLs to SV.

This change in the verification community clearly indicates that you always need to update yourself on the latest technology to maintain your market value, whether you are a student or an experienced verification engineer.