CAD and VLSI
Is this what CAD and VLSI have become?
Thursday, December 1, 2011
This is big news in CAD circles.
One of the top 4 EDA companies gets gobbled up by another. Foes that fought desperately over Lucas VG's patents on super cells (an innovator turned villain!) are now going to sleep in the same bed and kiss and make up (and probably discuss super/hyper cells!).
Welcome to the dirty world of EDA.
Monday, October 24, 2011
Leakage is a pain..
I am not aware of any existing tool that gives an accurate estimate of active-state leakage.
We can evaluate leakage at the typical and worst-case leakage corners (which give us the lower and upper bounds). The problem grows well beyond the lower bound the moment the chip gets hotter than 25 °C.
It is interesting to note that fabs have migrated to high-k dielectrics in sub-40nm processes. This can have an impact on the threshold voltage, which means an increased VDD. Although slight, the increase in VDD cannot be taken lightly, as dynamic power scales as the square of VDD. The increased dynamic power contributes directly to a rise in on-chip temperature.
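To put a number on that square law: to first order P_dyn = a·C·VDD²·f, so a mere 5% increase in VDD multiplies dynamic power by 1.05² ≈ 1.10, i.e. roughly 10% more power before the switching activity changes at all.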
This in turn increases the amount by which the chip leaks.
There is an acute need right now for computing the thermal profile of the chip. For this, an accurate estimate of dynamic power is essential, which is only possible through VCD/SAIF-based simulations. This information then needs to feed back into a leakage engine that does a region-based analysis of on-chip leakage based on the thermal profile.
It is vital to close the loop between dynamic power computation, the thermal profile, and leakage, because they are interdependent. As the chip leaks more, its temperature is bound to rise, which causes further leakage. Hence the problem cannot be solved in one iteration; it can only be tackled over a few passes, as the sketch below illustrates. It is also quite difficult to get hold of any characterization data from the fab, which makes the problem even harder to solve. The worst-case leakage picture is a typical doomsday scenario and doesn't provide much insight into the problem; what is really required is a more realistic picture.
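As a toy illustration of why a single pass is not enough, here is a minimal sketch of that leakage/temperature fixed point in Tcl. Every coefficient below (the leakage at 25 °C, its exponential temperature sensitivity, the effective thermal resistance, even the dynamic power number) is made up, standing in for the fab characterization data and VCD/SAIF-based power numbers a real flow would use:

    # A minimal sketch, not a tool flow: all coefficients are assumptions.
    set p_dyn      2.0   ;# W, dynamic power from a VCD/SAIF-based run (assumed)
    set t_ambient 25.0   ;# degrees C
    set r_theta   10.0   ;# C/W, effective thermal resistance (assumed)
    set leak_25    0.5   ;# W, leakage at 25 C, the typical-corner figure (assumed)
    set k          0.03  ;# 1/C, exponential temperature sensitivity (assumed)

    set temp $t_ambient
    for {set i 1} {$i <= 8} {incr i} {
        # leakage grows roughly exponentially with temperature...
        set p_leak [expr {$leak_25 * exp($k * ($temp - 25.0))}]
        # ...and the total power pushes the die temperature back up
        set temp   [expr {$t_ambient + $r_theta * ($p_dyn + $p_leak)}]
        puts [format "iteration %d: T = %.1f C, leakage = %.2f W" $i $temp $p_leak]
    }

With these made-up numbers the loop settles near 59 °C after a handful of iterations, with leakage close to three times the 25 °C figure; the typical corner tells you almost nothing about the chip you will actually ship.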
Friday, October 21, 2011
Logic synthesis and the standard cell library...
"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, we had everything before us, we had nothing before us, we were all going direct to heaven"
—The opening paragraph of A Tale of Two Cities
These feelings arise when one takes a closer look at standard cell libraries and logic synthesis tools. While standard cell library designers believe that the quality of logic synthesis can only improve as more and more cells are designed into the library, logic synthesis throws up quite a few surprises by using those cells less and less effectively.
Why do synthesis users get feedback that they need to assign cells to "set_dont_use/force hide"? And why so many?
The explanations offered go well beyond my logical understanding:
1. Open up the balanced cells only for the clock tree.
2. Keep the delay buffers locked in a safe, which is opened up just before hold fixing.
3. Power management cells get used to buffer long wires (I have seen level shifters in a few scenarios). So hide them, and open them up only when you need them. Pre-instantiate them and don't let synthesis map to them.
4. Numerous other similar tweaks to keep more and more cells out of use (a sketch of what this bookkeeping looks like in a synthesis script follows below)...
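For the curious, here is roughly what that bookkeeping looks like in a Design-Compiler-flavoured Tcl script. The library and cell name patterns are hypothetical, and remove_attribute is one common way of undoing a dont_use, though the exact incantation varies with tool and version:

    # Hypothetical library/cell name patterns; the bookkeeping is the point.
    set_dont_use [get_lib_cells mylib/CKBUF*]  ;# balanced cells: reserved for the clock tree
    set_dont_use [get_lib_cells mylib/DLY*]    ;# delay buffers: locked in the safe for now
    set_dont_use [get_lib_cells mylib/LS*]     ;# level shifters: pre-instantiated by hand, never mapped

    # ...much later in the flow, just before hold fixing, the safe is opened:
    remove_attribute [get_lib_cells mylib/DLY*] dont_use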
Are all these set_dont_use/hide mechanisms a consequence of a delay model inside synthesis into whose grand scheme these unfortunate cells simply don't fit?
Maybe the library designers need to talk more to the synthesis guys....
Saturday, October 15, 2011
Dennis Ritchie
Computing as we know it today would be nonexistent without C and Unix, and yet so many people are unaware of this genius, who was found dead at his home at the age of 70.
Wednesday, October 12, 2011
Extreme Design Automation gets acquired by Synopsys
This is bad news for the design community. We can only expect the price of PT and PT-SI to shoot through the roof now, for lack of a good alternative (Tekton?).
GoldTime was a decent timer, correlating quite well with PrimeTime SI and coming within 2% of SPICE. I begin to wonder if Synopsys will simply kill the tool that has been a thorn in its flesh for the past few years!