Gate level SDF timing simulation does not simulate setup or hold scenarios seen by STA?


This is related to the timing closure I talked about in another post. I have also been doing gate-level timing simulation with the netlist and SDF files generated by the backend team. As we know, SDF uses a triplet in (min:typ:max) format to specify the minimum, typical, and maximum delay for a timing arc or timing check. For a single operating-condition mode, min and max can be the same. Our backend flow is OCV-based, so the SDF file has distinct min and max values.
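As a rough illustration (the cell type, instance name, and delay values below are invented, not from our design), an OCV-mode SDF entry with distinct min and max values in each triplet might look like:

```sdf
(CELL
  (CELLTYPE "DFFX1")
  (INSTANCE u_core.u_reg_q)
  (DELAY
    (ABSOLUTE
      // (min:typ:max) triplets for the CK->Q arc, rise then fall
      (IOPATH CK Q (0.112:0.153:0.201) (0.118:0.160:0.214))
    )
  )
  (TIMINGCHECK
    // setup/hold requirements carry triplets too
    (SETUP D (posedge CK) (0.045:0.060:0.082))
    (HOLD  D (posedge CK) (0.011:0.015:0.022))
  )
)
```

With -sdfmin the simulator annotates the first number of every triplet, and with -sdfmax the last, for all arcs in the design at once.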


But here is the thing: when I run SDF back-annotated timing simulation, I have to use -sdfmin or -sdfmax to select whether min or max delays are used in the simulation. This is strange, because an STA tool (in OCV mode, as I now know 🙂) uses the max delay on the launching path and the min delay on the capturing path for a setup check. If the simulation uses only max or only min delays everywhere, we are not really checking setup or hold time the way the STA tool sees them, are we?
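To make the mismatch concrete, here is a minimal numeric sketch; all delay values are invented for illustration. It shows an OCV setup check mixing max launch delay with min capture delay, which neither -sdfmin nor -sdfmax alone can reproduce:

```python
# Invented delay values (ns), given as (min, max) pairs like an OCV SDF
# triplet with the typ value dropped.
launch_clk  = (0.9, 1.2)   # clock tree delay to the launching FF
data_path   = (2.0, 2.7)   # launching FF clk->q plus combinational data path
capture_clk = (0.9, 1.2)   # clock tree delay to the capturing FF
period, setup = 3.0, 0.1   # clock period and FF setup requirement (ns)

# STA OCV setup check: late (max) launch clock and data path vs. an
# early (min) capture clock.
sta_slack = (period + capture_clk[0] - setup) - (launch_clk[1] + data_path[1])

# A -sdfmax run annotates max everywhere, so the capture clock is also
# late: the clock-skew pessimism disappears and the check is optimistic.
sim_slack = (period + capture_clk[1] - setup) - (launch_clk[1] + data_path[1])

print(round(sta_slack, 2))  # negative: STA flags a setup violation
print(round(sim_slack, 2))  # positive: the -sdfmax simulation would pass
```

With these numbers, STA reports a violated setup check while a pure max-delay simulation sails through the same path.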


I googled it and found this thread.


The questioner, ebaum, had the same issue. Quote:
“For a setup time analysis, TRCE calculates the maximum data path delay and the minimum clock path delay to get the worst case slack, while for a hold time analysis it uses the minimum data path delay and the maximum clock path delay.

I’m wondering, whether it’s possible to perform a timing simulation with exactly the same timings as used by TRCE?

I’ve tried the ISIM -sdfmin/-sdfmax options, but it seems, that -sdfmin uses always the MIN values (for both data and clock paths), while -sdfmax uses always the MAX values. And the sdf-file generated by netgen (with -pcf) specifies all (MIN:TYP:MAX) values with MIN<=TYP<=MAX, both for data and clock paths, i. e. netgen seems to treat these paths equally.

Is there any possibility to use e.g. MAX values for data paths and MIN values for clock paths in timing simulations?

I’m using the ISE 11.1 (with service pack 11.2) command line tools (xst, ngdbuild, map, par, trce, netgen and fuse).”


Here is the reply from Xilinx support, quote:
“Yes, you are correct in your analysis. The only difference is the mechanism in which we do this. When you look at the MIN numbers of the SDF, these are not absolute MIN numbers; rather, they are adjusted relative MIN numbers. This is why when you look at the report from Trce it matches the number exactly. When trce is doing its analysis, it is able to take the relative min numbers and the max numbers, so you are essentially seeing min clock and max data being used when it does its setup calculation, and then max clock and min data for the hold calculation. Simulators cannot cross the triples that they leverage: they have to use max-max or min-min, they cannot have max-min. So what we have essentially done is to create two timing windows. We create a setup window using the two max numbers and a hold window using the two min numbers. Netgen also calls trce in the background in order to ensure that these numbers include any adjustments used in trce’s calculations. That is where these numbers come from; it is not just directly the numbers in the speed files. The idea behind this is that if you pass the two windows in the setup and hold simulation, then you should technically be able to pass it in timing simulation and vice versa.”
To be honest, I don’t quite follow the explanation above. If someone understands it, please reply/comment on this post so I can learn from you. It seems to me the reply first confirms that either max delays or min delays are used, and that they are not mixed as in the STA tool (TRCE, in Xilinx’s case). But somehow some adjustment is made to the setup and hold windows so that timing simulation still covers the cases seen by the STA tool.


I am not sure how they did it; googling turned up nothing. And even if it is true for the Xilinx simulator, I am not sure it holds for other simulators such as ModelSim, VCS, or NC-Sim; I am inclined to think it does NOT. But at least it is now clear why we can’t mix max and min delays in one timing simulation run: for a given FF, its clock path is the capturing path for that FF’s own timing check, but the launching path for the next FF’s check. So a single run cannot use max for all launching paths and min for all capturing paths at the same time.
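A tiny sketch of that conflict, with invented numbers: the middle flop in a chain needs its clock delay to be the min value when it captures and the max value when it launches, i.e. two different values for the same physical arc, while a simulator annotates exactly one value per arc for the whole run.

```python
# Clock-tree insertion delay to FF2, as a (min, max) SDF-style pair.
# Values are invented for illustration (ns).
ff2_clk = {"min": 0.8, "max": 1.1}

# FF1 -> FF2 setup check: FF2 is the CAPTURING flop, so STA takes its
# MIN clock delay (an early capture edge is the worst case).
capture_delay_check_1 = ff2_clk["min"]

# FF2 -> FF3 setup check: FF2 is the LAUNCHING flop, so STA takes its
# MAX clock delay (a late launch edge is the worst case).
launch_delay_check_2 = ff2_clk["max"]

# An event-driven simulator annotates ONE delay per arc per run, so it
# cannot give FF2's clock both values at once -- hence no max/min mixing.
assert capture_delay_check_1 != launch_delay_check_2
print(capture_delay_check_1, launch_delay_check_2)  # 0.8 1.1
```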

So timing simulation does not simulate the real worst-case setup/hold timing. And even if it did, what would be the point? The design already passes timing closure, and the generated SDF should be timing-clean, so it should not fail in timing simulation. So why run timing simulation at all? Frankly, we did not get much out of it, given the tremendous pain of bringing up GLS. But we did catch a few things. First, some timing constraints were missing and some were wrong; the STA tool does not flag this, since it simply follows the constraints, but timing simulation exposed the issue. Second, our design has an asynchronous interface that is not constrained; timing simulation caught a problem there that led us to modify the async interface.


  1. mutasem 4 years ago

  2. SD-RTL-DGN 5 years ago

    As for whether GLS is necessary, here is a blog post from Gordon Allan.

    Short answer: yes. Allan lists reasons such as bad STA constraints, changes introduced by the backend process, ECOs, clock-domain crossings, and DFT vector simulation.

    • Author
      Ravenhill 5 years ago

      Thanks for bringing up Allan’s blog. Agreed. I have actually hit many of those items myself, like bad/missing timing constraints and clock-domain crossings (i.e., async logic), as mentioned above. Backend DFT logic insertion is another big reason for GLS, since that logic is not in the RTL.

