Timing issues are among the hardest bugs to root-cause in chip design. You may have had this kind of experience: an async interface design works fine in RTL and gate-level simulations, but on silicon it fails from time to time, and you suspect the async interface is still the problem. Is there a way we can verify this interface better in simulation?
First we need to look at the potential verification limitation. If we look at the testbench closely enough, we may see that it starts all clocks at time 0, which means these clocks are effectively synchronized when their rates are equal or integer multiples of each other. Another case: if one clock is generated from another clock in the design, the two clocks may differ by only a delta delay in RTL simulation, even though their phases can be quite different on silicon if we define them as asynchronous. So RTL simulation is not checking all possible cases, i.e. all relative clock phases, of an async interface design.
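To make the limitation concrete, here is a minimal sketch of a typical testbench clock generator (module and signal names are made up for illustration). Because both clocks start at time 0 and one period is an integer multiple of the other, every rising edge of the slow clock lands exactly on a rising edge of the fast clock, so the "async" crossing is only ever exercised at a single relative phase:

```systemverilog
`timescale 1ns/1ps

// Hypothetical testbench: both clocks start at time 0.
// 10 ns and 20 ns periods -> edges always align, so only one
// relative clock phase of the async interface is ever simulated.
module tb;
  logic clk_fast = 0;
  logic clk_slow = 0;

  always #5  clk_fast = ~clk_fast;  // 10 ns period
  always #10 clk_slow = ~clk_slow;  // 20 ns period
endmodule
```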
How about gate-level simulation with timing annotated at different corners? This is better, but it doesn't guarantee that the chip corners happen to be the worst cases for this async interface design. In addition, SDF-annotated GLS comes late in the chip design process; if an issue is flagged here, a painful ECO is normally needed.
It is possible to verify this async interface better at the RTL level. The key is that we want to verify the interface with an arbitrary clock phase difference. Here are two ways:
- Kick off many simulations in parallel, each with one particular phase difference. Together, these tests sweep the phase difference between the two clocks.
- Make one clock slightly slower or faster than its target rate. If we run one simulation long enough, all possible clock phase differences can, in theory, show up in that single simulation.
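Both techniques can be sketched in a testbench like the one below (periods, offsets, and names are illustrative assumptions, not from the original text). Technique 1 applies a per-seed random phase offset before starting one clock, so parallel runs with different seeds sweep the phase; technique 2 detunes the other clock's period slightly so the phase drifts continuously over one long run:

```systemverilog
`timescale 1ns/1ps

module tb;
  logic clk_a = 0;
  logic clk_b = 0;

  // Technique 2: clk_a is detuned to ~10.01 ns instead of 10 ns.
  // Its phase against clk_b drifts by ~10 ps per cycle, so a long
  // enough run visits, in theory, every relative phase.
  always #5.005 clk_a = ~clk_a;

  // Technique 1: clk_b starts after a per-simulation random phase
  // offset. Launch many seeds in parallel to sweep the offset.
  initial begin
    real phase_ns;
    phase_ns = $urandom_range(0, 9999) / 1000.0;  // 0.000 .. 9.999 ns
    #(phase_ns);
    forever #5 clk_b = ~clk_b;  // 10 ns period
  end
endmodule
```

Note that the two techniques need not be combined; either the random offset or the detuned period alone is enough to break the artificial time-0 alignment.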