After 5 minutes of half-baked thought
I haven't heard of anyone doing this. The problem is too complex to
think about in generalities; you need specific examples of IP which
might benefit from this, and I suspect that there are few.
But hold on: don't a lot of people think in terms of 'control and
datapath'? Surely the datapath width in a lot of such IP could be
parameterized?
The regular designs aren't the problem. But how many designs are
'regular' anyway? None that I deal with; they're all fiendishly
complex.
But think: how many times, when dealing with such complexity, do you
just look at the spec and, if it says "handle N things" or "of size
M", go right ahead and code something that handles the hard-coded
sizes of N and M?
Why not code for a generatable range of values for N and M, include
smaller values of those generics, and then more thoroughly test the
generatable IP?
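To make that concrete, here is a sketch in Python (standing in for the HDL; the ripple-carry model and width range are my own illustration, not anything from a real IP): a model parameterized over a width generic n, verified exhaustively for the small widths where exhaustion is cheap.

```python
def ripple_add(a, b, n):
    """n-bit ripple-carry adder modelled bit by bit.

    Returns (sum, carry_out), the way the hardware would present it.
    """
    carry, s = 0, 0
    for i in range(n):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        s |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))
    return s, carry

def check_exhaustive(n):
    """Exhaustively verify the n-bit model against a trusted reference."""
    for a in range(1 << n):
        for b in range(1 << n):
            s, c = ripple_add(a, b, n)
            assert (c << n) | s == a + b, (n, a, b)

# Exhaustive verification is affordable at small widths; the hope is
# that confidence transfers (with care) to the larger generic values.
for n in range(1, 8):
    check_exhaustive(n)
```

The same parameter that makes the IP generatable is what makes the small, fully-verifiable instances possible at all.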
How do you prove this? Formally, or do you guess? I suspect that the
subset of IP for which this is true is small.
By corner case I mean something like hard-coding a switch in the type
of multiplier when n gets 'large'. I would think that such changes of
architecture in the source would just be a matter of knowing about
them when you are writing it.
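That kind of corner case might look like the following sketch (Python again, and the threshold, the shift-and-add "small" architecture, and the split "large" architecture are all hypothetical choices of mine): the crossover point itself is exactly where the two architectures must be shown to agree.

```python
import random

THRESHOLD = 16  # hypothetical width at which the architecture switches

def mult_small(a, b, n):
    """Shift-and-add multiplier: the small-n architecture."""
    acc = 0
    for i in range(n):
        if (b >> i) & 1:
            acc += a << i
    return acc & ((1 << (2 * n)) - 1)

def mult_large(a, b, n):
    """Split-operand multiplier: a stand-in for the large-n architecture."""
    half = n // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    return ((a_hi * b_hi << 2 * half)
            + ((a_hi * b_lo + a_lo * b_hi) << half)
            + a_lo * b_lo)

def mult(a, b, n):
    """The generatable IP: architecture chosen by the generic n."""
    return mult_small(a, b, n) if n < THRESHOLD else mult_large(a, b, n)

# The verification corner case is the crossover itself: check that both
# architectures agree with each other (and the reference) around it.
for n in (THRESHOLD - 1, THRESHOLD, THRESHOLD + 1):
    for _ in range(1000):
        a, b = random.getrandbits(n), random.getrandbits(n)
        assert mult_small(a, b, n) == mult_large(a, b, n) == a * b
```

Because the switch is visible in the source, the testbench can be written to straddle it deliberately rather than hoping a sweep happens to land on it.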
I don't expect existing IP to be written this way though. It is an idea
for new IP or for modification of existing IP (maybe).
This looks like a verification version of a simple proof by induction.
But, for induction, you need 2 steps:
1) 'prove' for case n, which you've discussed
2) prove that a proof for case 'n' also holds for case 'n+1'
For many mathematical problems, step 2 is easy enough. But how do you
do this for a complex real-world circuit? I can see how you might do
this for an adder, for example. But even a multiplier sounds
difficult, and anything more complex could be next to impossible.
The pragmatic engineer's version might be:
1) Verify the design for cases n, n+1, n+2, ... and record the cost of each run.
2) Plot a graph.
3) If it is a straight line and you can afford case N where N > n
... then go ahead and build it
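Those three steps could be sketched like this (Python; the random-test "verification run" and the sizes swept are placeholders of mine for whatever your real regression measures, and wall-clock timing is only an illustration of "cost"):

```python
import random
import time

def ripple_add(a, b, n):
    """n-bit ripple-carry adder model; returns the full (n+1)-bit result."""
    carry, s = 0, 0
    for i in range(n):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))
    return (carry << n) | s

def verify_cost(n, trials=5000):
    """Step 1: run the verification for one size n, return its cost."""
    t0 = time.perf_counter()
    for _ in range(trials):
        a, b = random.getrandbits(n), random.getrandbits(n)
        assert ripple_add(a, b, n) == a + b
    return time.perf_counter() - t0

# Step 2: "plot the graph" - here just tabulate cost against size.
sizes = [8, 16, 24, 32]
costs = [verify_cost(n) for n in sizes]
for n, c in zip(sizes, costs):
    print(f"n={n:3d}  cost={c:.3f}s")

# Step 3: if the incremental slopes roughly agree, the line is straight
# enough to extrapolate: cost(N) ~ costs[0] + slope * (N - sizes[0]).
slopes = [(costs[i + 1] - costs[i]) / (sizes[i + 1] - sizes[i])
          for i in range(len(sizes) - 1)]
print("slopes per extra bit:", slopes)
```

If the extrapolated cost for the big case N is affordable, go ahead and build it; if the graph bends, you have learned something about the design before committing.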
- Paddy.