Vinay Deshpande said:
You can try the example I quoted above. Create two
implementations of an N-bit full adder, one with a simple statement
like C <= A + B; and the other using a hierarchical design built up
from a single-bit full adder. You can see the difference in logic
elements consumed by the two implementations.
That's not what I was asking about. The comment from Andy that I
questioned was directed at logic that spans multiple entities. What he
said was that the entity presents some sort of boundary that makes it
difficult to optimize logic across. I've yet to see any evidence of such
a barrier, and I asked him for an example to back up the claim (one would
also need to know which synthesis tool has this problem, since it is the
tool, not the language, that has the problem). What I've seen is that the
logic gets flattened into a netlist right at the outset; at that point the
entity 'boundaries' no longer exist and can have no such effect as
claimed. It could be, though, that older tools, or poor tools, for
whatever reason did (or do) have this limitation (perhaps they optimized
within each entity but not globally across a flattened netlist). Hence
the question.
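To make the flattening point concrete, here is a minimal VHDL sketch (the entity and signal names are my own invention, not from any prior post): logic split across two entities that a tool which flattens the hierarchy should have no trouble optimizing away entirely.

```vhdl
-- Hypothetical example: an inverter in its own entity...
library ieee;
use ieee.std_logic_1164.all;

entity inverter is
  port (a : in  std_logic;
        y : out std_logic);
end entity;

architecture rtl of inverter is
begin
  y <= not a;
end architecture;

-- ...instantiated twice, back to back, in a top level.
library ieee;
use ieee.std_logic_1164.all;

entity top is
  port (d : in  std_logic;
        q : out std_logic);
end entity;

architecture rtl of top is
  signal n : std_logic;
begin
  inv1 : entity work.inverter port map (a => d, y => n);
  inv2 : entity work.inverter port map (a => n, y => q);
  -- After flattening, q <= not (not d) reduces to q <= d;
  -- the 'boundary' between the two entities is simply gone.
end architecture;
```

A tool that truly could not optimize across the entity boundary would build two inverters here; a tool that flattens first builds nothing but a wire.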
As for your example, there are multiple n-bit adder algorithms, and I
suspect that C <= A + B is implemented with a different base algorithm
than the cascade of smaller adders. Your view is that both produce the
sum of two numbers and are therefore functionally identical. But
optimizers are not smart enough to discern a better algorithm from the
logic itself; what they are good at is inferring from the top-level code
which algorithm to implement, and then optimizing the logic for that
choice. A crude example: if you coded a discrete Fourier transform that
produces a frequency response from a set of input numbers, you could not
expect the optimizer to figure out that an FFT would be a better
algorithm. Both give the same overall function but different
implementations, one of which is 'better'. Having said that, I'll admit
I don't know which base algorithm your tool happened to choose, how (or
if) it differed from your hand-coded version, or whether the observed
differences were due to a different algorithm choice at the outset or to
an entity-boundary barrier effect. But you can't assume that different
results from different source code are evidence of an entity-boundary
limitation.
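For reference, the two codings in question look roughly like this (a sketch only; the generic N, entity names, and port names are my assumptions, not anyone's actual source):

```vhdl
-- One-bit full adder cell.
library ieee;
use ieee.std_logic_1164.all;

entity full_adder is
  port (a, b, cin : in  std_logic;
        sum, cout : out std_logic);
end entity;

architecture rtl of full_adder is
begin
  sum  <= a xor b xor cin;
  cout <= (a and b) or (cin and (a xor b));
end architecture;

-- Hierarchical N-bit adder: a ripple cascade of full_adder cells.
library ieee;
use ieee.std_logic_1164.all;

entity adder_ripple is
  generic (N : natural := 8);
  port (a, b : in  std_logic_vector(N-1 downto 0);
        c    : out std_logic_vector(N-1 downto 0));
end entity;

architecture rtl of adder_ripple is
  signal carry : std_logic_vector(N downto 0);
begin
  carry(0) <= '0';
  gen : for i in 0 to N-1 generate
    fa : entity work.full_adder
      port map (a => a(i), b => b(i), cin => carry(i),
                sum => c(i), cout => carry(i+1));
  end generate;
end architecture;

-- Behavioral alternative: the tool picks the base algorithm for '+'
-- (on most FPGAs it will target the dedicated carry logic).
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder_behavioral is
  generic (N : natural := 8);
  port (a, b : in  unsigned(N-1 downto 0);
        c    : out unsigned(N-1 downto 0));
end entity;

architecture rtl of adder_behavioral is
begin
  c <= a + b;
end architecture;
```

If these two synthesize to different logic, that tells you which base implementation the tool inferred from each description, not necessarily that an entity boundary blocked any optimization.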
KJ