Are HDLs Misguided?



rickman

Sometimes I wonder if HDLs are really the right way to go. I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations. But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner? It seems like every time I want to design a circuit I
have to experiment with the exact style to get the logic I want and it
often is a real PITA to make that happen.

For example, I wanted a down counter that would end at 1 instead of 0
for the convenience of the user. To allow a full 2^N range, I thought
it could start at zero and run for the entire range by wrapping around
to 2^N-1. I had coded the circuit using a natural range 0 to
(2^N)-1. I did the subtraction as a simple assignment

foo <= foo - 1;

I fully expected that even if it were flagged as an error in
simulation to load a 0 and let it count "down" to (2^N)-1, it would
work in the real world since I stop the down counter when it gets to
1, not zero. Loading a zero in an N bit counter would work just fine
wrapping around.

But to make the simulation the same as the real hardware I expected to
get, I thought adding some simple code to handle the wrap around might
be good. So the assignment was done modulo 2^N. But the synthesized
size blew up to nearly double what it was without the "mod" function,
mostly additional adders! I didn't have time to explore what
caused this so I just left out the modulo operation and will live with
what I get for the case of loading a zero starting value.

I guess what I am trying to say is I would like to be able to specify
detailed logic rather than generically coding the function and letting
a tool try to figure out how to implement it. This should be possible
without the issues of instantiating logic (vendor specific, clumsy,
hard to read...). In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?

Rick
 

Jan Decaluwe

rickman said:
Sometimes I wonder if HDLs are really the right way to go. I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations. But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner? It seems like every time I want to design a circuit I
have to experiment with the exact style to get the logic I want and it
often is a real PITA to make that happen.

For example, I wanted a down counter that would end at 1 instead of 0
for the convenience of the user. To allow a full 2^N range, I thought
it could start at zero and run for the entire range by wrapping around
to 2^N-1. I had coded the circuit using a natural range 0 to
(2^N)-1. I did the subtraction as a simple assignment

foo <= foo - 1;

I fully expected that even if it were flagged as an error in
simulation to load a 0 and let it count "down" to (2^N)-1, it would
work in the real world since I stop the down counter when it gets to
1, not zero. Loading a zero in an N bit counter would work just fine
wrapping around.

But to make the simulation the same as the real hardware I expected to
get, I thought adding some simple code to handle the wrap around might
be good. So the assignment was done modulo 2^N. But the synthesized
size blew up to nearly double what it was without the "mod" function,
mostly additional adders! I didn't have time to explore what
caused this so I just left out the modulo operation and will live with
what I get for the case of loading a zero starting value.

Ok. So you didn't have time to explore the issue, but you have all the
time in the world to write a lengthy post spreading FUD and jumping to
all kinds of Big Conclusions?

There is, as is commonly known, no reason why a modulo by a power of 2
(hint) would generate additional hardware, and there is overwhelming
evidence that decent synthesis tools get this just right.

Therefore, if you think you see this, the proper reaction is to be
very intrigued and switch to fanatic bug-hunting mode. Do that please
(or trick others into doing it for you). Chances are that we will not
hear about the issue again.

All the rest is a waste of everybody's time.
I guess what I am trying to say is I would like to be able to specify
detailed logic rather than generically coding the function and letting
a tool try to figure out how to implement it. This should be possible
without the issues of instantiating logic (vendor specific, clumsy,
hard to read...). In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 

jacko

Sometimes I wonder if HDLs are really the right way to go.  I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations.  But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

I find the tools weird sometimes, but they have their own style for
logic minimization. Like I only considered doubling the memory size by
having two routines to do 2 hi bits of jump, and then use bytes
instead of 16 bit words. Strange but it also makes the hardware
smaller!!

I have also been considering presetting special values in the cycle
before a general load, instead of an if/else in the same cycle.

I also think the hardest part is specifying to the synthesis tool how
external memory supplies a result after an access delay, and how to
make this delay relative to the synthesized fmax, not just in ns.

Cheers Jacko
 

rickman

I usually find that the gyrations are a hint to step back and see what aspect of
the design I have missed... I end up needing a few, but YMMV.



Here the issue appears to be how to get at the carry out of a counter...
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).

Well, no, I'm not trying to force the tool to generate a carry out
since I am not using it for anything. I just want a simple counter
and logic to make it detect a final count value of 1. I am pretty
sure I would have gotten that from my original code. But I also want
the counter to roll over to zero at the max value of the counter which
will give me a max count range of 2**N by specifying a value of 0 in
the limit register. To use the carry out for the final count
detection I would have to require the user to program M-1 rather than
programming M or 0 for max M.

It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler. Even if the obvious
optimisation doesn't happen (and I bet it does) it's worth asking if your design
is sensitive to the cost of that FF.

Can you boil down what you are trying to do (and doesn't work) into a test case?

Jan doesn't get what I am saying. I'm not that worried about the
particulars of this case. I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself. I am an old school hardware designer. I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware. Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools. Sometimes I just get tired of having to work
around the tools rather than with them.

I would say so. And I'm still hoping to live to see it!

I'm not sure I will still be working then if it ever happens. As
hardware becomes more and more cost efficient, I think there is less
incentive to make the tools hardware efficient. I guess speed will
always be important, and minimal hardware is usually the fastest.
But that's not the case when the tools are doing the optimization. I
recently reduced my LUT count 20% by changing the optimization from
speed to area.

Rick
 

rickman

I also think the hardest part is specifying to the synthesis tool how
external memory supplies a result after an access delay, and how to
make this delay relative to the synthesized fmax, not just in ns.

I'm not sure what you are trying to do, but you should be able to
specify a delay in terms of your target fmax. Just define a set of
constants that calculate the values you want. I assume you mean a
delay value to use in simulation such as a <= b after x ns?
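Something along these lines, for example (an untested sketch; the constant names are made up):

```vhdl
-- Derive simulation delays from the target fmax instead of
-- hard-coding nanoseconds (sketch only; fmax_c etc. are my names).
constant fmax_c  : real := 100.0e6;         -- target fmax in Hz
constant t_clk_c : time := 1 sec / fmax_c;  -- one clock period
constant t_mem_c : time := 3 * t_clk_c;     -- e.g. a 3-cycle memory access

-- then, in the memory model:
a <= b after t_mem_c;
```

If you change the target fmax, the delays scale with it automatically.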

Rick
 

Jan Decaluwe

rickman said:
Jan doesn't get what I am saying. I'm not that worried about the
particulars of this case. I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself. I am an old school hardware designer. I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware. Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools. Sometimes I just get tired of having to work
around the tools rather than with them.

Ok, let's talk about the overall message then.

I remember an article from the early days where some guy "proved" that
HDL-based design would never overthrow schematic entry, because
it is obviously better to describe what something *is* than what
it *does*. All ideas come back, also the bad ones :)

HDL-based design was adopted by old school hardware designers, for
lack of other ones. They must have been extremely skeptical. How
did it happen? Synopsys took manually optimized designs from
expert designers and showed that Design Compiler consistently
made them both smaller and faster, and permitted trade-off
optimizations between the two. The better result was obviously
*not* like the original designer imagined it.

The truth is that HDL-based design works better in all respects
than handcrafted logic. It is a no-compromises-required technology,
which is very rare.

Look no further than this newsgroup for active designers who understand
this very well. Their designs are probably every bit as efficient as yours.
Yet they use coding styles that are much more abstract, and they
are certainly not concerned about where the last mux or
carry-out goes.

In other words, when you make claims about inefficiencies and requirements
to fight tools all the time, you had better come up with some very
strong examples - the evidence is against you.

What do you give us? A vague problem with an example of a modulo
operation on a decrementer. Instead of posting the code and
resolving the issue immediately, you give some verbose description
in prose so that we can all now start the guessing game. The example
has a critical problem, but you don't know what it is and you refuse
to track it down. Yet you still refer to it to back up your claims.

If that is your standard, why should I take any of your
claims seriously?

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 

Jan Decaluwe

rickman said:
Jan doesn't get what I am saying. I'm not that worried about the
particulars of this case.

I'm sorry to bother you with this again, but I am actually worried.
From your description, I tried to reproduce your problem, to no avail.
With or without modulo, it doesn't make the slightest difference.
(Quartus Linux web edition v.10.0).

Perhaps you stumbled on some problematic use case that
we definitely should know about. After all, HDL-based design is
not about specifying an exact gate level implementation, but
about understanding which patterns work well. Perhaps you stumbled
upon a pattern that doesn't and that we should avoid. Please
post your code. Let's not spoil an opportunity to advance the
state of the art.

Of course, you may have good reasons not to post your code, for
example because you found a bug in the mean time. Perhaps you
did modulo 2^N - 1 instead of 2^N, just to mention a mistake that
I once made. Let us know, so that we can stop worrying.
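For reference, this is the kind of code I used for the experiment - my own
sketch of what you describe, not your code, so the names and details are
guesses:

```vhdl
-- N-bit down counter per the description: loading 0 gives the full
-- 2**N range by wrapping to 2**N - 1, and the count stops at 1.
-- Assumes: foo, start : natural range 0 to 2**N - 1.
process (clk)
begin
  if rising_edge(clk) then
    if load = '1' then
      foo <= start;               -- start = 0 means the full 2**N range
    elsif foo /= 1 then
      foo <= (foo - 1) mod 2**N;  -- mod by a power of 2: plain truncation
      -- mod (2**N - 1) by mistake WOULD infer real extra hardware
    end if;
  end if;
end process;
```

With or without the mod, this gives me the same synthesis result.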

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 

Alessandro Basili

Sometimes I wonder if HDLs are really the right way to go. [snip]
I guess what I am trying to say is I would like to be able to specify
detailed logic rather than generically coding the function and letting
a tool try to figure out how to implement it. This should be possible
without the issues of instantiating logic (vendor specific, clumsy,
hard to read...). In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?

IMHO using HDL as if we were doing schematic entry is rather limiting
and does not provide the high-level abstraction which is so powerful
in terms of description, implementation and maintenance of the code.
I found the following readings very inspiring:

http://www.designabstraction.co.uk/Articles/Advanced Synthesis Techniques.htm

and

http://mysite.ncnetwork.net/reszotzl/uart.vhd

Al
 

d_s_klein

Sometimes I wonder if HDLs are really the right way to go.  I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations.  But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.


Back in the 8086 days, I had to do the same thing with compilers. I
spent a fair amount of time learning how the code generation phase
worked so I could get the tools to work properly. I remember a "brand-
name" 'C' compiler very carefully generating code to keep the loop
control variable in the CX register, then at the end of the loop
moving CX to AX and adding minus-one. (For those that don't program
86's in assembly, the CX register is a special purpose register for
the 'LOOP' instruction.) Now that processors are real fast, and
memory is very cheap, I don't worry so much about efficient code.

I would draw a parallel to the frustrations you are having with HDLs.
Another parallel I would draw is that when I wanted 'very fast tight
code' I would code certain modules in assembly, and link them with 'C'
routines. When the synthesizer just won't get it right, I draw it a
picture. (Which is one advantage the Acme-Brand tool has over the
Brand-X tool)

Yeah, it's not portable, and it isn't "right". But I'm getting
product out the door.

RK.
 

Andy

Here the issue appears to be how to get at the carry out of a counter...
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).

It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler.

I have found the 1st approach far simpler, by using a natural subtype
for the counter. Then (count - 1 < 0) is the carry out for a down
counter. Similarly, (count + 1 > 2**n-1) is the carry out for an n bit
up counter. No fighting required.
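In code, roughly (a sketch; the declarations are mine):

```vhdl
-- The natural-subtype style: the comparison *is* the carry out.
-- Assumes: count : natural range 0 to 2**n - 1, in an architecture body.
borrow <= '1' when count - 1 < 0 else '0';        -- down counter carry out
carry  <= '1' when count + 1 > 2**n - 1 else '0'; -- up counter carry out
```

The subtraction in the comparison evaluates as a plain integer, so count - 1
can legally be -1 there; only an assignment back to count would check the
subtype bounds.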

Andy
 

Andy

I'm not that worried about the
particulars of this case.  I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself.  I am an old school hardware designer.  I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware.  Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools.  Sometimes I just get tired of having to work
around the tools rather than with them.

I seem to recall a similar argument when assemblers gave way to higher
level language compilers...

This change in digital hardware design is not unlike the change from
one-man furniture shops to furniture factories. The craftsmen of the
one-man shops painstakingly treated every detail as critical to their
product: a chair. And the result was an exquisite piece of furniture,
albeit at a very high price, and very low volume (unless you hired a
lot of one man shops at the same time).

Circuit designers are no different (being one myself, dating back to
those "I can do that function in one less part" days gone by). But the
target has changed. We no longer need a chair, we need a stadium full
of them. And we need the elevators, climate control, fire suppression,
lighting, and all the other support systems, to go along with them.

Perhaps we should take a step back, and look at what we really need
(hint: a place for a lot of people to watch an event, while seated
most of the time). Now I can optimize my stadium to recognize that all
of my seats don't need to be finely crafted pieces of furniture. But I
don't know that until I focus on the requirements: "What must my
project do?" So, instead of finding a way to describe the project as
a collection of specific chairs, elevators and fire extinguishers, we
need to describe it as a set of desired behaviors, and then, through
some process (hopefully semi-automated), convert that description into
an optimized design for the stadium. Could the craftsman and his tools
have done that?

What do you want from the tools, a collection of exquisitely crafted
chairs, or an efficient stadium?

Andy
 

Christopher Head


Am I missing something, or is the transmitter slightly flawed in this
code? I seem to see the following:

1. At some point, TxState_v is SEND, and you reach TxBitSampleCount_v =
tic_per_bit_g and hence bit_done is true. Also, TxBitCount_v is 7.

2. You enter the "if" block in the SEND case in procedure tx_state. You
set TxBitSampleCount_v to 0, serial_out_v to Tx_v(TxBitCount_v) =
Tx_v(7). You set TxBitCount_v to TxBitCount_v+1 = 8. You notice that
TxBitCount_v=char_len_g=8 and hence set TxState_v to STOP.

3. tic_per_bit_g clocks later, you enter the "if" block in the STOP
case. You set serial_out_v to '1' and TxState_v to IDLE.

4. From this moment, if the application queries the status register,
you will see that TxState_v is IDLE and hence report transmitter ready.
The application could thus immediately strobe another byte of data into
the transmit data register. Then tx_state will transition to TxState_v
= START and, on the next clock, set serial_out_v to '0'.

Problem: this might not have been a full bit-time since you started
sending the '1' stop bit! You never actually guarantee to wait for the
full stop bit to pass before accepting new data from the application in
the transmit data register!

Or am I missing something?
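If I'm reading it right, the fix would be something like making the STOP
state last a full bit time before returning to IDLE - a sketch only, reusing
the names from the code (I may well be misreading it):

```vhdl
when STOP =>
  serial_out_v := '1';                          -- drive the stop bit
  if TxBitSampleCount_v = tic_per_bit_g then    -- only after a full bit time...
    TxBitSampleCount_v := 0;
    TxState_v := IDLE;                          -- ...report transmitter ready
  else
    TxBitSampleCount_v := TxBitSampleCount_v + 1;
  end if;
```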
Chris
 

rickman

I have found the 1st approach far simpler, by using a natural subtype
for the counter. Then (count - 1 < 0) is the carry out for a down
counter. Similarly, (count + 1 > 2**n-1) is the carry out for an n bit
up counter. No fighting required.

I won't argue that, both of these will utilize the carry out of an
adder. But that may or may not be the same adder I am using to update
count with. I have looked at the logic produced and at some time
found two, apparently identical adder chains used, one of which had
all outputs unconnected other than the carry out of the top and the
other used the sum outputs to feed the register with the top carry
ignored. Sure, there may have been something about my code that
prevented these two adders being merged, but I couldn't figure out
what it was.

I see a number of posts that don't really get what I am trying to
say. I'm not arguing that you can't do what you want in current
HDLs. I am not saying I want to use something similar to assembly
language to provide the maximum optimization possible. I am saying I
find it not infrequent that HDL gives nothing close to optimal results
because the coding style required was not obvious. I'm saying that it
seems like it should be easier to get the sort of simple structures
that are commonly used without jumping through hoops.

Heck, reading the Lattice HDL user guide (not sure if that is the
actual name or not) they say you shouldn't try to infer memory at all,
instead you should instantiate it! Memory seems like it should be so
easy to infer...

I don't know Verilog that well, but I do know VHDL is a pig in many
ways. It just seems like it could have been much simpler rather than
being such a pie-in-the-sky language.

Rick
 

rickman

Back in the 8086 days, I had to do the same thing with compilers.  I
spent a fair amount of time learning how the code generation phase
worked so I could get the tools to work properly.  I remember a "brand-
name" 'C' compiler very carefully generating code to keep the loop
control variable in the CX register, then at the end of the loop
moving CX to AX and adding minus-one.  (For those that don't program
86's in assembly, the CX register is a special purpose register for
the 'LOOP' instruction.)  Now that processors are real fast, and
memory is very cheap, I don't worry so much about efficient code.

I would draw a parallel to the frustrations you are having with HDLs.
Another parallel I would draw is that when I wanted 'very fast tight
code' I would code certain modules in assembly, and link them with 'C'
routines.  When the synthesizer just won't get it right, I draw it a
picture.  (Which is one advantage the Acme-Brand tool has over the
Brand-X tool)

Yeah, it's not portable, and it isn't "right".  But I'm getting
product out the door.

RK.

I never said I don't get a working design. I just feel that HDLs are
more complex than useful.

BTW, I don't agree with the analogy between HDLs and compilers. For
one, you are considering the case of PCs where speed and memory are
virtually unlimited. My apps tend to be more like coding for a PIC
with 8K Flash and 1K RAM. A perfect target for a Forth cross-
compiler, but likely a poor target for a C compiler.

Where is the Forth equivalent for hardware design?

Rick
 

Tricky

I won't argue that, both of these will utilize the carry out of an
adder.  But that may or may not be the same adder I am using to update
count with.  I have looked at the logic produced and at some time
found two, apparently identical adder chains used, one of which had
all outputs unconnected other than the carry out of the top and the
other used the sum outputs to feed the register with the top carry
ignored.  Sure, there may have been something about my code that
prevented these two adders being merged, but I couldn't figure out
what it was.

I see a number of posts that don't really get what I am trying to
say.  I'm not arguing that you can't do what you want in current
HDLs.  I am not saying I want to use something similar to assembly
language to provide the maximum optimization possible.  I am saying I
find it not infrequent that HDL gives nothing close to optimal results
because the coding style required was not obvious.  I'm saying that it
seems like it should be easier to get the sort of simple structures
that are commonly used without jumping through hoops.

Heck, reading the Lattice HDL user guide (not sure if that is the
actual name or not) they say you shouldn't try to infer memory at all,
instead you should instantiate it!  Memory seems like it should be so
easy to infer...

I don't know Verilog that well, but I do know VHDL is a pig in many
ways.  It just seems like it could have been much simpler rather than
being such a pie-in-the-sky language.

Rick

From all this reading, I'm guessing it's not a problem with the language
you have, it's more the synthesizers.

So my two thoughts:

1. Try AHDL - it's pretty explicit (but you'll be stuck with Altera).
2. Instead of getting pissed off with the tools and pretending it's an
HDL problem, how about raising the issue with the vendors and asking
them why they've done it the way they have.

Personally, I have never had too much of a problem with the tools. The
firmware works as I intend. I'm not usually interested in the detail
because it works, it ships, the customer pays and we make a profit. I
don't care if a counter has used efficient carry-out logic or not - it
works and that's all the customer cares about. When it's working, or I
have fit problems, I can then go into the finer detail.
 

Jan Decaluwe

Andy said:
I seem to recall a similar argument when assemblers gave way to higher
level language compilers...

This change in digital hardware design is not unlike the change from
one-man furniture shops to furniture factories. The craftsmen of the
one-man shops painstakingly treated every detail as critical to their
product: a chair. And the result was an exquisite piece of furniture,
albeit at a very high price, and very low volume (unless you hired a
lot of one man shops at the same time).

Circuit designers are no different (being one myself, dating back to
those "I can do that function in one less part" days gone by). But the
target has changed. We no longer need a chair, we need a stadium full
of them. And we need the elevators, climate control, fire suppression,
lighting, and all the other support systems, to go along with them.

Perhaps we should take a step back, and look at what we really need
(hint: a place for a lot of people to watch an event, while seated
most of the time). Now I can optimize my stadium to recognize that all
of my seats don't need to be finely crafted pieces of furniture. But I
don't know that until I focus on the requirements: "What must my
project do?" So, instead of finding a way to describe the project as
a collection of specific chairs, elevators and fire extinguishers, we
need to describe it as a set of desired behaviors, and then, through
some process (hopefully semi-automated), convert that description into
an optimized design for the stadium. Could the craftsman and his tools
have done that?

What do you want from the tools, a collection of exquisitely crafted
chairs, or an efficient stadium?

This analogy suggests the need for a compromise, which I think isn't
there.

I don't see a case where the schematic entry craftsman can
realistically hope to beat the guy with the HDL tools. For example,
for smallish designs, it can be shown that logic synthesis can
generate a solution close to the optimum, *regardless* of the quality
of the starting point. The craftsman can draw any pictures he wants,
even if the tool guy writes the worst possible code, the synthesis
result will still be as good or better.

Of course, for realistic, larger designs, the structure of the
input code becomes more and more significant. But thanks to
powerful heuristics, local optimization algorithms, and the
ability to recognize higher level structures, this is a
gradual process. In contrast, the craftsman's ability to
cope with complexity quickly deteriorates beyond a certain
point. As a result, he has to rely on logic-wise inefficient
strategies, such as excessive hierarchy.

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 

Jan Decaluwe

rickman said:
I won't argue that, both of these will utilize the carry out of an
adder. But that may or may not be the same adder I am using to update
count with. I have looked at the logic produced and at some time
found two, apparently identical adder chains used, one of which had
all outputs unconnected other than the carry out of the top and the
other used the sum outputs to feed the register with the top carry
ignored. Sure, there may have been something about my code that
prevented these two adders being merged, but I couldn't figure out
what it was.

I see a number of posts that don't really get what I am trying to
say.

Probably because many people don't see what you say you are seeing,
so they must think you don't have a case.

I'm not arguing that you can't do what you want in current
HDLs. I am not saying I want to use something similar to assembly
language to provide the maximum optimization possible. I am saying I
find it not infrequent that HDL gives nothing close to optimal results
because the coding style required was not obvious. I'm saying that it
seems like it should be easier to get the sort of simple structures
that are commonly used without jumping through hoops.

Heck, reading the Lattice HDL user guide (not sure if that is the
actual name or not) they say you shouldn't try to infer memory at all,
instead you should instantiate it! Memory seems like it should be so
easy to infer...

I don't know Verilog that well, but I do know VHDL is a pig in many
ways. It just seems like it could have been much simpler rather than
being such a pie-in-the-sky language.

I think Verilog will suit you better as a language, you really should
consider switching one of these days. However, there is no reason why
it would help you with the issues that you say you are seeing here.

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 

Andy

This analogy suggests the need for a compromise, which I think isn't
there.

I don't see a case where the schematic entry craftsman can
realistically hope to beat the guy with the HDL tools. For example,
for smallish designs, it can be shown that logic synthesis can
generate a solution close to the optimum, *regardless* of the quality
of the starting point. The craftsman can draw any pictures he wants,
even if the tool guy writes the worst possible code, the synthesis
result will still be as good or better.

Of course, for realistic, larger designs, the structure of the
input code becomes more and more significant. But thanks to
powerful heuristics, local optimization algorithms, and the
ability to recognize higher level structures, this is a
gradual process. In contrast, the craftsman's ability to
cope with complexity quickly deteriorates beyond a certain
point. As a result, he has to rely on logic-wise inefficient
strategies, such as excessive hierarchy.


I've seen too many examples where a bit more performance can be
obtained by either tweaking the code, or "hard-coding" the solution.
They are getting fewer and farther between, but they are still there.
My point was that the extra performance is seldom, but not never,
needed, and on a larger scale, letting the synthesis tool do the heavy
lifting results in a better overall design MOST of the time.

There ain't no 100% solutions. If you try to hard-code 100%, you
lose; if you try to let the synthesis tool do 100%, you lose.
Compromise is necessary.

Andy
 
Andy

I won't argue that, both of these will utilize the carry out of an
adder.  But that may or may not be the same adder I am using to update
count with.  I have looked at the logic produced and at some time
found two, apparently identical adder chains used, one of which had
all outputs unconnected other than the carry out of the top and the
other used the sum outputs to feed the register with the top carry
ignored.  Sure, there may have been something about my code that
prevented these two adders being merged, but I couldn't figure out
what it was.

I have very seldom had that problem (two adders), and when I do, the
reason (usually some extra condition on one adder that was not there
on the other) is easily found and fixed.

To be fair, the synthesis tools do not always implement a simple carry
chain for either, but what they do implement is at least as fast and
compact as a simple carry chain.
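For what it's worth, one common idiom for coaxing a single carry chain out of a down counter is to widen the counter by one bit and take the terminal flag from the extra MSB (the borrow out), instead of writing a separate comparator that the tool may or may not merge with the decrementer. A minimal sketch, with made-up entity and port names:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity down_counter is
  generic (N : positive := 8);
  port (
    clk  : in  std_logic;
    load : in  std_logic;
    init : in  unsigned(N-1 downto 0);
    done : out std_logic);
end entity down_counter;

architecture rtl of down_counter is
  -- N+1 bits: the top bit is the borrow out of the single decrementer
  signal count : unsigned(N downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if load = '1' then
        count <= '0' & init;
      elsif count(N) = '0' then
        count <= count - 1;  -- one subtractor; underflow sets the MSB
      end if;
    end if;
  end process;
  done <= count(N);  -- terminal flag comes off the chain, not a comparator
end architecture rtl;
```

Whether a particular tool actually maps this onto one chain still has to be checked in the technology view, of course.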

I see a number of posts that don't really get what I am trying to
say.  I'm not arguing that you can't do what you want in current
HDLs.  I am not saying I want to use something similar to assembly
language to provide the maximum optimization possible.  I am saying I
find it not infrequent that HDL gives nothing close to optimal results
because the coding style required was not obvious.  I'm saying that it
seems like it should be easier to get the sort of simple structures
that are commonly used without jumping through hoops.

I think "obvious" is related closely to the experience level of the
observer. What appears obvious to me may not be obvious (yet) to you.

Heck, reading the Lattice HDL user guide (not sure if that is the
actual name or not) they say you shouldn't try to infer memory at all,
instead you should instantiate it!  Memory seems like it should be so
easy to infer...

Either their documentation is woefully out of date, or their synthesis
tool is far from the state of the industry.
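For comparison, the usual inferable single-port RAM template is short, and most mainstream synthesis tools will happily map something like the following (names are illustrative) onto block RAM:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity simple_ram is
  generic (ADDR_W : positive := 10;
           DATA_W : positive := 8);
  port (
    clk  : in  std_logic;
    we   : in  std_logic;
    addr : in  unsigned(ADDR_W-1 downto 0);
    din  : in  std_logic_vector(DATA_W-1 downto 0);
    dout : out std_logic_vector(DATA_W-1 downto 0));
end entity simple_ram;

architecture rtl of simple_ram is
  type ram_t is array (0 to 2**ADDR_W - 1)
    of std_logic_vector(DATA_W-1 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(addr)) <= din;
      end if;
      dout <= ram(to_integer(addr));  -- registered read allows block RAM
    end if;
  end process;
end architecture rtl;
```

If a vendor's tool can't handle a template at this level, that does suggest it is behind the curve.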

I don't know Verilog that well, but I do know VHDL is a pig in many
ways.  It just seems like it could have been much simpler rather than
being such a pie-in-the-sky language.

Trust me; about 18 years ago, I was in exactly the same boat WRT
VHDL synthesis. Why do I need this much "stuff" to do what I can do in
a sheet of schematics, and know almost without thinking about it that
it is correct? And at the time, the synthesis tools were not nearly as
good as they are now, and FPGA performance was also not nearly as
good, so much more of my typical design work needed to be done at a
lower level.

But synthesis and FPGA performance are now vastly superior to what they
were then, and I can do the vast majority of a design without having
to deal too much at the gates and flops level. It is a difficult
paradigm to embrace when you come from a structural, schematic based
design background.

As for whether Verilog might work better for you, perhaps. I know I
could not be nearly as productive with the constraints of what you
cannot do in Verilog at the higher levels of abstraction. Something
like the fixed- and floating-point capabilities of VHDL is completely
beyond the capabilities of Verilog without major changes to the
language, but all it took was a couple of packages in VHDL. The only
problem any synthesis tools had with it was tied to the fact that they
assumed (improperly) that only non-negative indices would ever be used
for vectors.
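To illustrate that negative-index point: the fixed-point package (ieee.fixed_pkg, standardized with VHDL-2008) places fractional bits at indices below zero. A minimal sketch, with illustrative names:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.fixed_pkg.all;  -- VHDL-2008 fixed-point package

entity fixed_add is
  port (
    a, b : in  sfixed(3 downto -4);   -- 4 integer bits, 4 fraction bits
    y    : out sfixed(4 downto -4));  -- sum gains one integer bit
end entity fixed_add;

architecture rtl of fixed_add is
begin
  -- fixed_pkg sizes the result automatically; the fractional bits sit
  -- at negative vector indices, which is what tripped up older tools
  y <= a + b;
end architecture rtl;
```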

Andy
 
rickman

From all this reading, I'm guessing it's not a problem with the language
you have, it's more the synthesizers.

So my two thoughts:

1. Try AHDL - it's pretty explicit (but you'll be stuck with Altera).
2. Instead of getting pissed off with the tools and pretending it's an
HDL problem, how about raising the issue with the vendors and asking
them why they've done it the way they have.

Personally, I have never had too much of a problem with the tools. The
firmware works as I intend. I'm not usually interested in the detail
because it works, it ships, the customer pays and we make a profit. I
don't care if a counter has used efficient carry-out logic or not - it
works and that's all the customer cares about. When it's working, or I
have fit problems, I can then go into the finer detail.

Ok, so there ARE times when you care if the synthesis uses two carry
chains instead of one, or if it used a set of LUTs to check the
terminal case rather than the carry chain. You are just saying that is
not so often. That's not the same as saying, "It works, it ships".

The same is true for most people I'm sure and I expect no small number
of them virtually never look past the HDL. But when you do need speed
or minimal area this can be important. I recently did a design where
I had to cram five pounds of logic into a four-pound FPGA. It ended up
working, but I had to optimize every module. That worked out ok for
the most part as I typically test bench each module anyway, so I just
had to do a synthesis on it as well and work the code to get a good
size result. That is how I found things like double carry chains,
etc.

That is also why I am now wondering why we have a language that is so
far removed from the end product. I don't agree that this is a matter
for the synthesis vendors. Like I said somewhere else, if you want a
particular solution, the vendors tell you to instantiate.
Instantiation is very undesirable since it is not portable across
vendors and often not portable across product lines within a vendor!
The language is flexible, that's for sure. But it seems to be
flexible without purpose. Many of the changes currently being
suggested for VHDL are meant to provide an easier-to-use language by
getting a bit closer to what engineers want to use it for. I just
think that in many ways the language is way too far removed from what
we want to do with it.

Rick
 
