So I define:
constant MAX_COUNT: natural := 42;
signal my_counter: natural range 0 to MAX_COUNT;
Then I can simply increment the counter with my_counter <= my_counter + 1, and compare it to numeric constants like 0, 1, and MAX_COUNT.
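Put together, a minimal synchronous version of such a counter might look like the sketch below (the clk/reset ports, the entity name, and the wrap-around behavior are my assumptions, not part of the original post):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity counter_demo is
  port (
    clk   : in std_logic;
    reset : in std_logic
  );
end entity;

architecture rtl of counter_demo is
  constant MAX_COUNT : natural := 42;
  signal my_counter  : natural range 0 to MAX_COUNT;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if reset = '1' then
        my_counter <= 0;
      elsif my_counter = MAX_COUNT then
        -- wrap explicitly; incrementing past the declared range
        -- would be a bounds violation in simulation
        my_counter <= 0;
      else
        my_counter <= my_counter + 1;
      end if;
    end if;
  end process;
end architecture;
```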
The interesting part comes when there's a need to convert back from natural to std_logic_vector. I prefer not to mix arithmetic libraries and use solely ieee.numeric_std, so my conversion is:
my_slv <= std_logic_vector(to_unsigned(my_counter, 6));
(Now that I think of it, I could use my_slv'length or some such attribute; would that be synthesizable?)
This also works just fine and synthesizes without trouble:
my_slv <= std_logic_vector(to_unsigned(my_counter, my_slv'length));
In fact, wherever you would otherwise have to hard-code the length, the MSB, the LSB, etc. of a vector, you should probably pause for a second and consider using the appropriate signal attribute instead, as shown above for 'length (i.e. 'length, 'left, 'right, etc.).
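For completeness, the attribute-based conversion works in both directions; a small sketch, assuming my_slv is a std_logic_vector and ieee.numeric_std is in scope:

```vhdl
-- natural -> std_logic_vector, width taken from the target signal
my_slv <= std_logic_vector(to_unsigned(my_counter, my_slv'length));

-- std_logic_vector -> natural, should the reverse ever be needed
my_counter <= to_integer(unsigned(my_slv));
```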
What are other people's approaches when implementing counters ?
Same as what you're talking about. I tend to use naturals, since most counters count from 0 up to something, but if I need something that counts negative I'll use integer. In other words, you can safely use the appropriate data type without fear of retribution from most synthesis tools.
From a synthesis perspective, make sure you define the range completely, since

signal Counter: integer;

will synthesize to a 32-bit counter, since no range is specified. If Counter only counts from 0 to 7, you'll end up with 29 bits getting synthesized that always result in 0. The synthesis tool uses the range to figure out how many bits are needed to implement the counter; I haven't found any that will optimize away those upper 29 bits in this example.
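For the 0-to-7 example above, the fully constrained declaration would be:

```vhdl
signal Counter: integer range 0 to 7;  -- only 3 bits synthesized instead of 32
```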
One kind of clumsy thing, though, is when the range needs to be somewhat generic and take its value from the generic map, AND you need to be able to convert to/from std_logic_vector, since now the width of the std_logic_vector and the range of the counter will both vary as a function of the generic. In that case, I'll bring in the width of the std_logic_vector version of the counter as the generic and define the range of the counter in terms of that generic.
Ex. If 'N_Bits' is the name of the generic input to the entity then
signal Counter_Slv: std_ulogic_vector(N_Bits - 1 downto 0);
signal Counter: natural range 0 to 2**N_Bits - 1;
The 'problem' is simply one of usage and documentation. When you document
how this generic is used, you'll end up saying something to the effect that
you need to set 'N_Bits' to the base-2 log of .... In fact, N_Bits, as I've named it here, is probably NOT a good name to use.
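Tying those pieces together, a sketch of such a generic-width counter might look like the following (the entity name, ports, and free-running wrap are illustrative assumptions; note the range runs to 2**N_Bits - 1 so it exactly matches the vector's width):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity generic_counter is
  generic (
    N_Bits : positive := 8
  );
  port (
    clk         : in  std_logic;
    Counter_Slv : out std_ulogic_vector(N_Bits - 1 downto 0)
  );
end entity;

architecture rtl of generic_counter is
  signal Counter : natural range 0 to 2**N_Bits - 1;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- free-running counter that wraps at the top of its range
      Counter <= (Counter + 1) mod 2**N_Bits;
    end if;
  end process;

  -- width of the conversion comes from the target signal's attribute
  Counter_Slv <= std_ulogic_vector(to_unsigned(Counter, Counter_Slv'length));
end architecture;
```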
Not that you should write your own FIFO, but if you did, one of the
parameters you would probably want to bring out is a generic that specifies
the depth of the FIFO. So to someone trying to USE your nice new FIFO
design, they would probably immediately grasp that a generic called
'FIFO_Depth' represents the depth of the FIFO. But if you follow the above
approach, what you would bring out as the generic would actually be
log2(FIFO_Depth). You could get into calling the generic 'FIFO_Depth_Bits'
or something and say that this is the number of bits needed to represent
FIFO_Depth or maybe 'FIFO_Depth_Log2' and say that the actual depth of the
FIFO is 2**FIFO_Depth_Log2. So to make a 256 entry FIFO one would need to
set this generic to 8. My preference would be to name it something like
Log2_Fifo_Depth and document it as being log2() of the desired depth of the
FIFO. As you come up with names, remember the perspective of someone who is trying to use your code but is not as intimately familiar with it as you are.
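As a sketch of that preference, the FIFO's entity declaration might read as follows (everything besides the generic name and its documentation is a hypothetical illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity fifo is
  generic (
    -- Log2 of the desired FIFO depth: actual depth is
    -- 2**Log2_Fifo_Depth, so 8 => a 256-entry FIFO
    Log2_Fifo_Depth : positive := 8
  );
  port (
    clk   : in  std_logic;
    wr_en : in  std_logic;
    rd_en : in  std_logic;
    din   : in  std_logic_vector(7 downto 0);
    dout  : out std_logic_vector(7 downto 0);
    full  : out std_logic;
    empty : out std_logic
  );
end entity;
```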
It's not at all difficult to grasp when you're both writing the
entity/architecture of the new component AND writing the code that
instantiates that component since you're on both sides of the fence and
obviously know what is needed. If you have no visibility into the
entity/architecture though and are now trying to use that code then having
to specify the log2 of the real thing that you would like to specify is not
terribly intuitive. Calling that generic something that has to do with the
number of bits of something is even less intuitive. Since the code you write may well be picked up by someone who just wants to use it, not dig into and completely understand it (i.e. code reuse), be careful how you name those generics and make it painfully clear how the user should set each one.
KJ