VHDL refactoring tools

c d saunter

Greetings All,

I've just spent 20 mins editing 12 VHDL files to add two signals and route
them up and down a design hierarchy.

Tedious and not exactly rocket science.

Is anyone aware of any refactoring tools out there to automate such
processes?

Regards,
Chris
 
Andy

Greetings All,

I've just spent 20 mins editing 12 VHDL files to add two signals and route
them up and down a design hierarchy.

Tedious and not exactly rocket science.

Is anyone aware of any refactoring tools out there to automate such
processes?

Regards,
Chris

Well this method is not perfect, but it can help: Declare a record
type (either one and make all signals inout, or three for in, out,
inout) in a package and use that record type on the ports. Then you
can add/delete elements to the record by editing the package. This
typically works best for a bus or group of related signals that
typically go anywhere as a group. The entities may have multiple
record ports if they attach to multiple buses or signal groups.
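
For concreteness, a minimal sketch of the technique, with illustrative
names; adding an element to ctrl_bus_t means editing only the package:

library ieee;
use ieee.std_logic_1164.all;

package ctrl_bus_pkg is
  type ctrl_bus_t is record        -- edit here to add/delete signals
    req  : std_logic;
    ack  : std_logic;
    data : std_logic_vector(7 downto 0);
  end record;
end package ctrl_bus_pkg;

library ieee;
use ieee.std_logic_1164.all;
use work.ctrl_bus_pkg.all;

entity mid_block is
  port (clk  : in    std_logic;
        ctrl : inout ctrl_bus_t);  -- the whole group rides one port
end entity mid_block;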

It would be REALLY NICE if VHDL had user defined modes for record
types such that different elements of a record type port could have
different modes. Those user defined modes could then be used in a port
specification with a port of that type. You could define multiple
modes for the same type (i.e. master, slave for a bus).
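
(VHDL-2019 eventually standardized exactly this feature as "mode views".
A sketch of that syntax, with illustrative names:)

type bus_t is record
  req : std_logic;
  ack : std_logic;
end record;

view master_v of bus_t is            -- per-element modes for one record type
  req : out;
  ack : in;
end view;

alias slave_v is master_v'converse;  -- complementary modes for free
-- on an entity: port (b : view master_v);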

Besides that, I strongly recommend not using components/configurations
if you do not need the configurability. Directly instantiate the
entity/architectures, and then you don't have any components or
configurations to keep up to date.
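
That is, something like this (names illustrative):

u_fifo : entity work.fifo(rtl)     -- direct instantiation: no component
  generic map (DEPTH => 512)       -- declaration or configuration needed
  port map (clk  => clk,
            din  => din,
            dout => dout);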

Andy
 
kennheinrich

I'm not sure I know exactly what you mean by "refactoring"
in this context. If you want to rearrange the hierarchy to
make such changes simpler then, no, I don't know of
anything that does it automatically. Design capture
tools such as HDL Designer might help, at a price.

I'm intrigued by the problem you describe. I agree that it
is neither exciting nor cutting-edge work, but I'm interested
to know why you need to do it at all.

If you merely need a temporary probe on a couple of internal
nodes, there are other ways to do it - not pretty ways,
but they exist and they work. No need to drill your
probe signals all the way through your hierarchy.
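
For example, a VHDL-2008 external name lets a testbench reach an internal
node without any ports. A sketch, assuming an instance path dut.u_core,
an internal signal node_int, and that dut is elaborated before the probe:

probe_mon : process is
  alias probe_node is << signal dut.u_core.node_int : std_logic >>;
begin
  wait on probe_node;
  report "probe_node = " & std_logic'image(probe_node);
end process;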

If it's a more fundamental change, then it raises a much
bigger question about the design of component hierarchy
and why you got the interfaces between components wrong
in the first place (that's not a criticism, just a bald
statement of the problem). It's certainly happened to me,
but in pretty much every case it has been self-inflicted
pain caused by carelessly building the hierarchy at too
fine a granularity. Procedures typically provide a much
cleaner and more flexible way to decompose functionality
than the heavy use of instance hierarchy, and a procedural
interface is usually much easier to extend or tweak than
a port interface.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
(e-mail address removed)  http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.

Jonathan wrote one point on which I can comment:
I'm intrigued by the problem you describe. I agree that it
is neither exciting nor cutting-edge work, but I'm interested
to know why you need to do it at all.

Sometimes, the use of heavy hierarchy and awkward structure is imposed
on you, no matter how much of a better job you could have done
yourself. I've been through the original poster's experience many
times, too. One case in point was when using a machine-generated
interface module generated by one of the wizard tools from one of the
"big two" FPGA vendors. The module itself was built using structural
instantiation many levels deep, and the total code base from the
vendor tool was on the order of 100+ files. The module instantiated
at the lowest level one of the "hard IP" blocks in the FPGA.
Unfortunately, there was a problem with the hard IP which required a
two-way control/status bus to be "plumbed" down into the lowest level
block from the top. Also, there was a feature provided by the low-
level IP block (let's say, a CRC generator/checker module) that wasn't
made visible at the top level by the vendor's module, and we needed to
use it to save fabric gates. We then had to "plumb in" a two-way bus
all the way down through the hierarchy to access the ports on the low
level block. To make the matter worse, there were seven or eight
different flavours of interface module we needed, and instead of
providing a module with generics, the vendor insisted on producing
seven or eight completely independent, but subtly different, copies of
the entire module. Now we had 700+ files to manage, and for each
flavour, we had to do the same control and CRC bus plumbing. Needless
to say, there were many errors made and much cursing :) Testbenches,
version control, regression, dumb typing errors -- all a complete
nightmare.

Clearly there's a (good) argument that says the vendor could have been
more foresighted and structured the module differently. But I had
exactly the same thought as the OP -- to develop a refactoring tool
that could manage this type of thing well. Using something like a GUI
tool (like HDL Analyser, as mentioned) can assist, but then you get
into the whole other mess of having code that's been mangled, with
machine-generated random signal names all over the place, and all the
other stuff that goes along with these tools (lack of integration in
your existing version control system, binary proprietary file formats,
I can flame for hours on this ...! )

So enough with the war story. To the OP -- I feel your pain. To the
rest of the group -- this is an interesting thread - any other good
suggestions?

- Kenn
 
Mike Treseler

So enough with the war story. To the OP -- I feel your pain. To the
rest of the group -- this is an interesting thread - any other good
suggestions?

Since you asked, I would suggest using real synthesis code
rather than vendor netlists. I would rather spend
the same amount of "wartime" writing/simming
and sometimes buying clean, portable, testable vhdl sources.

This is a matter of style and mine is a minority opinion.
The classical hardware method is to wire together prefabricated
modules, turn on the power, and start probing.
This works, but it shifts most of the work to debugging.

I agree with Jonathan about procedural rather than
structural decomposition. The base function of
a crc check is just a shift with a twist, and
adding procedural layers is something the vhdl
language is good for. VHDL synthesis works better
than most designers believe.
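
(To make "a shift with a twist" concrete, a minimal sketch of a serial
CRC update written as a procedure; the CRC-8 polynomial
x^8 + x^2 + x + 1 and the names are illustrative:)

procedure crc8_step
  (din : in    std_ulogic;
   crc : inout std_ulogic_vector(7 downto 0)) is
  variable fb : std_ulogic;
begin
  fb  := crc(7) xor din;           -- the twist: feedback term
  crc := crc(6 downto 0) & '0';    -- the shift
  if fb = '1' then
    crc := crc xor "00000111";     -- x^8 + x^2 + x + 1
  end if;
end procedure crc8_step;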

-- Mike Treseler
 
kennheinrich

Since you asked, I would suggest using real synthesis code
rather than vendor netlists. I would rather spend
the same amount of "wartime" writing/simming
and sometimes buying clean, portable, testable vhdl sources.

This is a matter of style and mine is a minority opinion.
The classical hardware method is to wire together prefabricated
modules, turn on the power, and start probing.
This works, but it shifts most of the work to debugging.

I agree with Jonathan about procedural rather than
structural decomposition. The base function of
a crc check is just a shift with a twist, and
adding procedural layers is something the vhdl
language is good for. VHDL synthesis works better
than most designers believe.

-- Mike Treseler

Mike,

I agree with you and Jonathan to a large extent about ways of
decomposing the design. I also prefer to infer and describe designs
rather than instantiating literally. I've actually seen guys who
randomly mixed inferred flops (the usual if rising_edge(clk) style),
and component-instantiated D-flops, not for any performance or
esoteric reason, but because they were, not to put too fine a point on
it, marginally incompetent. I still shudder when I think of that code!

In this particular example, though, and in many like it, I was trying
to use VHDL to get access to pre-existing hard resources in the FPGA
(a hard-wired CRC module that would run at 500 MHz and use up no
fabric, and there were twenty of them on the chip!). While I would
have liked to use VHDL to describe the CRC function, which, as you point
out, would have been trivial to do, it would have cost me gates and
speed. The problem I was attacking was simply wiring and getting
access to ports on an inner hardware block.

- Kenn
 
KJ

Jonathan Bromley said:
If it's a more fundamental change, then it raises a much
bigger question about the design of component hierarchy
and why you got the interfaces between components wrong
in the first place (that's not a criticism, just a bald
statement of the problem).

Actually not. The software folks who practice agile development insist on
incremental development, frequent testable deliverables and designing in
absolutely no more than is required for the current deliverable. Given
that, the changing of interfaces and hierarchy and the re-factoring of code
into different functionality is part of the way of going about business.
Whether you subscribe to or accept that as a 'good' design approach or not
for developing hardware, it does appear to be an increasingly popular
software development method and these changes do not necessarily mean that
the interfaces were wrong in the first place.

Kevin Jennings
 
KJ

Sometimes, the use of heavy hierarchy and awkward structure is imposed
on you, no matter how much of a better job you could have done
yourself. I've been through the original poster's experience many
times, too. One case in point was when using a machine-generated
interface module generated by one of the wizard tools from one of the
"big two" FPGA vendors.

Choosing to use IP from a vendor is not the same as something being imposed
on you. There was a business decision made by someone in your company to
use the vendor's IP to implement the function. While you personally may
have had no input on the decision and therefore had the IP imposed on you,
the same can not be said of your company. Presumably part of that decision
was that it would cost less to use something that somebody already has than
to re-design it yourselves. To the extent that the IP block indeed does
exactly what you expected of it, the decision will turn out to be a sound
one...when it doesn't, well, one can cast blame widely but it usually
doesn't do much to solve the problem.

The module itself was built using structural
instantiation many levels deep, and the total code base from the
vendor tool was on the order of 100+ files. The module instantiated
at the lowest level one of the "hard IP" blocks in the FPGA.
Unfortunately, there was a problem with the hard IP which required a
two-way control/status bus to be "plumbed" down into the lowest level
block from the top.

You're the customer on the IP, generate a service request and put the heat
on the vendor to get the problem fixed. Then start working on the
workaround, because you have customers too who won't accept delays.

Also, there was a feature provided by the low-
level IP block (let's say, a CRC generator/checker module) that wasn't
made visible at the top level by the vendor's module, and we needed to
use it to save fabric gates. We then had to "plumb in" a two-way bus
all the way down through the hierarchy to access the ports on the low
level block.

Another service request, although this would be a feature suggestion...and
we all know where those end up in the priority pile. It's still worth
putting in if you think it could have some general utility. Although it
wouldn't help you in your particular case, if it gets implemented it would
benefit others in the future....just like you may have (or will) benefit
from others putting in feature suggestion on IP that you choose to use
further down the road....open-source projects are examples of this continued
refinement, which one can benefit from and (unfortunately, if you're working
under a deadline) may contribute to in various ways.

To make the matter worse, there were seven or eight
different flavours of interface module we needed, and instead of
providing a module with generics, the vendor insisted on producing
seven or eight completely independent, but subtly different, copies of
the entire module. Now we had 700+ files to manage, and for each
flavour, we had to do the same control and CRC bus plumbing. Needless
to say, there were many errors made and much cursing :) Testbenches,
version control, regression, dumb typing errors -- all a complete
nightmare.

Clearly there's a (good) argument that says the vendor could have been
more foresighted and structured the module differently.

Actually the vendor is trying to protect the IP from being generic in any
sense because they are trying to also sell you silicon. Trying to make the
code somewhat opaque and non-generic hinders at least some from reverse
engineering it and creating something that gets used on competitor's
silicon. But remember, you don't have to choose to use the IP, you can
engineer the solution yourself...just like the vendor did. They spent their
money engineering a solution that might be acceptable to a majority of their
users; it's up to the market to sort out the winners and losers.

But I had
exactly the same thought as the OP -- to develop a refactoring tool
that could manage this type of thing well. Using something like a GUI
tool (like HDL Analyser, as mentioned) can assist, but then you get
into the whole other mess of having code that's been mangled, with
machine-generated random signal names all over the place, and all the
other stuff that goes along with these tools (lack of integration in
your existing version control system, binary proprietary file formats,
I can flame for hours on this ...! )

So enough with the war story. To the OP -- I feel your pain. To the
rest of the group -- this is an interesting thread - any other good
suggestions?

It is an interesting one, but like the others I don't know of any tools
either. Perhaps perusing the agile software developer communities might
provide something close.

Kevin Jennings
 
Mike Treseler

Jonathan said:
Why do we even bother to do hierarchical partitioning in
the first place?

Exactly. A preference for describing lots
of boxes and wires is a hindrance to agile development
of vhdl synthesis code because the language
provides little assistance in this area.

And why do I need an interface if I'm writing
all the code in the design entity?

For a three week development "sprint"
I focus on the top entity IO
and procedures to update the register
variables. When it sims, and makes Fmax,
I'm ready for the next sprint.

Certainly interfaces are required when
multiple designers work on the
same project, but there is no
upside to me in coding and
instancing an entity
just to update a register.

-- Mike Treseler
 
KJ

Jonathan Bromley said:
Interesting, but don't necessarily expect me to agree :)

Didn't expect you to, nor was I implying that I agree with developing in
that fashion, just that there are things to be learned from other
disciplines that can be applicable to what we do...and that reworking an
interface or refactoring is not necessarily a symptom of an initially
'wrong' interface/partitioning.
Why do we even bother to do hierarchical partitioning in
the first place? Because we wish to localise our design
activity sufficiently to be able to reason about it reliably.

Agreed...but also I would say to create reusable components so that on some
future project one can reuse instead of redo.

"Agile" refactoring of a design, as you describe, inevitably
widens the scope of that reasoning, making me question
whether the original partitioning was useful; if we can
think sufficiently clearly (perhaps with help from tools)
about the whole design to be able to redesign its internal
interfaces at the drop of a hat, then we didn't need it
to be partitioned.

I'm not quite sure what you're getting at. As I understand it, agile
development is still disciplined and correct design partitioning for the
deliverable at hand is still important, but it accepts that the partitioning
might change as new deliverables are being worked on in the future...it just
doesn't bog down current development with the needs for things that are down
the road. They would probably see it more as a continuous refinement
process that improves what they have, combined with a willingness to toss out and redo
the things that (now in hindsight) may have been designed badly, or perhaps
were designed well for their purpose, but that purpose is no longer useful
and a more generalized design would be better.

How the interfaces and partitioning evolve over time as 'the project' moves
along and whether the original interfaces and partitioning were in fact
'good' or not, will depend almost completely on the skill of the
designer...I've yet to come across a case where that was not true. Skilled
people (working in their area of skill) generally produce good designs; good
designs can usually be readily modified to accommodate new situations, but
that in no way necessarily reflects badly on the original design's
'constraints' for not foreseeing that new situation.

My experience (and note, only experience - no theory
at work here at all) is that if I need to change
an interface between components, I almost certainly
have the partitioning wrong; by contrast, if I got the
partitioning (reasonably) right, then I likely was able
to create a clean, reliable design for the interfaces
between blocks at the outset.

My experience is a mix, sometimes it is as you stated, but often it is
because of refinements and generalizations that occur later when you reuse
designs in new applications (sometimes no further than in the testbench for
the design itself). The refinements and generalizations of the reusable
component create a better, more useful component that can be used in more
situations. Use of reusable code is the main way to get design productivity
advantage over the competition which (in theory) should lead to shorter
development times.

I don't consider anyone to be so experienced and wise that they could (or
should) develop the most generally useful and customizable widget at the
first crack. Darn near everything can be improved by application of more
thinking, but applying that thinking all upfront can result in late designs,
missed product windows, etc.

Creating the most generally useful widget right out of the chute is probably
not cost appropriate for most projects since it burdens that project for the
cost and time of development of something that is only used in a (probably)
small subset of the possible cases that can be handled. However, *usage* of
that generally useful widget many times IS appropriate for many
projects...trouble starts when that generally useful widget doesn't quite
work right, but that's a totally separate issue.

On the other hand, we still have quite a lot to learn
from the Extreme Programming "test first, implement
later" approach. And there is a long history of the
hardware design community lagging a couple of decades
behind software design in its methodologies.

Hey, we're not THAT backwards out in hardware land....after all we know all
about concurrent processes executing in parallel. The software folks are
still grappling with how to deal with more than one processor.

Always a pleasure bantering with you.

Kevin Jennings
 
KJ

Mike Treseler said:
Exactly. A preference for describing lots
of boxes and wires is a hindrance to agile development

I disagree in that it can (should) encourage design reuse.

of vhdl synthesis code because the language
provides little assistance in this area.

Finally...something that emacs doesn't handle ;)

And why do I need an interface if I'm writing
all the code in the design entity?

So that you can reuse that widget in the future (or use something that you
wrote in the past), knowing that when simply connected it will work properly
and not spend any time redoing what you've already written and tested; that
would be one good reason.
For a three week development "sprint"
I focus on the top entity IO
and procedures to update the register
variables. When it sims, and makes Fmax,
I'm ready for the next sprint.

Certainly interfaces are required when
multiple designers work on the
same project

That would be another good reason.

...but there is no
upside to me in coding and
instancing an entity
just to update a register.

Presumably one would instance something that does something functionally
useful (maybe a PCI interface, a memory controller) that does not border on
the near trivial (like an internal memory that doesn't get synthesized to
flops).

When the level of functionality delivered by the entity is low, it had best
not come along with an interface that is difficult to understand since the
learning curve when using the entity must be considered as well. The bang
for the buck ratio here is how much function you get per unit of time needed
to understand how to use it (versus time needed to design and test it
yourself).

Kevin Jennings
 
Andy

I agree with Jonathan about procedural rather than
structural decomposition. The base function of
a crc check is just a shift with a twist, and
adding procedural layers is something the vhdl
language is good for. VHDL synthesis works better
than most designers believe.

-- Mike Treseler

Am I missing something? Procedural interfaces still require data to be
passed in and out, but if the needed data was not passed, the same
problem exists. Unless you go to global signals, where there is no
hierarchy at all, or you use one huge file to take advantage of local
scope (always defining procedures inside the scope of the procedure in
which they are called), procedures vs entities don't appear to make a
difference. Both methods (globals and entirely local scope) have been
discredited in SW and HW design for a long time. Actually, one
language (Ada, from which much of VHDL was borrowed) allows a
procedure to be locally declared and externally implemented in a
separate file, but I don't know if that eliminates local scope
advantages.

The ability to "containerize" the interface (procedure or entity),
like running virtual conduit through a building's walls, allows one to
add/subtract interface elements without having to tear into the
intervening walls. And VHDL "conduits" (record types), unlike real
ones, never "fill up". They just don't currently have a flexible means
of defining directionality (port modes).

Andy
 
Mike Treseler

Andy said:
Am I missing something? Procedural interfaces still require data to be
passed in and out, but if the needed data was not passed, the same
problem exists.

I should have explained that I write single process
entities so that my register update procedures
have direct access to all the register variables
in the process. For this reason I use
either a simple interface list like this,

procedure retime
  (arg_in : in    std_ulogic;  -- input value
   update : inout retime_t     -- 3 bit shifter
  ) is
begin
  update.f3 := update.f2;      -- f2 [DQ] f3
  update.f2 := update.f1;      -- f1 [DQ] f2
  update.f1 := arg_in;         -- in [DQ] f1
end procedure retime;

or sometimes no interface at all, like this:

procedure inc_tic_count is
begin
  RxBitSampleCount_v := RxBitSampleCount_v + 1;
end procedure inc_tic_count;

just to unify repeated statements.

I do use port maps at the top level,
but my bias is for large entities with
small interfaces. Some generic interface
entities are reusable by others, but I find that
most reuse by me involves scavenging bits and pieces
from old designs. Packaging procedures like
retime above adds some formality to this process.

-- Mike Treseler
 
Andy

Am I missing something? Procedural interfaces still require data to be
passed in and out, but if the needed data was not passed, the same
problem exists.

Yes, but you can add new procedures without breaking code that
uses the old set of procedures. That ain't true for ports. [*]
Procedures also compose better than ports: if I have a procedure
that *nearly* does what I want, I can usually wrap it in another
procedure that adjusts things to make it do *exactly* what I want;
and again the old interface isn't broken, and the existing users
of my old interface are not disrupted by the new extensions.

I know it's never really quite that simple, but...


I guess I'm still not following. Any procedure that can be wrapped by
another procedure to "fix things" can also be represented by an entity
wrapped by another entity/architecture to do the same thing. Granted,
to do it in one source file, you'd have to have more than one entity/
architecture in the file, but that is a matter of form, not imposed by
the language or tools. If the ports have to be changed, they have to
be changed either way (entity or procedure). The procedure allows
local inheritance, which can avoid ports in the first place, but that
also obviates the possibility for reuse of that procedure, since it is
out of scope everywhere else, and always operates on the same local
objects. Without reuse, the need to keep it unmodified for fear of
breaking another instance (call) of it is also eliminated.

The original post exhibited a need for being able to easily plumb new
data through multiple levels of hierarchy in a design. Short of making
a whole project (ASIC or FPGA) one big entity/architecture with nested
local scopes for various procedures (and one huge source file), I
don't see how using procedures solves his problem. In fact, block
statements could be used to do the same thing without procedures, but
they will all necessarily be in the same source file.
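
A minimal sketch of that block-statement alternative, assuming a, b, c
and y are declared in the enclosing architecture:

stage1 : block
  signal tap : std_logic;          -- visible only inside this block
begin
  tap <= a and b;
  y   <= tap or c;
end block stage1;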

Using procedures on the scale being proposed (eliminating all but a
single top level entity/architecture) is also unworkable due to the
current synthesis tools' inability to allow a subprogram to span time
(include a wait statement). So every procedure must be in the same
process. This is likely to complicate managing the order of
operations, which with variables, implies register usage.

I actually like the use of procedures on a small scale (i.e. within a
modestly sized process for clarification/separation of distinct
functionality). But there is a practical limit to the scope of their
application in synchronous, synthesizable code.

Andy
 
kennheinrich

Yes, but you can add new procedures without breaking code that
uses the old set of procedures. That ain't true for ports. [*]
Procedures also compose better than ports: if I have a procedure
that *nearly* does what I want, I can usually wrap it in another
procedure that adjusts things to make it do *exactly* what I want;
and again the old interface isn't broken, and the existing users
of my old interface are not disrupted by the new extensions.
I know it's never really quite that simple, but...

I guess I'm still not following. Any procedure that can be wrapped by
another procedure to "fix things" can also be represented by an entity
wrapped by another entity/architecture to do the same thing. Granted,
to do it in one source file, you'd have to have more than one entity/
architecture in the file, but that is a matter of form, not imposed by
the language or tools. If the ports have to be changed, they have to
be changed either way (entity or procedure). The procedure allows
local inheritance, which can avoid ports in the first place, but that
also obviates the possibility for reuse of that procedure, since it is
out of scope everywhere else, and always operates on the same local
objects. Without reuse, the need to keep it unmodified for fear of
breaking another instance (call) of it is also eliminated.

The original post exhibited a need for being able to easily plumb new
data through multiple levels of hierarchy in a design. Short of making
a whole project (ASIC or FPGA) one big entity/architecture with nested
local scopes for various procedures (and one huge source file), I
don't see how using procedures solves his problem. In fact, block
statements could be used to do the same thing without procedures, but
they will all necessarily be in the same source file.

Using procedures on the scale being proposed (eliminating all but a
single top level entity/architecture) is also unworkable due to the
current synthesis tools' inability to allow a subprogram to span time
(include a wait statement). So every procedure must be in the same
process. This is likely to complicate managing the order of
operations, which with variables, implies register usage.

I actually like the use of procedures on a small scale (i.e. within a
modestly sized process for clarification/separation of distinct
functionality). But there is a practical limit to the scope of their
application in synchronous, synthesizable code.

Andy


It's really interesting to hear all of the intelligent comments in
this thread, and I can agree to a certain extent with most of the
viewpoints. I think the root issue being debated here is how best to
implement a design - using a bigger emphasis on "hardware-ish pure
structure" or more on the "software-ish procedural approach". They are
certainly closely related, but are definitely not equivalent. The "old-
timers" who grew up with boards filled with 74LS logic and 22V10's
will likely feel more comfortable with the hardware-ish approach -
it's just an extension of hierarchical schematics. The more
intellectually adventurous old-timers will then discover functions and
procedures and use these to neaten up their code. Those with a strong
software background will often see everything as a procedure and not
intuitively see how the entity/architecture/component structuring
would be the right approach in part of the design.

There are some places where E/A structuring is quite simply the right
thing. For example, an FPGA which requires N identical (or nearly
identical, modulo some generics) modules - DRAM controllers, gigabit
transceiver modules, register files, what have you. If an FPGA vendor
tells me they have a big honking hardware block that does function X
for me, for free, then the right thing is to plonk it down as a
component instance. No matter how elegant the academic theory behind
it might be, I'll never try to infer a Sonet controller similarly to
the way I infer a block RAM. (Even a dual-port RAM doesn't infer
right half the time - anyone who's done it will probably agree.) No
amount of procedural finessing can work around this.
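
(For reference, the usual simple-dual-port inference template looks
something like the sketch below, with illustrative names; whether a given
tool actually maps it to block RAM is exactly the gamble being described.)

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity dp_ram is
  port (clk   : in  std_logic;
        we    : in  std_logic;
        waddr : in  unsigned(7 downto 0);
        raddr : in  unsigned(7 downto 0);
        din   : in  std_logic_vector(7 downto 0);
        dout  : out std_logic_vector(7 downto 0));
end entity dp_ram;

architecture rtl of dp_ram is
  type ram_t is array (0 to 255) of std_logic_vector(7 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(waddr)) <= din;   -- write port
      end if;
      dout <= ram(to_integer(raddr));    -- registered read port
    end if;
  end process;
end architecture rtl;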

But there are other places where the old-time hardware guy is missing
the boat by slavishly implementing every little piece of his design
using the explicit combinatorial logic and flops he sees in his head
as he considers what the schematic for his circuit ought to look
like. The CRC is a nice example - with the right procedures defined,
the circuit using the procedures is short, concise, understandable,
and easy to modify. The manually written mess of gates, or even the
inline instantiation of a separate CRC-32 engine can be much longer,
less obvious, and interrupt the "flow" of your description. These can
become critical things as the complexity of your design grows.

Both styles need to be written with the right amount of room to grow.
This is basic good design practice, and it's as much an art as a
science. Room to grow, and good engineering, encompasses using the
right parameterization (generics), the right default values, the right
use of procedural style and instantiation at the right layers, and the
right interfaces, among all of the other choices related to the
implementation of the design itself.

I've seen guys who implement every interface as a std_logic_vector,
that just grows, and grows, and grows, as needed. I've also seen their
code: it's an endless sea of meaningless code like

if (interface_a(16) and not interface_b(14)) then  -- this means the event triggered
  interface_c(14) := '1';                          -- so clear the reset

The use of records as a conduit (I like that analogy) is one good
technique to manage the OP's original problem. It has other VHDL-
imposed drawbacks though - there are all sorts of rules about
partially associated records in interfaces, resolved vs non-resolved
record types, multiple drivers, interface element modes, and so on.
It's like a poor-man's version of type polymorphism (not the same as
the parametric polymorphism when we talk about overloading
operators). It's type polymorphism because if your conduit just goes
between two levels, you can make it carry the right type by editing
your record definition in the package. It's poor-man's because there is
no way of automatically inferring the type of the pipe -- if an entity
does nothing more than carry a unidirectional signal from the outer
ports to an inner component, why can't I just say "connect the two",
rather than spelling it all out, manually, over and over. This feature
(type polymorphism) is a nice feature in languages like Standard ML and
Haskell, which use well known and well understood mechanisms to make
this happen nicely. These solutions are also guaranteed to be type-
safe, which is the reason we have so much explicit type specification
in VHDL to begin with! Unfortunately, I can see this would be
difficult to implement with all of the extra quirks of the language
(port modes, multiple driver rules, resolution, etc, not to mention
possible interactions with conventional VHDL name resolution).

What does this all mean? VHDL is quirky, obviously "designed by
committee", and large. But it's also flexible, and you can generally
do most of what you need, once you get your head around it. At the end
of the day it's just a tool, and the tool is nothing better than the
person wielding it.

(Going off to the woodpile to get my axe... now THERE's a tool :)

- Kenn
 
Mike Treseler

Andy said:
local inheritance, which can avoid ports in the first place, but that
also obviates the possibility for reuse of that procedure, since it is
out of scope everywhere else, and always operates on the same local
objects.

I use procedures for clarity and speed of coding.
Reuse is sometimes a nice side effect.

The original post exhibited a need for being able to easily plumb new
data through multiple levels of hierarchy in a design. Short of making
a whole project (ASIC or FPGA) one big entity/architecture with nested
local scopes for various procedures (and one huge source file), I
don't see how using procedures solves his problem.

I think both Jonathan and I punted that part of the problem.
Your suggestion of structured ports at least addresses Chris's problem.

So every procedure must be in the same
process. This is likely to complicate managing the order of
operations, which with variables, implies register usage.

Yes, I use one process per entity.
Yes, the variables I declare become registers.
The order of operations is a single thread per tick.
Or do you mean the order of piped registers?

I actually like the use of procedures on a small scale (i.e. within a
modestly sized process for clarification/separation of distinct
functionality). But there is a practical limit to the scope of their
application in synchronous, synthesizable code.

There is a practical limit to every style.

-- Mike Treseler
 
Mike Treseler

No matter how elegant the academic theory behind
it might be, I'll never try to infer a Sonet controller similarly to
the way I infer a block RAM.

Who has the free Sonet controllers?

I've seen guys who implement every interface as a std_logic_vector,
that just grows, and grows, and grows, as needed. I've also seen their
code: it's an endless sea of meaningless code like
if (interface_a(16) and not interface_b(14)) then  -- this means the event triggered
  interface_c(14) := '1';                          -- so clear the reset

I think I know those guys :)

Good example of "untouchable" code.
Use it while it works, but start over when it doesn't.

-- Mike Treseler
 
Andy

I use procedures for clarity and speed of coding.
Reuse is sometimes a nice side effect.


I think both Jonathan and I punted that part of the problem.
Your suggestion of structured ports at least addresses Chris's problem.


Yes, I use one process per entity.
Yes, the variables I declare become registers.
The order of operations is a single thread per tick.
Or do you mean the order of piped registers?


There is a practical limit to every style.

-- Mike Treseler

Sorry, I was too busy rushing the punter to see the ball had already
been punted! I could get a penalty for that, a personal foul no
less...

I understand your use of procedures and generally accept the idea. I
thought you and Jonathan were proposing expanding that to cover the
hierarchy of the whole design.

Andy
 
Mike Treseler

Andy said:
Sorry, I was too busy rushing the punter to see the ball had already
been punted! I could get a penalty for that, a personal foul no
less...

I should have faked a hand-off ...

I thought you and Jonathan were proposing expanding that to cover the
hierarchy of the whole design.

A good PhD topic for someone.
Some procedural description/synthesis for instances seems inevitable.
But it's a hard problem with no big market evident.
Maybe the Gates foundation would sponsor the work :)

MyHDL sort of does this as a verilog generator.
An RTL-viewer does the *inverse* algorithm.
And emacs vhdl-compose-wire-components covers obvious wires.

We seem to go off on style about once a year,
so I guess it's back to the salt mines until next time...

-- Mike Treseler
 
Andy

If you mean the "separate" mechanism, that is defined to maintain the
"local scope" at the declaration, wherever the "separate" implementation
happens to be.


One record per mode works, but it's untidy.

- Brian

Yes, the "separate" mechanism is what I was thinking. Thanks for the
memory jog.

Andy
 
Wolfgang Grafen

KJ said:
Actually the vendor is trying to protect the IP from being generic in any
sense because they are trying to also sell you silicon. Trying to make the
code somewhat opaque and non-generic hinders at least some from reverse
engineering it and creating something that gets used on competitor's
silicon. But remember, you don't have to choose to use the IP, you can
engineer the solution yourself...just like the vendor did. They spent their
money engineering a solution that might be acceptable to a majority of their
users, it's up to the market to sort out the winners and losers.

The naive thought is that it is easy to write a generic component and
use it for any application. If you haven't done it yet, please try it
once. Remember, the requirement on an IP block is that it will work, and
that the customer does not have to look into its internals to get it to
work.

Write a component of modest complexity and distribute it to several
designers, and you will find out that what you did is not intuitive at
all. Come back to your code a month or so later and you will think the
same. Every tiny thing should be documented in detail, and heavy
documentation is frustrating in itself.

Now, this is not all. You will find out that your customers use your
code in ways you never thought it could be used, and, of course, it
does not work! Also, you will have stopped somewhere short of testing
the 30 possible alternatives and the trillion corner cases, and whatever
you haven't tested will not work.

I am from an ASIC design background, and I assume delivering IP on a
commercial basis has the same demands: it should be error free on
delivery. Verification is the hard part; often you spend more than 80%
of your time there. I can imagine developing good documentation is also
not easy, so code development is a really tiny part of the complete package.

A generic IP block is likely to cost a fortune, which the customer is not
willing to pay. The only reasonable way is to provide one or two
flavours which work for sure, and the customer has to adapt this module
to his interfaces.

I am not developing IP, but from all I see, this business case is not a
real money maker. We have used IP that should have run out of the box but
often didn't; after a quarter of a year getting it to work, we wondered
whether we would have done better designing it ourselves... The IP
vendor gives support for free in such cases, bad luck...

Best regards

Wolfgang
 
