Have you adapted any software methodologies into your hardware work?

Vinh Pham

I've always lamented the fact that there has been more literature on the
software design process than on the hardware one. I imagine it's because
there are far more SW people than HW people, in a total population sense,
and because HW splits into small, very specialized areas of work.

Of course, there's nothing preventing us from borrowing ideas from the SW
community and adapting them to HW. Have any of you folks tried?

Some things are easy to borrow, like Extreme Programming's emphasis on
writing test code before source code, automating the testing process, and
testing daily; refactoring code, using tests to make sure you didn't break
anything along the way; the Design Patterns idea of architecture reuse
rather than code reuse; etc.
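
To make the test-first idea concrete, here's a minimal self-checking
testbench sketch (the adder entity and all names are made up for
illustration). A bench like this fails loudly, so a batch script can run
it unattended every night:

  -- Minimal self-checking testbench sketch (invented names).
  library ieee;
  use ieee.std_logic_1164.all;

  entity adder8_tb is
  end entity adder8_tb;

  architecture test of adder8_tb is
    signal a, b, sum : integer := 0;
  begin
    -- The real DUT would be instantiated here, e.g.:
    --   dut: entity work.adder8 port map (a => a, b => b, sum => sum);
    sum <= a + b;  -- stand-in so the sketch runs on its own

    stimulus: process
    begin
      a <= 3;  b <= 4;
      wait for 10 ns;
      assert sum = 7
        report "adder8: 3 + 4 /= 7" severity failure;
      report "adder8_tb: all tests passed" severity note;
      wait;  -- stop the bench
    end process stimulus;
  end architecture test;

The point is that the expected behaviour gets written down as assertions
before (or alongside) the design itself, and a nightly run either prints
"all tests passed" or dies with a failure your script can catch.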

But SW design and HW design are different beasts. Application software
tends to have "unlimited" resources (RAM, HD space, processing power vs.
processing needs), so SW folks have the luxury to do things that we can't.

Anyways, I'm interested in hearing any personal stories or thoughts on the
subject. I hope people feel it's something worth talking about.


Regards,
Vinh

P.S. Hmm, I wonder if processes and methodologies, in the HW world, are
considered an unofficial "trade secret." Sort of how cooks don't share the
details of their recipes. I still think it's mostly a population, critical
mass thing.
 
Vinh Pham

I guess we have more in common with the embedded software world, when it
comes to adapting methodologies designed for desktop applications. The main
similarities I see are the critical nature of time and concurrent processing.

Here's an interesting read regarding Extreme Programming and embedded
software.


Regards,
Vinh
 
Kai Harrekilde-Petersen

Vinh Pham said:
I've always lamented the fact that there has been more literature on the
software design process than on the hardware one. I imagine it's because
there are far more SW people than HW people, in a total population sense,
and because HW splits into small, very specialized areas of work.

Of course, there's nothing preventing us from borrowing ideas from the SW
community and adapting them to HW. Have any of you folks tried?

Sure.

We're doing things like testbench reuse, IP block reuse (interface
blocks like 10/100/1000 MAC, 10G MAC, SPI4.2 interfaces, CPU interface,
internal register access system, etc.) as well as chip-level reuse (a
family of 8/12/16/24 port Ethernet switches based on the same
architecture, and to a high degree the same code), dai^H^H^Hnightly
regression testing, random testing, etc.
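
Random testing in particular is cheap to set up. A random-stimulus
process can be tiny; here's a sketch (seeds, ranges, and all names are
arbitrary choices for illustration):

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.math_real.all;

  entity rand_stim_tb is
  end entity rand_stim_tb;

  architecture test of rand_stim_tb is
    signal clk  : std_logic := '0';
    signal data : integer range 0 to 255 := 0;
  begin
    clk <= not clk after 5 ns;  -- free-running 100 MHz test clock

    random_stim: process
      variable seed1 : positive := 42;    -- fixed seeds make a
      variable seed2 : positive := 4711;  -- failing run reproducible
      variable r     : real;
    begin
      for i in 1 to 1000 loop
        uniform(seed1, seed2, r);      -- r is in (0.0, 1.0)
        data <= integer(r * 255.0);    -- scale to the 0..255 range
        wait until rising_edge(clk);
      end loop;
      report "random stimulus done" severity note;
      wait;  -- note: the free-running clock keeps the simulator alive
    end process random_stim;
  end architecture test;

In a real bench, data would feed the DUT and a checker would compare its
outputs against a reference model.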

As I see it, there are places where SW design is light-years ahead of
HW design, but also places where HW design is ahead of SW design.

As an ASIC designer, I live under a different set of constraints than
most SW designers - things have to be correct every clock tick. Even
a failure rate of 1E-9 is unacceptable, as the system would fail every
few seconds. I've seen fine SW designers write the most beautiful -
behavioural - VHDL code, which had to go into an ASIC. Most of them
had to beat their heads against Synopsys DC for 6 months until they
learned to think in HW when writing code.

As for resources, much of the testing that I do is limited by the
number of (VHDL) licenses that we can afford. SW people need not use
expensive licenses just to run and test their code, so they have more
resources to simply buy more/faster machines. Doing at-speed testing
means that I have to convince management to shell out several $100K,
wait several months to get a chip back, and _then_ spend some more
months to test the thing. So it's a pretty expensive "compile" button.

I believe that the above is the crux of the difference between HW and SW
design. For SW, doing a product release is less risky in many ways.
This allows SW to design (and ship) much more complex products than HW
design allows, since the cost of testing an update or errata is not
associated with a 6-9 month wait/test period, and does not require a
$100K+ investment just to see if the designer's hunch was right.

Regards,


Kai
 
Vinh Pham

Kai Harrekilde-Petersen said:
We're doing things like testbench reuse, IP block reuse (interface
blocks like 10/100/1000 MAC, 10G MAC, SPI4.2 interfaces, CPU interface,

Ah yeah, interface blocks do make good candidates for reuse.

Kai Harrekilde-Petersen said:
architecture, and to a high degree the same code), dai^H^H^Hnightly
regression testing, random testing, etc.

That's good to hear. Definitely a necessity in the ASIC world. My group
does FPGA design and we're spoiled rotten because of it. Short turn-around
times are nice, but they have definitely encouraged sloppy habits.

Even though we don't have expensive mask costs, and our turn-around time is
in hours, I think nightly regression testing would still be valuable in
reducing time spent debugging. It's just harder to convince people because
the costs of mistakes are hidden and spread out.
Kai Harrekilde-Petersen said:
most SW designers - things have to be correct every clock tick. Even
a failure rate of 1E-9 is unacceptable, as the system would fail every
few seconds.

And multiply that rate by the number of products you plan to deploy in the
field, which could be in the thousands if your product is a chip. At 100
MHz, 1E-9 per tick is already a failure every 10 seconds in one unit; with
1,000 units in the field, something fails every 10 ms somewhere.
Kai Harrekilde-Petersen said:
I've seen fine SW designers write the most beautiful -
behavioural - VHDL code, which had to go into an ASIC. Most of them
had to beat their heads against Synopsys DC for 6 months until they
learned to think in HW when writing code.

Heh, good point. It's a hardware description language first, behavioural
language second. It's not enough to write code that behaves well; it also
has to synthesize well and meet critical timing constraints.
Unfortunately, that can make the code look really ugly and harder to read,
which is a shame.

My boss once hired a mathematician to do FPGA design. That was the most
C-looking VHDL I had ever seen.
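
To make the gap concrete, here's a sketch (the entity and all names are
invented). It simulates perfectly well; it just doesn't map to reasonable
hardware:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity scaler is
    port (clk  : in  std_logic;
          a, b : in  unsigned(15 downto 0);
          q    : out unsigned(15 downto 0));
  end entity scaler;

  -- Behaves well: one readable line, simulates fine.
  -- Synthesizes badly: a 16-bit combinational divider evaluated in
  -- a single clock cycle will likely blow the timing budget, if the
  -- tool accepts the '/' operator at all.
  architecture behavioural of scaler is
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        q <= a / b;
      end if;
    end process;
  end architecture behavioural;

The HW-minded fix is to spend 16 clocks in a shift-and-subtract FSM,
trading latency for a datapath that meets timing - which is exactly the
kind of restructuring that makes the code uglier to read.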
Kai Harrekilde-Petersen said:
As for resources, much of the testing that I do is limited by the
number of (VHDL) licenses that we can afford. SW people need not use
expensive licenses just to run and test their code, so they have more
resources to simply buy more/faster machines. Doing at-speed testing
means that I have to convince management to shell out several $100K,
wait several months to get a chip back, and _then_ spend some more
months to test the thing. So it's a pretty expensive "compile" button.

Definitely an expensive "compile" button, especially for those doing 90-nm
work. Yeah, I guess that's an important distinction. Our development
cycles are slower and more expensive, which requires more up-front and
careful design. Even testing can be more expensive, compared to software,
and I imagine simulating hardware is also slower than running the
equivalent piece of software.
Kai Harrekilde-Petersen said:
associated with a 6-9 month wait/test period, and does not require a
$100K+ investment just to see if the designer's hunch was right.

Imagine those poor sods working on 90-nm designs. I bet their companies
provide free Pepto-Bismol along with the coffee.

Cool, thanks for sharing your thoughts Kai.
 
Kai Harrekilde-Petersen

Vinh Pham said:
Ah yeah, interface blocks do make good candidates for reuse.


That's good to hear. Definitely a necessity in the ASIC world. My group
does FPGA design and we're spoiled rotten because of it. Short turn-around
times are nice, but they have definitely encouraged sloppy habits.

Rhetorical question: how are you going to do a directed test in hardware
to check that, say, a CPU interface FPGA works correctly in a specific
sequence, which includes a DRAM refresh cycle? Hardware is fine for
statistical and random testing, but when it comes to directed testing,
testbenches win hands down. Don't get me wrong: I'm advocating the
use of BOTH directed *and* random testing here.
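
For illustration, a directed sequence like that is straightforward to
pin down in a bench. A sketch (all names are invented, and a trivial
stub stands in for the real CPU interface):

  library ieee;
  use ieee.std_logic_1164.all;

  entity cpu_if_directed_tb is
  end entity cpu_if_directed_tb;

  architecture test of cpu_if_directed_tb is
    signal clk         : std_logic := '0';
    signal refresh_req : std_logic := '0';
    signal cpu_rd      : std_logic := '0';
    signal rd_ack      : std_logic := '0';
  begin
    clk <= not clk after 5 ns;

    -- Stand-in for the DUT: acks a read one cycle later unless a
    -- refresh is pending (the real FPGA would be instantiated here).
    dut_stub: process (clk)
    begin
      if rising_edge(clk) then
        rd_ack <= cpu_rd and not refresh_req;
      end if;
    end process dut_stub;

    directed: process
    begin
      -- Step 1: start a refresh cycle...
      wait until rising_edge(clk);
      refresh_req <= '1';
      -- Step 2: ...and issue a CPU read right on top of it.
      wait until rising_edge(clk);
      cpu_rd <= '1';
      wait until rising_edge(clk);
      assert rd_ack = '0'
        report "read acked during refresh" severity failure;
      -- Step 3: refresh done; the read must now complete.
      refresh_req <= '0';
      wait until rising_edge(clk);
      wait until rising_edge(clk);
      assert rd_ack = '1'
        report "read lost after refresh" severity failure;
      report "directed refresh/read test passed" severity note;
      wait;
    end process directed;
  end architecture test;

Hitting that exact read-during-refresh collision on real hardware would
take either luck or a logic analyser trigger you may not have.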
Vinh Pham said:
Even though we don't have expensive mask costs, and our turn-around time is
in hours, I think nightly regression testing would still be valuable in
reducing time spent debugging. It's just harder to convince people because
the costs of mistakes are hidden and spread out.

I do not believe that HW testing and simulation exclude each other;
I've been debugging a fairness problem in an arbiter lately, which
was very quick to test/provoke in HW, but virtually impossible to
root-cause there. In this case, using a simulation to probe the internal
state at various points of the arbiter was essential to pinpoint why
the mechanisms that we had put in didn't kick in as expected.

I did run some 10 test runs in the lab to assess which parameters did
or did not affect the problem. Then, it was off to (more)
simulations, and from the synthesis of those two things a fundamental
understanding of the problem arose.

Regards,


Kai
 
Vinh Pham

Kai Harrekilde-Petersen said:
Rhetorical question: how are you going to do a directed test in hardware
to check that, say, a CPU interface FPGA works correctly in a specific
sequence, which includes a DRAM refresh cycle? Hardware is fine for
statistical and random testing, but when it comes to directed testing,
testbenches win hands down. Don't get me wrong: I'm advocating the
use of BOTH directed *and* random testing here.

I agree, the power of directed testing is that you can take your innate
knowledge of your design and create cases that you know will stress it.
Definitely valuable. And random testing is good for creating situations
that you can't imagine.

We definitely do directed testing, but we don't do regression testing,
since FPGAs hide the obvious costs of mistakes that don't get caught
until hardware testing.
Kai Harrekilde-Petersen said:
In this case, using a simulation to probe the internal
state at various points of the arbiter was essential to pinpoint why
the mechanisms that we had put in didn't kick in as expected.

I agree, the great thing about simulation is that it gives you visibility
into every inch of your design. As long as the simulation isn't too slow,
or too hard to set up, it's far easier than testing the hardware.
Kai Harrekilde-Petersen said:
I did run some 10 test runs in the lab to assess which parameters did
or did not affect the problem. Then, it was off to (more)
simulations, and from the synthesis of those two things a fundamental
understanding of the problem arose.

Yeah, the more tools you have to attack a problem, the better. Each has
its strengths, which complement the others.
 
