Phlip said:
My admonition answers the common complaint, "I don't have time to write
tests; I'm too busy debugging".
Never heard that one myself. We routinely work out detailed end-to-end
test plans as well as unit tests, but as much as possible on the real
hardware. Automated where possible, certainly. Often, detailed testing
on an emulator is a waste of time because the emulator isn't high
fidelity enough.
You have time to write tests. Writing an emulator gives you a framework on
which to hang all your discoveries about the real environment's bugs, and
this in turn frees up your schedule by automating as much low-value labor
as possible.
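Here's a minimal sketch of the idea in C. Everything in it is invented -
the emu_uart struct, and a "TX-busy bit lingers for an extra poll" quirk
standing in for whatever your real part does:

    #include <assert.h>
    #include <stdint.h>

    #define UART_STATUS_TX_BUSY 0x01u

    struct emu_uart {
        uint8_t status;
        int     busy_reads_left;  /* quirk: busy bit lingers an extra poll */
    };

    static void emu_uart_write_tx(struct emu_uart *u, uint8_t byte)
    {
        (void)byte;
        u->status |= UART_STATUS_TX_BUSY;
        u->busy_reads_left = 2;   /* the invented quirk: two polls, not one */
    }

    static uint8_t emu_uart_read_status(struct emu_uart *u)
    {
        uint8_t s = u->status;
        if (u->busy_reads_left > 0 && --u->busy_reads_left == 0)
            u->status &= (uint8_t)~UART_STATUS_TX_BUSY;
        return s;
    }

    /* The test pins the discovery down, so code that assumes one poll
       raises a red flag here before it raises one on the hardware. */
    int main(void)
    {
        struct emu_uart u = {0, 0};
        emu_uart_write_tx(&u, 'x');
        assert(emu_uart_read_status(&u) & UART_STATUS_TX_BUSY);    /* poll 1 */
        assert(emu_uart_read_status(&u) & UART_STATUS_TX_BUSY);    /* poll 2 */
        assert(!(emu_uart_read_status(&u) & UART_STATUS_TX_BUSY)); /* clear */
        return 0;
    }

Each such test is a bench discovery you never have to rediscover with a
scope.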
Sure thing, that's why we have scripts and analysis programs to suck data
out of logic & bus analyzers to profile interrupt performance, packet
latencies, etc. But an emulator only provides the roughest sort of
functional test.
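For what it's worth, the post-processing side is nothing exotic - roughly
this, in C, where the two-column "irq_assert_us,isr_entry_us" CSV is made
up and real analyzer exports vary:

    #include <stdio.h>

    int main(void)
    {
        double assert_us, entry_us, lat, sum = 0.0, min = 1e9, max = 0.0;
        long n = 0;

        /* e.g. fed a CSV exported from the bus analyzer on stdin */
        while (scanf("%lf,%lf", &assert_us, &entry_us) == 2) {
            lat = entry_us - assert_us;   /* interrupt latency per event */
            sum += lat;
            if (lat < min) min = lat;
            if (lat > max) max = lat;
            n++;
        }
        if (n > 0)
            printf("n=%ld min=%.1fus max=%.1fus mean=%.1fus\n",
                   n, min, max, sum / n);
        return 0;
    }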
And you find those bugs --> by debugging <--.
You describe a legacy situation - someone else invented this bizarro
hardware, and legacy situations require debugging to learn their
characteristics.
No, I don't. I describe a realtime system- or even a simple device
driver. There are lots of those, new and old.
Say you wrote some subsystem on this embedded system, and you
implemented some kind of test framework to help you get it as
functionally debugged as possible. What you've tested is that your inputs,
outputs and algorithms work in a fairly abstract and simplistic
situation. That's helpful for first-order, easy debugging, but you're
going to be doing it the hard way along with everybody else once the
software is on the hardware and some of the unknown idiosyncrasies of
the system start showing up. More of the unknown idiosyncrasies will
appear over time. And at this stage, your emulator is pretty much
useless, because nobody cares what it says when the real thing is
sitting in the lab & that's where the bugs are.
As you learn them, add tests about them to your emulator, to _approach_ a
state where a code failure causes a red flag in the tests _before_ it causes
an error situation near the hardware.
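Each one you learn can be a tiny assertion against the emulated behavior -
the two "discoveries" below are invented, but the shape is what matters:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* invented example: rev B silently drops writes to STATUS */
    static uint8_t emu_status_after_write(uint8_t before, uint8_t wr)
    {
        (void)wr;
        return before;            /* emulate the bug: the write is ignored */
    }

    /* invented example: the DMA engine rounds lengths down to 16 */
    static uint32_t emu_dma_effective_len(uint32_t requested)
    {
        return requested & ~15u;  /* emulate the bug: low bits dropped */
    }

    int main(void)
    {
        assert(emu_status_after_write(0x80, 0x00) == 0x80);
        assert(emu_dma_effective_len(100) == 96);
        puts("quirk regression tests passed");
        return 0;
    }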
Sounds great. You going to step up and write the emulator for the
embedded system, from the fpga glue, busses and cpu- prove that it is of
reasonable fidelity- and keep it in sync with the vhdl as it evolves
too? Be advised that hardware specs will continue to change (you'll
have to emulate all its bugs, or at least its most important ones- maybe
they're documented), and you'll also have to emulate as much as possible of
the expected and unexpected interface characteristics the system will
operate in. AND you have deadlines on your part of the delivered
software, whatever your testing methodology is- nobody is going to wait
for you to write an emulator before you start delivering code according
to the project schedule.
The last cpu emulator I worked with had writable registers that were
read-only on the variation of the cpu we're actually using... which is
not to say the emulator is useless; I use it for rough "does it crash on
boot" tests so I don't waste time on the real hardware. If you want to
call that a "test case", feel free- but it's not really testing much.
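And that smoke test is about this crude - in C, with the emulator command
line and the "boot ok" banner as stand-ins for whatever your toolchain and
firmware actually emit:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* hypothetical invocation; "timeout" bounds a hung boot */
        FILE *emu = popen("timeout 30 cpu-emu --image build/fw.bin 2>&1", "r");
        char line[512];
        int booted = 0;

        if (emu == NULL)
            return 2;
        while (fgets(line, sizeof line, emu) != NULL) {
            fputs(line, stdout);          /* keep the boot log on record */
            if (strstr(line, "boot ok"))  /* banner the firmware prints */
                booted = 1;
        }
        return (pclose(emu) == 0 && booted) ? 0 : 1;
    }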
The point is not to never debug. The point is to always seek ways to replace
any necessary debugging with test cases, so the remaining debugging is
manual labor of the highest value.
I'm not talking about web-apps; I'm talking about realtime systems on
dedicated hardware- or maybe just a simple device driver- and useful
"test cases" on that sort of thing essentially involve the real hardware
in situations as close as possible to what the system will experience in
the field.
Gregm