How much should I charge for fixed-price software contract?

  • Thread starter Robert Maas, see http://tinyurl.com/uh3t

Guest

Do you have any idea how stupid and arrogant that sounds?

Unfortunately, Jonathan is right.
It's not possible to put together a machine without money to buy the
components.

Corporations and individuals throw away computers daily.
I personally have thrown away a dozen computers, including several SGI
Indys (sob! I hated watching the garbage compactor crunch the 20-inch
monitor and the nifty blue cases), because they were too old to be worth
keeping around and not valuable enough to be worth the hassle of selling.
Like most people, I have also given away computers free.

If you were in Sydney, I would give you an AMD 1800+ system for free.
(It is gathering dust on a shelf behind me.)

Most computer hobbyists will have spare components and PCs.
Contact a local Mac, Linux, or PC user group and ask nicely if
they can put an item in their next newsletter about you needing
a free computer. I am sure that someone will provide one.
 

Chris Dollin

Tim said:
What do you mean "Not true"? There is lots of CGI stuff out there running
on nearly every platform you can think of - Windows, Mac, Linux - none
of which are Unix.

I do not think that what you wrote matches what you wanted to say. Linux
is a Unix, and Mac OS X is another Unix.
 

alex.gman

"Ranked 5th in the US: Putnam maths competition(undergraduate)"

That's one hell of an achievement. Robert, how did you sink so low?
Drugs? (It was the 60's!) Trauma? Insanity? What happened?
 

Michael Campbell

Although this thread has been fascinating, are we coming to the point
where we realize that sometimes, some people just cannot be helped?

My 4-year-old has this syndrome sometimes...
 

Greg Menke

Phlip said:
My admonition answers the common complaint, "I don't have time to write
tests; I'm too busy debugging".

Never heard that one myself. We routinely work out detailed end-to-end
test plans as well as unit tests, but as much as possible on the real
hardware. Automated where possible, certainly. Often, detailed testing
on an emulator is a waste of time because the emulator isn't high-fidelity
enough.

You have time to write tests. Writing an emulator gives you a framework to
hang all your discoveries about the real environment's bugs, and this in turn
frees up your schedule by automating as much low-value labor as possible.

Sure thing, that's why we have scripts and analysis programs to suck data
out of logic and bus analyzers to profile interrupt performance, packet
latencies, etc., but an emulator only provides the roughest sort of
functional test.

And you find those bugs --> by debugging <--.

You describe a legacy situation - someone else invented this bizarro
hardware, and legacy situations require debugging to learn their
characteristics.

No I don't. I describe a realtime system - or even a simple device
driver. There are lots of those, new and old.

Say you wrote some subsystem on this embedded system, and you
implemented some kind of test framework to help you get it as
functionally debugged as possible. What you've tested is that your inputs,
outputs and algorithms work in a fairly abstract and simplistic
situation. That's helpful for first-order, easy debugging, but you're
going to be doing it the hard way along with everybody else once the
software is on the hardware and some of the unknown idiosyncrasies of
the system start showing up. More of the unknown idiosyncrasies will
appear over time. And at this stage, your emulator is pretty much
useless because nobody cares about what it says when the real thing is
sitting in the lab and that's where the bugs are.

As you learn them, add tests about them to your emulator, to _approach_ a
state where a code failure causes a red flag in the tests _before_ it causes
an error situation near the hardware.

Sounds great. You going to step up and write the emulator for the
embedded system, from the FPGA glue, buses and CPU - prove that it is of
reasonable fidelity - and keep it in sync with the VHDL as it evolves
too? Be advised that hardware specs will continue to change (you'll
have to emulate all its bugs, or at least its most important ones - maybe
they're documented), and you'll also have to emulate as much as possible of
the expected and unexpected interface characteristics the system will
operate in. AND you have deadlines on your part of the delivered
software whatever your testing methodology is - nobody is going to wait
for you to write an emulator before you start delivering code according
to the project schedule.

The last CPU emulator I worked with had writable registers that were
read-only on the variation of the CPU we're actually using... which is
not to say the emulator is useless; I use it for rough "does it crash on
boot" tests so I don't waste time on the real hardware. If you want to
call that a "test case", feel free - but it's not really testing much.

The point is not to never debug. The point is to always seek ways to replace
any necessary debugging with test cases, so the remaining debugging is
manual labor of the highest value.

I'm not talking about web apps, I'm talking about realtime systems on
dedicated hardware - or maybe just a simple device driver - and useful
"test cases" on that sort of thing essentially involve the real hardware
in situations as close as possible to what the system will experience in
the field.

Gregm
 

Chris Sonnack

CBFalconer said:
I can't really remember when I last used a debugger. Judicious
printf statements, or the equivalent, have handled everything for
me for years.

And any programmer who can't do it that way isn't worth his salt.

But using an interactive debugger--when you do need to debug--is
so much easier, faster, and... well, more fun.

And, as Patricia wrote, it lets you poke around and investigate
things--that can be very helpful.

My bottom line is that you NEED to know how to do it the basic way,
and you need to be comfortable doing that, but given access to
advanced tools, there's no reason not to use 'em.
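
(For what it's worth, "judicious printf statements, or the equivalent" usually
end up behind some kind of compile-time switch. A minimal sketch, with an
invented macro name - not anyone's actual code:)

    /* trace.c - printf-style tracing that compiles away in release builds */
    #include <stdio.h>

    #ifdef DEBUG_TRACE
    #define TRACE(...) \
        do { fprintf(stderr, "%s:%d: ", __FILE__, __LINE__); \
             fprintf(stderr, __VA_ARGS__); fputc('\n', stderr); } while (0)
    #else
    #define TRACE(...) do { } while (0)   /* no-op when tracing is off */
    #endif

    int main(void)
    {
        int retries = 3;
        TRACE("entering retry loop, retries=%d", retries);  /* vanishes without -DDEBUG_TRACE */
        return 0;
    }

Build with -DDEBUG_TRACE to get the output; without it, the calls compile to nothing.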
 

Phlip

Greg said:
No I don't. I describe a realtime system- or even a simple device
driver. There are lots of those, new and old.

The define "legacy" as "requires debugging".
Say you wrote some subsystem on this embedded system, and you
implemented some kind of test framework to help you get it as
functionally debugged as possible. What you've tested is that your inputs,
outputs and algorithms work in a fairly abstract and simplistic
situation. That's helpful for first-order, easy debugging, but you're
going to be doing it the hard way along with everybody else once the
software is on the hardware and some of the unknown idiosyncrasies of
the system start showing up. More of the unknown idiosyncrasies will
appear over time. And at this stage, your emulator is pretty much
useless because nobody cares about what it says when the real thing is
sitting in the lab and that's where the bugs are.

This sounds like the code went through an emulation phase and then a real
thing phase.

Ideally, I would configure one button on my editor to run all tests against
the emulator, then run all possible tests against the hardware. If the code
fails in the hardware I would trivially attempt to upgrade the emulator to
match the fault, so the code also fails with the emulator. But this is
speculation, and of course it won't prevent the need to debug against the
hardware.
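
To make that concrete, here is a minimal sketch of what "upgrading the emulator
to match the fault" could look like, using the read-only-register surprise Greg
mentioned; every name below is invented for illustration:

    /* emu_quirk_test.c - encode a quirk learned in the lab into the emulated
       register model and pin it with a test (hypothetical names throughout). */
    #include <assert.h>
    #include <stdint.h>

    /* Emulated device model; on the real silicon, STATUS turned out to ignore writes. */
    typedef struct { uint32_t status; uint32_t ctrl; } emu_dev_t;

    static void emu_write_status(emu_dev_t *d, uint32_t v)
    {
        (void)d; (void)v;            /* quirk learned in the lab: the write is dropped */
    }

    static void emu_write_ctrl(emu_dev_t *d, uint32_t v)
    {
        d->ctrl = v;                 /* CTRL really is writable */
    }

    /* Pin the quirk: code that tries to clear STATUS by writing it now fails
       here, in the test run, instead of misbehaving on the bench. */
    static void test_status_register_ignores_writes(void)
    {
        emu_dev_t dev = { 0xFFu, 0 };
        emu_write_status(&dev, 0);
        assert(dev.status == 0xFFu);
        emu_write_ctrl(&dev, 7);
        assert(dev.ctrl == 7);
    }

    int main(void) { test_status_register_ignores_writes(); return 0; }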

Realistically, you are apparently relentlessly testing, and admittedly in a
"legacy" situation that prevents you from using tests to make the hardware
more bug resistant. Carry on!
 

Duane Bozarth

Phlip said:
The define "legacy" as "requires debugging".

That's a bizarre (at best) definition of "legacy"...

.....
Realistically, you are apparently relentlessly testing, and admittedly in a
"legacy" situation that prevents you from using tests to make the hardware
more bug resistant. Carry on!

Scratch the "legacy" and you have a reasonable description of much of
embedded system development.

The "trivial" upgrading of the emulator is <definitely> an ideal state
in most instances, too...
 

Ben Pfaff

Do you have any idea how stupid and arrogant that sounds? Basically
you're saying that only people who already have lots of money to buy
the latest equipment out of their own magic funds, should ever be
allowed to work. Nobody, in your view, should ever be allowed to work
to earn the money to buy the stuff you think we should already have.

I understand from your earlier articles that you live in the San
Francisco Bay area. In that case, you have little excuse for not
having better hardware. You can easily follow the craigslist
"free" section and follow up on the free computers there. Within
a few days or weeks you'd be sure to get one better than the
machine you say you're stuck with now.
 

Phlip

Duane said:
That's a bizarre (at best) definition of "legacy"...

That's why Greg didn't understand why I used it like that.

Me: Strive to never debug.

Greg: What about blah blah blah.

Me: You are using something that you can't design
fresh from scratch to resist bugs. So you must
run the debugger more often than with greenfield code.

Greg: It's not "legacy" it's embedded blah blah blah

My point is you must frequently debug it, just as you must frequently debug
user-level code that someone wrote without good unit tests. "Legacy" code.

The "trivial" upgrading of the emulator is <definitely> an ideal state
in most instances, too...

If you can't, then skip it. (That's also a rule when attempting to TDD
legacy code.)

Just don't leave the emulator out of the loop. Greg implied using it would
slow down the tail end of development.
 

So would you recommend the keywords be listed in logical sections, such
as programming languages in one list, platforms in another list,
application areas in another list, etc., or should I just mix all the
unrelated keywords together in one huge alphabetical list to make it
easy for the junior staff member to find the keywords he/she is looking
for?

Robert,

I, personally, write my resume so the company knows who I am and what I
can do for them. If they list requirements I will make it INCREDIBLY easy
for them to see that I meet or exceed their requirements. If the company
uses someone who just scans the resume for keywords then most likely they
are scanning the resume for keywords relating to the requirements for the
position.

If they are just looking for a list of unrelated keywords, then why would
you want to work for them?

For example, if you apply for a job at IBM they have you do an online
application. The application will quiz you on your skill set and level of
experience. You will notice that all the technologies they quiz you about
are also listed in the job ad requirements.

Which is the better keyword for that: assembler or assembly-language?
Or should both be included in case the junior staff member is looking
for the other one and doesn't realize they mean the same thing?

I never worry about this sort of thing. If the company is not going to put
forth the effort to know assembler and assembly-language are the same
thing, then why would I want to work for them? I'm willing to put a great
deal of effort into applying for a job but I expect the employer to put
some effort into it as well.
 

Duane Bozarth

Phlip said:
That's why Greg didn't understand why I used it like that.

I didn't either (and still don't) because it has nothing whatsoever to
to w/ "legacy" or not...
Me: Strive to never debug.

Greg: What about blah blah blah.

Me: You are using something that you can't design
fresh from scratch to resist bugs. So you must
run the debugger more often than with greenfield code.

Greg: It's not "legacy" it's embedded blah blah blah

In that sense everything is "legacy" -- I can't redesign a commercial
compiler, either.

....
Just don't leave the emulator out of the loop. Greg implied using it would
slow down the tail end of development.

At some point in most embedded systems, that <is> true...you get to a
point at which the depth of emulation required isn't worth the effort
that would be required. Once at that point, reverting is rarely
productive use of resources.
 

Only an idiot would make it that easy for a potential employer to see
that I'm over 40 and toss my resume in the trash without even glancing
at the rest of it. You want me to be an idiot, I presume?

Why would you presume Tim X wants you to be an idiot? If you talk to me
that way I'd drop you in my kill file. People here are trying to help you;
they have nothing to gain from it. Why would you assume they are trying to
make an idiot out of you?

Besides, if I apply to a company and hide information from them I'd feel
I'm already off to a bad start with them. I've seen that finding good
people is hard. Why would I eliminate someone just because they are over
40? Why would you want to work for a company that discriminates against
the elderly?
My 2005.June resume includes that information. My 1998 resume, and the
early-2003 rearrangement of it by my job coach at Focus for Work, didn't
include those classes because they didn't start until 2003 Summer.


How?

If you are in your 40s then you were an undergrad in the early 1980s. Why
are you talking about something you did in the 80s? When you get the
interview and the employer looks at you they are going to wonder why you
are talking about things that happened two decades ago.
That's not important in a resume. I'm just trying to get a programming
job. If they want more info about my published papers, they can ask me
during a telephone interview. In fact I was never told where the NMR
paper was published, and I wasn't given a pre-print as I was promised.


Why? Each language is somewhat different on each different platform.
The fact I've used the languages on lots of platforms shows I've used a
wide range of versions in a wide range of environments, hence some of
what I've done is likely to be similar to whatever the employer might
want. Somebody else who has never used any of the languages except on
an Amiga, might not have the foggiest idea how to interface to system
utilities on other systems, and might not even be aware that different
platforms have different system interfaces, and might be totally
stumped when Amiga software doesn't run immediately elsewhere. I'm
trying to show my versatility of experience, showing my ability to
adapt in the past and have a variety of experience possibly useful in
the future.

A company will probably receive a few hundred resumes for a position. They
will spend a minute (or less) looking at each one until they short list it
to 2 or 3 dozen resumes. You have two jobs when writing a resume. The
first is not getting cut from the short list. The second is getting an
interview.
On the other hand, anyone who had never used databases at all could lie
and write "used databases" on their resume, but only somebody who knew
a little about the technical details could include all the specific
things that I did, thereby proving I am not lying that I wrote JDBC
software etc.

Don't assume you have to convince the person reading your resume. If you
give an example of where you used JDBC then I'd trust you. It would be
during the interview that I would expect the details. That is when I see
if you lied on your resume.
Because although it was the best index on the net from 1991 to about
1995, since then it's been made obsolete first by Yahoo then by Google.
Nobody would want to look at it online now, but still it's impressive
that I created the very first toplevel meta-index to the InterNet
before Yahoo got the idea to start theirs.

What have you done recently? I did some really impressive things in the
'80s. I don't list them on my resume because companies only care about the
last time you did something impressive. If it wasn't recent, then they
don't care at all.

Bottom line, the purpose of writing a resume is to get an interview. IT IS
NOT TO GET A JOB. The purpose of the interview is to get a job offer.

Getting an interview is a job. How would you go about a software project
on a platform you never used and in a new language? Apply the same ideas
to getting an interview. Do some research. Try a few things. Get feedback.
The first few resumes will crash and burn. Learn from your mistakes. Talk
to the people who rejected it. Talk to people who hire but are not
currently looking. Find out from them what the process looks like from
their side.
 

alex.gman

Robert, I have an idea for you. Legally change your name to John
McCarthy. It will be easier to find a Lisp job then.
 

Russell Shaw

Pascal said:
And this in a remote place on a remote continent whose inhabitants live
in trees. So imagine how easy it should be to garbage-collect
top-notch computing hardware in the middle of Silicon Valley!

and that's after fighting off the feral koalas and dislodging
the redbacks from the keyboard ;)
 

gds

(Followups reset)

Why would you presume Tim X wants you to be an idiot? If you talk to me
that way I'd drop you in my kill file. People here are trying to help you;
they have nothing to gain from it. Why would you assume they are trying to
make an idiot out of you?

I have heard varying opinions on whether one should put something on
their resume that indicates their age, such as the year they graduated
from college. I have decided that since (1) I can't hide this
information from anyone who really wants/needs to know it, and (2) it
might actually be valuable to an employer to know when I graduated, in
order to have a frame of reference about my educational background,
I'll include it.
Besides, if I apply to a company and hide information from them I'd feel
I'm already off to a bad start with them. I've seen that finding good
people is hard. Why would I eliminate someone just because they are over
40? Why would you want to work for a company that discriminates against
the elderly?

Personally, I would not want to work for such a company. I suspect
REM is in a far more desperate situation, so such a company might be
attractive to him if they would hire him (after learning he is capable
of doing the work, but before learning how old he is).
If you are in your 40s then you were an undergrad in the early 1980s. Why
are you talking about something you did in the 80s? When you get the
interview and the employer looks at you they are going to wonder why you
are talking about things that happened two decades ago.

I've heard mixed opinions here also. In some interviews I've had,
questions were asked about things I'd done in the 1980s that were
pertinent to the work the company was currently doing. In other
interviews, no one cared about anything that I'd done prior to the
early 1990s. I think it behooves someone to put things on their
resume that are pertinent to the work the company is doing, even if
the work was done more than five or so years ago.

--gregbo
gds at best dot com
 

George Neuner

If you can think of the next _line_ of code to write, you must perforce be
able to think of a complementing test case that would fail if the line were
not there. Just make writing the test case part of writing that line.

In principle you're right ... but it's rarely that simple.

I support testing and assertions wherever possible, including
intra-procedure if there is some meaningful test that can be done on a
partial result - but testing line by line is ridiculous. It may be
extremely difficult or quite impossible to figure out what would
happen if a particular line of code is wrong or missing. It's also
unwieldy because such tests frequently can't be batched but must be
executed in-line due to following code altering the test conditions.

I didn't agree with this "line by line proof" approach in 1981 when
Gries proposed it in "The Science of Programming" and I don't agree
with it now. YMMV.

I used to do image processing in which the processing was a dependent
chain of fuzzy logic. All the possible answers were wrong in some
absolute sense - and the object was to find the answer that was least
wrong. If I missed a minor step or had a bug in a calculation
somewhere, the chances are I wouldn't be able to tell by looking at
the intermediate results.


George
 

Phlip

George said:
In principle you're right ... but it's rarely that simple.

Yes it is. You are considering the harder task of retrofitting
acceptance-level tests to the outside of finished APIs. Not the simpler and
more direct task of writing tests that fail so you can write a line or two
of behavior to pass them.

I support testing and assertions wherever possible, including
intra-procedure if there is some meaningful test that can be done on a
partial result - but testing line by line is ridiculous. It may be
extremely difficult or quite impossible to figure out what would
happen if a particular line of code is wrong or missing.

That's why you run the test and ensure it fails for the correct reason,
before writing the code to pass the test. The test doesn't have to be
exact - it doesn't need to constrain your code to only one possible new line
[or two]. You write more tests until all of them constrain.
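
A tiny sketch of that rhythm, with made-up names - first the test runs against
a deliberate stub so it fails for the correct reason, then the line or two of
behavior goes in:

    /* tdd_step.c - one red/green step (hypothetical example) */
    #include <assert.h>

    /* Step 1 ("red"): a stub that deliberately fails the new test, e.g.
         static int clamp(int v, int lo, int hi) { (void)lo; (void)hi; return v; }
       Step 2 ("green"): the line or two of behavior that passes it: */
    static int clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    static void test_clamp_constrains_both_ends(void)
    {
        assert(clamp( 5, 0, 10) ==  5);
        assert(clamp(-3, 0, 10) ==  0);  /* the stub fails here - the "correct reason" */
        assert(clamp(42, 0, 10) == 10);
    }

    int main(void) { test_clamp_constrains_both_ends(); return 0; }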

It's also
unwieldy because such tests frequently can't be batched but must be
executed in-line due to following code altering the test conditions.

Design-for-testing implies you have just a little more API, and you use it
to detect these intermediate states. The requirement to write tests first
makes such API adjustments easier.
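
A sketch of what that "little more API" can look like, with invented names -
the routine keeps its intermediate result where a batched test can read it
back afterwards, instead of forcing the check in-line:

    /* dft_sketch.c - a small extra accessor exposes intermediate state */
    #include <assert.h>

    typedef struct {
        int raw_sum;     /* intermediate state, kept around so tests can inspect it */
        int average;
    } averager_t;

    static void averager_run(averager_t *a, const int *samples, int n)
    {
        int i;
        a->raw_sum = 0;
        for (i = 0; i < n; i++)
            a->raw_sum += samples[i];
        a->average = (n > 0) ? a->raw_sum / n : 0;
    }

    /* The "little more API": read-only access to the intermediate result. */
    static int averager_raw_sum(const averager_t *a) { return a->raw_sum; }

    static void test_intermediate_and_final_results(void)
    {
        averager_t a;
        int samples[3] = { 2, 4, 6 };
        averager_run(&a, samples, 3);
        assert(averager_raw_sum(&a) == 12);  /* checked after the fact, not in-line */
        assert(a.average == 4);
    }

    int main(void) { test_intermediate_and_final_results(); return 0; }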

I didn't agree with this "line by line proof" approach in 1981 when
Gries proposed it in "The Science of Programming" and I don't agree
with it now. YMMV.

That was probably something different, and TDD is not a "proof" system.

I used to do image processing in which the processing was a dependent
chain of fuzzy logic. All the possible answers were wrong in some
absolute sense - and the object was to find the answer that was least
wrong. If I missed a minor step or had a bug in a calculation
somewhere, the chances are I wouldn't be able to tell by looking at
the intermediate results.

Right. Such tests can be very fragile and hyperactive. Hence, run them
_more_ often, and if they fail unexpectedly use Undo to back out your
change, then think of a smaller change that doesn't change an irrelevant
side-effect. Tests that constrain too much are better than ones you run
infrequently.
 

Peter Ammon

Phlip said:
Tim X wrote:

.....

No. I'm talking about developers who don't write unit tests as they write
code. These provide the option of using Undo, instead of debugging, when the
tests fail unexpectedly.

This leads to a development cycle with highly bug-resistant code, and
without proactive debugging to implement new functions.

Yes, you still need the debugger - typically for legacy situations - and you
still need elaborate debugging skills. New code stays ahead of them.

The idea that we can implement without debugging is incomprehensible to most
programmers. But that really is what I meant.

I've been waiting for someone to make this claim about unit testing.
Guess I'll pick on you :)

I don't believe that unit testing eliminates debugging. I'll give a
real life example.

We had a bug where an update to our library disabled some features of a
client program. After some investigating, it was determined that the
client program was doing this:

if (library_version == 3)

instead of this:

if (library_version >= 3)

Our fix for this bug was to detect the client and report 3 for the
version if the client was the offending program, and otherwise report
the true version.
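
(The real code wasn't posted; purely for illustration, the shape of that
workaround is roughly this, with invented names and an invented
client-detection test:)

    /* version_shim.c - compatibility shim of the kind Peter describes */
    #include <assert.h>
    #include <string.h>

    #define REAL_LIBRARY_VERSION 4            /* whatever the current version really is */

    /* however the offending client is actually identified - invented here */
    static int client_has_broken_version_check(const char *client_name)
    {
        return client_name != NULL && strcmp(client_name, "OffendingApp") == 0;
    }

    int library_version_for_client(const char *client_name)
    {
        if (client_has_broken_version_check(client_name))
            return 3;                         /* keep the "== 3" client working */
        return REAL_LIBRARY_VERSION;          /* everyone else gets the true version */
    }

    int main(void)
    {
        assert(library_version_for_client("OffendingApp") == 3);
        assert(library_version_for_client("WellBehavedApp") == REAL_LIBRARY_VERSION);
        return 0;
    }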

This is the sort of bug that gets caught in integration testing. I
can't think of any way that unit testing would have helped this
situation. I'd be very interested in hearing how this sort of bug would
be approached in the test-first "no debugging" philosophy.

-Peter
 

Phlip

Peter said:
I've been waiting for someone to make this claim about unit testing. Guess
I'll pick on you :)

Oh goody 'cause I didn't claim it. [I think.]

I don't believe that unit testing eliminates debugging. I'll give a real
life example.

Some TDD authors prevaricate and say "/virtually/ bug free". My emphasis. I
prefer to prevaricate more accurately and usefully.

TDD produces bug-resistant code, reducing the odds of long bug hunts.
The kind that typically require debugging and/or trace statements.

We had a bug where an update to our library disabled some features of a
client program. After some investigating, it was determined that the
client program was doing this:

if (library_version == 3)

instead of this:

if (library_version >= 3)

Our fix for this bug was to detect the client and report 3 for the version
if the client was the offending program, and otherwise report the true
version.

TDD works in tiny steps. Upgrading a library is a big step (and another
"legacy" situation). Fortunately, your tests let you roll back to the
previous library version for frequent sanity checks (and emergency
releases), until you debug the situation.

I'd be very interested in hearing how this sort of bug would
be approached in the test-first "no debugging" philosophy.

There is no test-first "no debugging" philosophy. Test cases make an
exquisite platform for debugging. For example, in VB[A] I use Debug.Assert
in a test case, and failure raises the debugger. Then I move the current
execution point back to the called method, step inside it, fix the code to
pass the test, and resume running the remaining tests.

Without TDD, this is the Unholiest of Unholies - programming in the debugger
to generate spaghetti code. With TDD, it's nothing more than leveraging your
tools effectively.
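
The same workflow falls out of a plain assert in other languages: a failing
assert in a test aborts the process, and if the tests run under a debugger it
halts right at the broken check, where you can poke around, fix the code, and
re-run. A rough C sketch, with a deliberately unfinished function so the
failure path is visible; all names are invented:

    /* assert_as_breakpoint.c - run under a debugger (e.g. gdb); a failing
       assert raises SIGABRT and the debugger stops at the broken check. */
    #include <assert.h>

    static int parse_flag(const char *s)
    {
        return s != NULL && s[0] == '1';   /* deliberately incomplete: "yes" not handled */
    }

    static void test_parse_flag(void)
    {
        assert(parse_flag("1") == 1);
        assert(parse_flag("yes") == 1);    /* fails: under a debugger, execution halts here */
        assert(parse_flag("0") == 0);
    }

    int main(void) { test_parse_flag(); return 0; }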

When people use TDD, they generally report surprise that the incidences of
_punitive_ debugging, with absolutely no other recourse, go way down. And
some teams (in greenfield projects with user-level code) indeed never turn
on their debugger, and code for years without it.
 
