Tom Anderson said:
This sounds like excellent policy.
By hand? No automated testing of the complete app through its web
interface?
Also, presumably, before you deploy to production, you deploy to a staging
or pre-production environment which replicates the production environment,
and where you do some more thorough (manual or automatic) testing, right?
Or rather, you have someone else - a QA or client team - do it?
We have a team of dedicated testers (they've come up through the ranks on
the business side and know exactly what should happen), and quite a few
others that do it part-time as required (again, business-side people who
know the business rules to a T). By the time a production build is
produced, a test build - identical except for some configuration - has
been deployed in a similar environment and tested for weeks (or months).
The idea of using automated tests for the web GUI has been bruited, but in
our environment it would be unrealistic.
It might be more accurate to say that I am quite concerned to make sure
that the test builds are correct, because a lot of time can be wasted if a
tester reports that an error is still there, and then it's a question of
whether the error is still there because the build is flawed, or because
the developer who "fixed" it only fixed it for a different use case, or
only in his own development environment.
So getting things wrong in the build is not actually posing a threat to
the production site. I'm certainly not saying that you shouldn't bother
taking precautions to get the build right, but it's not quite the Mad Max
scenario you mention if it goes wrong!
That's correct - if things were obviously wrong the first business day after
a new deployment (it just so happens that tomorrow AM is one such), then
we'd quickly deploy the previous EAR and figure out what went wrong.
Yikes. Has your experience really been that bad? We only use CVS through
Eclipse, and haven't had much trouble - we did hit a snag at one point
about Eclipse being out of sync with what was on disk, so now we always do
a refresh before synchronising. But that's it.
It's been bad enough. In Eclipse I avoid the team synchronization stuff as
much as possible. I find it much easier to do my svn status and svn update
etc on the CL, and refresh in Eclipse.
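The round trip is nothing fancy - roughly this, with an Eclipse refresh
(F5 on the project) at the end so the IDE picks up what changed on disk:

    svn status     # see what's changed in the working copy
    svn update     # pull down everyone else's changes
    # then refresh the affected projects in Eclipse (F5)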
In order to keep my SVN commits as close as possible to a logical
changeset, I find it easier to "svn status" into a file, edit it, and then
use the --targets option. It is of course doable in Eclipse, but when
you're wanting to select 15 or 20 different files in a dozen different
spots in 3 or 4 separate projects, it gets a bit busy in the IDE.
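Concretely, it goes something like this (a sketch - the filename and log
message are just placeholders):

    # dump the working-copy status to a file
    svn status > changes.txt
    # edit changes.txt down to just the files in this logical changeset,
    # stripping the leading status columns (e.g. "M       ");
    # then commit exactly those paths:
    svn commit --targets changes.txt -m "one logical change"

--targets reads the list of paths to operate on from the file, one path
per line, which is why the status columns have to go.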
That also sounds like an excellent idea. We have a nightly build (well,
actually, we don't at the moment, because the thing we're working on takes
a lot of manual intervention to build, and we haven't invested the time in
automating it yet), but i'd love to have a build and test after every
checkin. With a klaxon and flashing light that goes off if it's broken.
Hell yes, a set of traffic lights - green if the current version in CVS is
good, amber if there's just been a checkin and it's currently undergoing
testing, and red if it's broken!
We pretty much have the klaxon and flashing lights.
If a build breaks
then Hudson emails every developer, and the email tells you what the base
event was (whose commit), and what broke (if JUnit tests then you follow the
emailed link and drill down). Hudson itself is quite easy to configure, and
jobs (say one for trunk, one for each feature branch etc) are also very easy
to configure. What's also nice is that the Ant build.xml (if that's what
you choose to use) that you point Hudson at is exactly the same file that
you'd use in your dev environment, or in test or production.
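For instance, if the Hudson job just invokes Ant like this (the target
names are an assumption about your build.xml), a developer can run the
identical command locally:

    # the same invocation Hudson runs on each build;
    # "clean" and "test" are assumed targets in build.xml
    ant -f build.xml clean test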
In fact, i fantasise about having a build and test running on a pre-commit
hook, so that if you try to check in code that doesn't build and run, it
gets rejected! This is why i was thinking about an automatic dependency
tracking system, in fact, so you wouldn't need to recompile everything to
do this. Obviously you'd also need a fairly fast set of unit tests - you'd
need to flag anything slow as not to be run on checkin.
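The skeleton would presumably look something like this (very much a
sketch: Subversion hands the hook the repository path and transaction id,
but materialising the in-flight transaction as a buildable tree is the
hard part, and the working-copy path and "quick-test" target here are
inventions):

    #!/bin/sh
    # sketch of an SVN pre-commit hook that gates commits on a fast build
    REPOS="$1"
    TXN="$2"
    # log what this commit touches
    svnlook changed -t "$TXN" "$REPOS"
    # hand-wavy part: ideally you'd build the transaction's own tree;
    # here we settle for a server-side checkout of trunk (made-up path)
    # and an assumed fast Ant target
    cd /srv/precommit-wc && svn update -q && ant -q quick-test || {
        echo "quick-test failed; commit rejected" >&2
        exit 1
    }
    exit 0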
tom
This is one thing we don't enforce. It has not been an issue to date.
With the exception of tests that are slow by their nature, I haven't seen
large test suites take so long to run that a developer couldn't execute them
on each commit. On the specific app I refer to we have probably close to
3000 JUnit tests, and it's on the order of a minute on my local box to run
them all. They're typically not trivial tests either.
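That sort of number is easy to sanity-check from the shell (assuming an
Ant "test" target that runs the whole JUnit suite):

    # time the full unit-test suite; the "test" target is an assumption
    time ant -q test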
Speaking of tests and Hudson, one other handy thing to hook in is test
coverage, such as EMMA. It just gets added to the script that Hudson is
provided - you end up with nice graphs of coverage at various levels. IMO
this is indispensable (even in a TDD environment) for staying on top of
whether your tests are sufficiently blanketing the codebase.
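If you want the same numbers locally without touching the build script,
EMMA's on-the-fly mode works too - roughly this (the classpath entries
and the suite class are made up):

    # run the JUnit suite under EMMA's on-the-fly instrumentation
    # and emit HTML and XML coverage reports
    java -cp emma.jar emmarun -r html,xml \
         -cp classes:lib/junit.jar \
         org.junit.runner.JUnitCore com.example.AllTests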
AHS