Noons said:
Joe, the notion that DBAs are somehow "popes" is today
old hat. It's an old TCO argument from the Microslop mob.
Proven false ages ago. DBAs nowadays are simply trained monkeys
nursing third-party apps. And little else. And have been so for
years.
Well, I have more respect for them than that... And the strident
opinions some of them hold may have to do with doing hard, complicated
work that is often under-appreciated. It is also true that the
bulk of problems with these third-party apps would not exist if they
had been written with 1/10th the DBMS understanding that the DBA has.
Customer success stories on web sites? Come on! Do you
expect that to impress someone? A little old lady, maybe?
I *never* said Sybrand is a little old lady!
But to be serious,
even though some sites' content is as trustworthy as those "As seen on TV"
advertisements, BEA is posting big and referenceable successes.
The question is not if BEA has a good middleware engine.
There is no doubt in my mind it does. Probably the best.
You are a gentleman and a scholar, Sir. If we succeed in returning this
thread to the topic of 'where should we put logic and/or data?', I would
be interested to hear your opinion on the affirmative value of middleware.
It seems that even when Oracle itself, asked to show how fast and scalable
its DBMS is, says "OK! Watch what we can do with BEA's transaction
monitor!", some DBMS-biased folks still say "It's the DBMS, stupid".
The question is how much of, and what kind of, code should be in that
middle tier. Which has got nothing to do with your experience
at Sybase/BEA, mine at IBM/Univac/Prime/Oracle/world, or the price
of fish, in fact. Or rather: it has everything to do with it.
It's only on the basis of experience with prior environments and
technologies that we can hope to make proper decisions regarding
which is the best way to go.
Well, you aren't the only one who gets to determine what the question
is... If you follow the thread as it actually progressed and morphed,
it certainly *started* where you say, and we can try to slew it back
on course, but when I stated my DBMS experience, it was a specific
and responsive retort to a pompous supposition that I couldn't have
had such experience. And note that experience doesn't have *everything*
to do with it. Certainly much of what we can present is based on our
experience, but there is an objective reality out there, and to the
extent that our individual models of the universe are accurate, and
to the extent that our extrapolations of experience to a general model
of the world are accurate, we *do* synthesize truth, and our individual
experiences become less important as we all meet at the scientifically
correct answers.
Your point is important though. Folks' statements that "in all my
experience such-and-such is always a problem or always right" lose
value as soon as there is any controversy. We should be as rigorous
as we can with logical bases for our claims. For an example, independent
of my experience of this, is it *logical* to believe that it might yield
an objectively measurable performance benefit to alter an application
suite to cache, daily, a single list of all the names of the countries in
Europe, and have all the user interfaces refer to this list, rather than
have the user application base query the DBMS for this information hundreds
of thousands of times per day?
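To make the example concrete, here is a minimal Java sketch of such a
middle-tier cache. It is only an illustration: the class name, the
countries table, its columns, and the once-a-day refresh policy are all
hypothetical, not anyone's actual product code.

    import java.sql.*;
    import java.util.*;
    import javax.sql.DataSource;

    public class CountryCache {
        private static List names = Collections.EMPTY_LIST;
        private static long loadedAt = 0L;
        private static final long ONE_DAY_MS = 24L * 60 * 60 * 1000;

        // All user interfaces read from here; the DBMS is queried at
        // most once per day instead of hundreds of thousands of times.
        public static synchronized List getCountryNames(DataSource ds)
                throws SQLException {
            if (System.currentTimeMillis() - loadedAt > ONE_DAY_MS) {
                List fresh = new ArrayList();
                Connection c = ds.getConnection();
                try {
                    Statement s = c.createStatement();
                    ResultSet r = s.executeQuery(
                        "SELECT name FROM countries WHERE region = 'Europe'");
                    while (r.next()) {
                        fresh.add(r.getString(1));
                    }
                } finally {
                    c.close();
                }
                names = fresh;
                loadedAt = System.currentTimeMillis();
            }
            return names;
        }
    }

Every client then calls CountryCache.getCountryNames(ds), and the DBMS
sees one query per day for data that changes, at most, about as often.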
Data is stale (especially with Oracle's multiversion concurrency control)
as soon as the DBMS reader process releases the low-level page semaphore. The
question is "How stale do I want my data to be and still be comfortable using
it for decisions?" The most conservative, belt-and-suspenders choice would be
implemented with the serializable isolation level (and for Oracle, we also need
'SELECT ... FOR UPDATE' to really disallow read data being changed during the tx).
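For what it's worth, a hedged JDBC sketch of that belt-and-suspenders
route follows; the accounts table and its columns are made up purely
for illustration.

    import java.sql.*;
    import javax.sql.DataSource;

    public class SerializableRead {
        // Serializable isolation plus FOR UPDATE: nothing we read can
        // change under the transaction, but readers and writers now queue.
        public static int readBalance(DataSource ds, int id)
                throws SQLException {
            Connection c = ds.getConnection();
            try {
                c.setAutoCommit(false);
                c.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                Statement s = c.createStatement();
                ResultSet r = s.executeQuery(
                    "SELECT balance FROM accounts WHERE id = " + id +
                    " FOR UPDATE");
                if (!r.next()) throw new SQLException("no such account");
                int balance = r.getInt(1);
                // ... make decisions and write here, fully protected ...
                c.commit();
                return balance;
            } finally {
                c.close();
            }
        }
    }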
Now we ask "why not do this always?", and the answer is "cost, in concurrency
and performance". Oracle's optimistic concurrency allows much more throughput in
real terms, in exchange for the possibility of occasional tx failure and the need
to retry. Now my question is "Why should this beneficial use of optimistic
concurrency stop at the DBMS?". I contend that middleware is built to be much better
and faster at caching, distributing, synchronizing and serving data in any regime
where optimistic concurrency is beneficial and/or performance is crucial.
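Again only as a sketch, here is what the optimistic pattern can look like
in the middle tier, using the common version-column convention. All table
and column names are hypothetical.

    import java.sql.*;
    import javax.sql.DataSource;

    public class OptimisticDebit {
        // No lock is held between the read and the write; an occasional
        // lost race costs a rollback and a retry, not a queue of waiters.
        public static void debit(DataSource ds, int id, int amount)
                throws SQLException {
            for (;;) {
                Connection c = ds.getConnection();
                try {
                    c.setAutoCommit(false);
                    Statement s = c.createStatement();
                    ResultSet r = s.executeQuery(
                        "SELECT balance, version FROM accounts WHERE id = " + id);
                    if (!r.next()) throw new SQLException("no such account");
                    int balance = r.getInt(1);
                    int version = r.getInt(2);
                    // The write succeeds only if nobody changed the row
                    // since we read it.
                    int n = s.executeUpdate(
                        "UPDATE accounts SET balance = " + (balance - amount) +
                        ", version = " + (version + 1) +
                        " WHERE id = " + id + " AND version = " + version);
                    if (n == 1) {
                        c.commit();
                        return;   // our read was still fresh; done
                    }
                    c.rollback(); // lost the race; loop and retry
                } finally {
                    c.close();
                }
            }
        }
    }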
So, can we go back to defining what should go where? That it should go
somewhere is a given.
I'd welcome it. Those who get emotional and derogatory too quickly should
try to be more analytical, both to avoid derailing the subject and to
support their position more credibly.
Joe Weinstein at BEA