choices regarding where to place code - in the database or middle tier


Joe

Hi -

Over the last several versions of Oracle, developers have been provided with
a pretty revolutionary idea for a database product - namely the ability to
write code that used to belong in the middle tier and store it in the
database. I'm referring here to the ability to write stored procedures in
Java.

Now of course, Microsoft with their SQL Server product is doing the same
thing. The next version of SQL Server will allow programmers to write
stored procedures in any of the .NET languages.

I'm interested in looking in more depth at the increased choices developers now
have because of these new features, developing some best practices
on the subject, and possibly publishing an article on the topic.

I personally am more experienced with SQL Server than with Oracle. I am
therefore looking for someone who has been involved with making these
choices in the Oracle environment who would like to collaborate with me on
the subject.

If you are interested, please contact me at (e-mail address removed)

Thank you

Joe Lax
 

Joe Weinstein

Joe said:
Hi -
Over the last several versions of Oracle, developers have been provided with
a pretty revolutionary idea for a database product - namely the ability to
write code that used to belong in the middle tier and store it in the
database. I'm referring here to the ability to write stored procedures in
Java.

Now of course, Microsoft with their SQL Server product is doing the same
thing. The next version of SQL Server will allow programmers to write
stored procedures in any of the .NET languages.

Hi. My 2 cents: it's more of a reactionary idea than a revolutionary one.
The market growth and functionality growth since the early 90's has been in the
middle tier. The internet killed client-server, which was in fact already dead when the
DBMS vendors, even in benchmarking their products in the standard and artificially
DBMS-focused TPC-C tests, began needing a real middle-tier transaction
monitor/processor to get the maximum out of their DBMS. They continue to do this
today, and in the current top DBMS TPC-C record Oracle uses BEA's Tuxedo. (My interest
becomes apparent.)
Productivity during development requires the new tools, standards and languages.
However, that's not enough. If you really want enterprise-class applications
with the performance to handle the unlimited user base that the internet has
provided us, don't let yourself be seduced back to a DBMS-as-center-of-the-
universe architecture. The DBMS vendors would like this, but the fact is that
the DBMS already has enough work on its plate doing the core ACID transactions.
It operates in a crucial but expensive isolation model that you don't want to
waste on catalog browsers. Think of a restaurant, and the DBMS as the chef.
If you want to scale beyond the 6-stool beanery, the customers don't interact
as-a-rule directly with the chef. There is an efficient middle tier of waitresses
to concentrate 'chef-access' to a few high-volume channels. Furthermore, for the
percentage of frequently-requested items, there is an independent cache which
the chef fills asynchronously, and the waitresses tap this cache to serve customers
without ever involving the DBMS except to occasionally say "Gravy's out!".
If you want to get to "Millions served" you would be wise to develop a
powerful independent middle tier to do all it can to serve those millions, and
to control/optimise the load on, and output of the DBMS. Lastly, consider the
world where transactions involve more than one independent resource. Customers
nowadays tend to want the best of everything with one click of a mouse. This is
like a catered wedding where she wants the soup from "Chez Fancy", and the canapes
from "Chin's Canape Castle". You really need that caterer guy with the funny
accent as an independent middle tier to handle the logistics to make it one
transaction.
Render unto Caesar (the DBMS) that (only) which is Caesar's. Sure, do your
heavy data grinding where the data is, in the DBMS. You build your saw mills
where the trees are, but: Let's say you're in Guam, and you want a box of
toothpicks and a dining room table. It is silly to call the Great North West Mill
for logs, but it is also silly to call the Great North West Mill for the toothpicks
and table. Something smart and independent in the middle, like www.walmart.com
is where the money, efficiency and solution is.
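The restaurant analogy above maps onto a very small amount of code. Below is a minimal sketch of a middle-tier read-through cache with a time-to-live, in Python for brevity; `ReadCache`, `fetch_from_dbms`, and the TTL value are invented names for illustration, not any vendor's API.

```python
import time

class ReadCache:
    """Minimal middle-tier read-through cache with a time-to-live (TTL).
    Hot, slowly-changing data (the gravy) is served from the cache; the
    DBMS (the chef) is consulted only on a miss or once an entry is stale."""

    def __init__(self, loader, ttl_seconds=60.0):
        self.loader = loader      # function that actually queries the DBMS
        self.ttl = ttl_seconds
        self._store = {}          # key -> (value, time the value was loaded)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]       # fresh enough: no DBMS round trip at all
        value = self.loader(key)  # miss or stale: one DBMS hit refills it
        self._store[key] = (value, time.monotonic())
        return value

# Hypothetical loader standing in for a real DBMS query.
calls = []
def fetch_from_dbms(key):
    calls.append(key)
    return "rows for " + key

cache = ReadCache(fetch_from_dbms, ttl_seconds=60.0)
cache.get("menu")    # first request goes to the "DBMS"
cache.get("menu")    # second request is served entirely from the cache
print(len(calls))    # -> 1
```

Ten thousand catalog browsers then cost the DBMS roughly one query per TTL window per key, instead of one query per browser.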

Joe Weinstein at BEA
 

Daniel Roy

Hi Joe,
I am a Siebel configurator/programmer (Siebel is a "Customer
Relationship Management" software, which can be considered analogous
to SAP). My personal experience with the issue which interests you is
that as much as possible should be stored in the database. Siebel, by
some twisted reasoning about compatibility of code on various
databases (it runs on Oracle, SQL Server and DB2), decided to keep
almost all the code (including referential integrity!) in the middle
tier. As a result, on ALL the projects I've been a part of, we have
had data issues. The worst part is the foreign keys which are not
valid. Other issues are about code (usually PL/SQL) which is not
in sync with the database, for whatever reason (access rights,
objects/columns which don't exist anymore, ...). Also, performance is
always better from inside the database, from what I've seen so far.
This is logical since there is less network traffic when everything is
done from Oracle.

Just my 2 cents

Daniel
 

Daniel Morgan

Daniel said:
Hi Joe,
I am a Siebel configurator/programmer (Siebel is a "Customer
Relationship Management" software, which can be considered analogous
to SAP). My personal experience with the issue which interests you is
that as much as possible should be stored in the database. Siebel, by
some twisted reasoning about compatibility of code on various
databases (it runs on Oracle, SQL Server and DB2), decided to keep
almost all the code (including referential integrity!) in the middle
tier. As a result, on ALL the projects I've been a part of, we have
had data issues. The worst part is the foreign keys which are not
valid. Other issues are about code (usually PL/SQL) which is not
in sync with the database, for whatever reason (access rights,
objects/columns which don't exist anymore, ...). Also, performance is
always better from inside the database, from what I've seen so far.
This is logical since there is less network traffic when everything is
done from Oracle.

Just my 2 cents

Daniel

Exactly mirrors my experience with Siebel, SAP, PeopleSoft, and Baan.

--
Daniel Morgan
http://www.outreach.washington.edu/ext/certificates/oad/oad_crs.asp
http://www.outreach.washington.edu/ext/certificates/aoa/aoa_crs.asp
(e-mail address removed)
(replace 'x' with a 'u' to reply)
 

Joe Weinstein

Daniel said:
Hi Joe,
I am a Siebel configurator/programmer (Siebel is a "Customer
Relationship Management" software, which can be considered analogous
to SAP). My personal experience with the issue which interests you is
that as much as possible should be stored in the database. Siebel, by
some twisted reasoning about compatibility of code on various
databases (it runs on Oracle, SQL Server and DB2), decided to keep
almost all the code (including referential integrity!) in the middle
tier. As a result, on ALL the projects I've been a part of, we have
had data issues. The worst part is the foreign keys which are not
valid. Other issues are about code (usually PL/SQL) which is not
in sync with the database, for whatever reason (access rights,
objects/columns which don't exist anymore, ...). Also, performance is
always better from inside the database, from what I've seen so far.
This is logical since there is less network traffic when everything is
done from Oracle.

Thanks. This conversation will become richer and clearer as folks get into
it. Sure, the DBMS is a good place for simple referential integrity constraints,
as well as set-based data processing. Stored procedures and triggers are important
to do that sort of thing in one place. I knew of a payroll application that took
8 hours to do a 40,000-employee run because it sucked raw data, person by person,
from the DBMS to the fat client to do the real grinding. When this was converted
to 4 separately-running-but-pipelined stored procedures, it took 15 minutes to
do the same work. What *shouldn't* be in the DBMS is user session control or even
most less-volatile online data that is tapped by user applications. I saw a European
software company's product (I won't name names but it starts with "Baa" ;-) ) that
has a set of application queries, from which all useful business functions were
built. A typical user would enact 100 or so business functions per day. Each
function had a part where it queried the DBMS for the list of countries in Europe!
Every user, hundreds of times a day! I am aware of the political flux in Europe
in the 90's but it was never that bad! ;-)
Fundamentally, you don't want every user of your application to cause/require a new
DBMS connection, let alone ask it for the same non-volatile data 10,000 times a day.
To get the best performance, applications are going to have to be DBMS-specific
at some level. DBMSes aren't dumb file systems. I knew of another application vendor,
who shall remain nameless (but it rhymes with 'beoblesoft' ;) ) that 'rolled its
own stored procedures' by storing the SQL for every business query in the DBMS
as string data, and to do any given function, would query the DBMS for the SQL
needed, and would then send that SQL back to be executed as fresh SQL!
However, that dbms-specific level should be as narrow and controllable/switchable
as possible. J2EE standards help there.
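The payroll conversion above (row-by-row dragging versus set-based grinding in the engine) can be sketched as follows. Python with an in-memory SQLite table stands in for the real client and DBMS, and the `payroll` table is invented for illustration; the point is the shape of the two approaches, not the engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (emp_id INTEGER PRIMARY KEY,"
             " hours REAL, rate REAL, pay REAL)")
conn.executemany(
    "INSERT INTO payroll VALUES (?, ?, ?, NULL)",
    [(i, 40.0, 20.0 + (i % 5)) for i in range(1000)])

# The 8-hour shape: drag every row out to the client, compute there,
# and send each result back (one round trip per employee in real life).
rows = conn.execute("SELECT emp_id, hours, rate FROM payroll").fetchall()
for emp_id, hours, rate in rows:
    conn.execute("UPDATE payroll SET pay = ? WHERE emp_id = ?",
                 (hours * rate, emp_id))

# The 15-minute shape: one set-based statement; the engine grinds the
# data where it lives, which is what a stored procedure buys you at scale.
conn.execute("UPDATE payroll SET pay = hours * rate")

total = conn.execute("SELECT SUM(pay) FROM payroll").fetchone()[0]
print(total)    # -> 880000.0
```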
Joe
 

Eric F

Exactly mirrors my experience with Siebel, SAP, PeopleSoft, and Baan.

Hello!

Data belongs in the database. Business logic; i.e. code, belongs in the middle
tier.

Referential integrity belongs in the database. That is data.

CRM packages typically don't perform well because, as noted, they push as much
as possible into the middle tier due to the differences between the various
databases. But DBMS systems are optimized for RI.

They also lack indexes, which causes huge performance problems.

Replication of business logic via clusters/EJB will provide a lot of
performance improvements with code in the middle tier.

My .02

e
 

Jim Kennedy

Joe said:
Hi -

Over the last several versions of Oracle, developers have been provided with
a pretty revolutionary idea for a database product - namely the ability to
write code that used to belong in the middle tier and store it in the
database. I'm referring here to the ability to write stored procedures in
Java.

Now of course, Microsoft with their SQL Server product is doing the same
thing. The next version of SQL Server will allow programmers to write
stored procedures in any of the .NET languages.

I'm interested in looking in more depth at the increased choices developers now
have because of these new features, developing some best practices
on the subject, and possibly publishing an article on the topic.

I personally am more experienced with SQL Server than with Oracle. I am
therefore looking for someone who has been involved with making these
choices in the Oracle environment who would like to collaborate with me on
the subject.

If you are interested, please contact me at (e-mail address removed)

Thank you

Joe Lax
Joe,
I want to make a subtle distinction. Just about any database can store code
in the database (as a binary object). That said, I think you mean more that
complex business logic can be stored and run in the database or server end
(e.g. in Oracle, PL/SQL or Java). Having the business logic (not the GUI
logic) in the database allows one to switch GUIs, or to have multiple systems
interact with the backend while consistent business rules are followed.
Having it in the middle tier means that every other system has to go through
that middle tier; in practice, that means other groups will go right to the
database and not through the middle tier (time constraints, must do it now,
can't wait to use the middle tier, middle tier written in a language we don't
like or don't know...).
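Jim's first point, that a rule kept with the data is enforced no matter which system or language writes, can be sketched with a trigger. SQLite and Python stand in here for a real Oracle/PL/SQL setup; the `order_lines` table and the rule itself are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_lines (id INTEGER PRIMARY KEY,"
             " qty INTEGER NOT NULL)")

# The business rule lives with the data, so every writer (middle tier,
# batch feed, or ad-hoc tool) goes through it whether it wants to or not.
conn.execute("""
    CREATE TRIGGER qty_positive BEFORE INSERT ON order_lines
    WHEN NEW.qty <= 0
    BEGIN
        SELECT RAISE(ABORT, 'quantity must be positive');
    END""")

conn.execute("INSERT INTO order_lines (qty) VALUES (3)")     # fine
try:
    # A group that bypasses the middle tier still cannot break the rule.
    conn.execute("INSERT INTO order_lines (qty) VALUES (-1)")
    bypassed = True
except sqlite3.IntegrityError:
    bypassed = False
print(bypassed)    # -> False
```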

Siebel, PeopleSoft et al. hire programmers and not really DBAs. Programmers
drive the projects and DBAs are relegated to a lower importance. Thus these
products don't use referential integrity, stored procedures, etc. For
example, in Siebel you have to define all database objects through their
tool, even indexes. Unfortunately, that means you can't create a
function-based index, or an index where one of the elements of the key is
descending instead of the default ascending. Dumb, just dumb.

Jim
 

Sudsy

Joe wrote:
<snip>

I've got to side with Eric F on this one. I like the idea of the
business logic residing in the middle-tier. Use the database as
a high-performance data store: foreign keys, indexes, all the
things that a database does so well. Eschew stored procedures,
triggers and all those vendor "lock-in" attempts.
And with all due respect to Jim Kennedy, if the middle-tier is
well-designed and documented then there should be nothing to
gain by attempting to short-circuit the mechanism and access
the database directly. Tempting, to be sure, but something to
be avoided at all costs.
One of the biggest advantages of the J2EE architecture, IMHO,
is the ability to locate the business logic in the right place,
using a portable language and API. Don't like WebLogic? Go for
WebSphere. Too expensive? Check out JBoss.
That's what it's all about to me: choices!
YMMV
 

Niall Litchfield

Eric F said:
Hello!

Data belongs in the database. Business logic; i.e. code, belongs in the middle
tier.

Referential integrity belongs in the database. That is data.

What is 'business logic' and what is 'referential integrity'? By which I
mean: is a rule like 'every order must consist of fully formed header
information and at least one detail line' a business rule or a database rule?

Replication of business logic via clusters/EJB will provide a lot of
performance improvements with code in the middle tier.

At what cost?


--
Niall Litchfield
Oracle DBA
Audit Commission UK
*****************************************
Please include version and platform
and SQL where applicable
It makes life easier and increases the
likelihood of a good answer
******************************************
 

Mike Sherrill

Data belongs in the database. Business logic; i.e. code, belongs in the middle
tier.
Hmmm.

Referential integrity belongs in the database. That is data.

Where do "required data" constraints belong, as in "this value must
not be null and must not be an empty string"?

Where do range constraints belong, as in "this value must be greater
than zero"?

Where do "paired column" constraints belong, as in "end date must be
on or after start date"?

I think you overcharged me. There's a lot more to data integrity than
just referential integrity.
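For what it's worth, all three kinds of rule above can be expressed declaratively, next to the data, in any mainstream engine. A sketch in Python with SQLite; the `bookings` table is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bookings (
        code       TEXT NOT NULL CHECK (code <> ''),   -- required data
        amount     REAL CHECK (amount > 0),            -- range
        start_date TEXT,
        end_date   TEXT,
        CHECK (end_date >= start_date)                 -- paired columns
    )""")

conn.execute("INSERT INTO bookings VALUES"
             " ('A1', 10.0, '2004-01-01', '2004-01-05')")   # valid row

def rejected(sql):
    """True when the engine itself refuses the row."""
    try:
        conn.execute(sql)
        return False
    except sqlite3.IntegrityError:
        return True

# Empty required field, non-positive amount, end before start: all refused.
print(rejected("INSERT INTO bookings VALUES"
               " ('', 10.0, '2004-01-01', '2004-01-02')"))    # -> True
print(rejected("INSERT INTO bookings VALUES"
               " ('A2', -5.0, '2004-01-01', '2004-01-02')"))  # -> True
print(rejected("INSERT INTO bookings VALUES"
               " ('A3', 10.0, '2004-01-05', '2004-01-01')"))  # -> True
```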
 

Andrew Carruthers

I'm afraid your case is not proven.

Cache synchronisation has got to be one of the biggest problems middleware
faces. Most large organisations have data feeds directly into and out of the
DBMS; this data needs to appear in the middleware cache at some point, so
there is either a constant refresh cycle occurring to renew data, or the
middleware cache becomes out of date pretty quickly, thus negating the
perceived benefit. Why cache what is already cached in the DBMS? Having two
places to cache data is not the best architectural model I have ever seen
implemented; in fact, it's one of the dumbest.

Logic should reside as close to the data as possible. In fact, data should
be protected via APIs to reduce the security risk, and the only logic that
should reside in the middle tier is the glue to piece together the APIs to
implement business rules, so long as there's more than one datastore
involved; for a single datastore the logic should always reside with the
data.

My second biggest problem with middleware is with vendors who put all their
logic, referential integrity and validation into middleware as if their
application is the only one of importance within an organisation and
everyone will conform to their rules and methods of working. This approach
works fine if the vendor is developing a stand-alone application, but in the
real world I have yet to see a stand-alone application which has no
connection to data feeds of one form or another. Implementing applications
is more about integrating them into systems, and this is where
middleware-oriented applications fail most often.
 

Joe Weinstein

Andrew said:
I'm afraid your case is not proven.

Hi! Thanks. 'Proven' needs a definition, but if there exist many large
enterprises with worldwide distributed applications in successful service,
then they at least stand as support for the case...
Cache synchronisation has got to be one of the biggest problems middleware
faces. Most large organisations have data feeds directly into and out of the
DBMS; this data needs to appear in the middleware cache at some point, so
there is either a constant refresh cycle occurring to renew data, or the
middleware cache becomes out of date pretty quickly, thus negating the
perceived benefit. Why cache what is already cached in the DBMS? Having two
places to cache data is not the best architectural model I have ever seen
implemented; in fact, it's one of the dumbest.

The virtue of caching data is as described: making it available in a lighter, faster
way than if the DBMS must be consulted. The refresh of the cache benefits from being
not necessarily synchronous with user requests. This architecture allows for varying
transactional isolation as a user interaction wends its way unpredictably from a browse
mode to a possible binding transaction. You want to limit the strict, costly, centralized
ACID transaction part to the final DBMS-involved part. The speed benefit of having
customers all over the globe perusing a locally cached catalog is worth the occasional
retry necessary when they commit a purchase that fails because the DBMS changed relevantly
since the cache was updated.
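The 'occasional retry' above is plain optimistic concurrency: the commit carries the version the cache last showed, and succeeds only if the row is still unchanged in the DBMS. A sketch in Python with SQLite; the `catalog` table and its `version` column are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalog (item TEXT PRIMARY KEY,"
             " price REAL, version INTEGER)")
conn.execute("INSERT INTO catalog VALUES ('soup', 4.00, 1)")

def purchase(item, seen_version):
    """Commit only if the row still matches what the cache showed.
    Returns True on success, False when the caller must refresh and retry."""
    cur = conn.execute(
        "UPDATE catalog SET version = version + 1"
        " WHERE item = ? AND version = ?",
        (item, seen_version))
    return cur.rowcount == 1

# Both buyers browsed a cache that showed version 1 of the row.
first_ok = purchase("soup", 1)    # first commit wins; row is now version 2
stale_ok = purchase("soup", 1)    # second commit held a stale version: fails
print(first_ok, stale_ok)         # -> True False

# The retry: re-read the current version from the DBMS, then commit again.
current = conn.execute(
    "SELECT version FROM catalog WHERE item = 'soup'").fetchone()[0]
print(purchase("soup", current))  # -> True
```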
Cache synchronization is an issue, but so is DBMS synchronization. One is more
costly than the other, based on the guarantees it is designed to provide. The point is
that the user experience defines a 'session/transaction' that lends itself to a varying
level of concurrency-vs-isolation, and intelligent distributed caches provide the
desired spectrum.
Logic should reside as close to the data as possible,

I agree, but some of this data can be cached, with logic in/around a fast intelligent
cache that resides between the millions of users and the very busy DBMS(s).
in fact, data should
be protected via API's to reduce the security risk,

Always true...
the only logic that
should reside in the middle tier is the glue to piece together the API's to
implement business rules - so long as there's more than one datastore
involved, for a single datastore the logic should always reside with the
data.

Unless/until one sees the value of cached data. Not raw cached data, but
an intelligent middle tier.
My second biggest problem with middleware is with vendors who put all their
logic, referential integrity and validation into middleware as if their
application is the only one of importance within an organisation and
everyone will conform to their rules and methods of working.

I agree. Simple non-volatile referential integrity constraints should certainly
be in the DBMS. It is a mistake to treat the DBMS as a dumb, generically replaceable
file system. However, this criticism of yours can also be applied to the DBMS-biased,
who think that all the logic needs to be in the DBMS and that everyone must conform
to DBMS ways of working...
This approach
works fine if the vendor is developing a stand-alone application but in the
real world I have yet to see a stand alone application which has no
connection to data feeds of one form or another. Implementing applications
is more about integrating them into systems and this is where middleware
orientated applications fail most often.

Interesting! I more often see middleware existing to integrate applications into
unified internet-wide access, and to integrate multiple standalone DBMSes. Integration
is hard. This is where middleware succeeds as well as fails (GIGO). However, I wouldn't
give the job of integrating the applications to {the/one of the} DBMS(s).
Joe
 

Daniel Morgan

Joe said:
However, that dbms-specific level should be as narrow and
controllable/switchable
as possible. J2EE standards help there.
Joe

I appreciate your opinion and your honesty that your perspective comes
from selling that middle tier but I completely disagree.

The 'let's push more bytes down the pipe and across all those routers'
thinking is not going to lead to performance. You may be scalable but
performance will suffer. And you will be no more scalable than a thinner
client.

Render unto the database everything you can do in the database and let
the middle tier do what it does best ... fail-over, load levelling, and
serving up the front-end.

Try tuning all that rotten SQL coming from those fat front-ends sometime
and you will understand why those here that have experience with
PeopleSoft, SAP, Baan, and Siebel are remarkably unhappy.

--
Daniel Morgan
http://www.outreach.washington.edu/ext/certificates/oad/oad_crs.asp
http://www.outreach.washington.edu/ext/certificates/aoa/aoa_crs.asp
(e-mail address removed)
(replace 'x' with a 'u' to reply)
 

Joe Weinstein

Daniel said:
I appreciate your opinion and your honesty that your perspective comes
from selling that middle tier but I completely disagree.
cool.

The 'let's push more bytes down the pipe and across all those routers'
thinking is not going to lead to performance. You may be scalable but
performance will suffer. And you will be no more scalable than a thinner
client.

Well, I actually want thin clients, but from the DBMS's perspective, the application
server is not a thin client. I do believe that no bytes should leave the DBMS
that aren't needed at the end client, but once they've left, they should be
retained and milked/reused for all they are worth, and this can be intelligently
and profitably done. Performance and scalability follow, according to the non-
volatility of the data, the relative proximity of the clients to the middle tier
as opposed to their distance to the DBMS, and the relative lightness of the
communication protocol between client and middle tier as opposed to the protocol
between the DBMS and any of its clients.
Render unto the database everything you can do in the database and let
the middle tier do what it does best ... fail-over, load levelling, and
serving up the front-end.

Sure, and protecting the DBMS from uncontrolled ad-hoc connections and mindless
repeat queries, and acting as a transaction monitor/controller to get the best
performance out of the DBMS, etc. Otherwise you should explain why Oracle chooses
to use BEA as middleware when it simply tries to demonstrate its own performance
potential in TPC-C benchmarks.
Try tuning all that rotten SQL coming from those fat front-ends sometime
and you will understand why those here that have experience with
PeopleSoft, SAP, Baan, and Siebel are remarkably unhappy.

Well, sure. I have done so, in fact. The stupidities are legion. Many of these
applications are ridiculous 'ports' of 70's era COBOL/ISAM applications to
'client-server' by simply swapping in row-by-row cursor fetches to the DBMS
instead of the previous ISAM call. Intelligent client-server architecture
(as I've said) puts the sawmills where the trees are, not in the client. However,
client-server is dead, at least to the extent that now everyone in the company
or the internet may want the info in the DBMS(s), and transactions now involve
several DBMSes and other resources. That's where the middle tier makes its
contribution, and depending on what's done, and what degree of concurrency,
isolation, atomicity etc that particular data requires, it can now find a
profitable home in either the DBMS or the middle tier, at least as a cache.
As an example from the Baan suite, they really did architect their application
so it queried the DBMS for a list of the countries in Europe hundreds of times
a day for each user. A current state-of-the-art application would have a generic
browser as one of its clients, and in such a case the middle tier is ideal for
caching the list of the countries in Europe and making it available to all
clients; given the political instability in Europe, maybe refreshing its
list once daily from the DBMS? It saves a lot of DBMS cycles...
IMHO...
Joe
 

Noons

Joe said:
Over the last several versions of Oracle, developers have been provided with
a pretty revolutionary idea for a database product - namely the ability to
write code that used to belong in the middle tier and store it in the
database. I'm referring here to the ability to write stored procedures in
Java.

You gotta be joking! A "revolutionary idea"? Where the heck have
you been for the last 15 years??? It's been possible in Oracle
since about 1990 or thereabouts... Oh yes, Java is NOT the only way
to code, in case you haven't noticed.

For starters: there is NO such thing as "code that used to belong
in the middle tier"! That is an invention of middle-tier vendors
that has NEVER been proven valid in real applications. Code NEVER
belonged in the middle tier until the concept of multi-tier was invented,
about 6 or 7 years ago.

Now of course, Microsoft with their SQL Server product is doing the same
thing. The next version of SQL Server will allow programmers to write
stored procedures in any of the .NET languages.

Amazing! Must be breakneck technology. Pity it's been done by just
about any other serious database vendor for the last 10 years...
But I'm quite sure now that M$ is jumping on the bandwagon, it will
suddenly become a credible "industry standard" to store code in the
database. Oh yeah, they've been able to do so with Transact-SQL
for ages but what the hey...

I'm interested in looking at the increased choices developers now have
because of these new features in more depth ,developing some best practices
on the subject, and possibly publishing an article on the topic.

Do a search on comp.databases.oracle.server for "design" and "J2EE".
Then read.

I personally am more experienced with SQL Server than with Oracle. I am
therefore looking for someone who has been involved with making these
choices in the Oracle environment who would like to collaborate with me on
the subject.

Like I said: search on c.d.o.s.

Cheers
Nuno Souto
(e-mail address removed)
 

Noons

to SAP). My personal experience with the issue which interests you is
that as much as possible should be stored in the database.

Hmmm, from the pure performance point of view that is usually
the case. But more needs to be said about that. If someone moves
a Java component into the database, and that Java component
retrieves data to populate an internal object and then keeps
applying methods to the object to ensure, for example,
referential integrity, the problem will still be there. It doesn't
just matter where the code is: the problem is also the bad
object-relational mapping. Moving the code to the db is not
gonna solve that one.

The techniques for efficient storage and manipulation of data were
investigated ages ago and have been perfected over the last 30 years of
database technology. There is NOTHING that Java can add to that or
the OO model can improve on that subject. However, the OO-design
proponents insist and persist on ignoring the entire edifice of IT
already in existence and doing things their own way.
The result is a monumental impedance mismatch with no solution until
more responsible people are put in charge of these OO-based projects.
Siebel, by
some twisted reasoning about compatibility of code on various
databases (it runs on Oracle, SQL Server and DB2), decided to keep
almost all the code (including referential integrity!) in the middle
tier.

It gets worse. I know of one that decided to create its own
indexing structures and keep them inside a table as a BLOB.
So that they'd remain portable and "efficient"...

As a result, on ALL the projects I've been a part of, we have
had data issues. The worst part is for the foreign keys which are not
valid.


I'm surprised you only got data issues. You'd also have appalling
performance on any significant volume of data. That seems to
be the constant on all these products with an internal design
decided by group hug...
objects/columns which don't exist anymore, ...). Also, performance is
always better from inside the database, from what I've seen so far.


By a very large factor. Orders of magnitude better in fact if the
object-relational mapping is properly done.

This is logical since there is less network traffic when everything is
done from Oracle.


Caching could prevent that, but as someone else pointed out,
the problem is cache synchronization. A BIG one. Oracle has
a thing called Cache Fusion that is supposed to solve that problem, but
it didn't catch on that much with the 3rd-party brigade.


IMHO, business-rules code should reside in the middle tier. That is
by far the most scalable solution. But ANY code that deals with data
and its storage mechanism MUST reside on the db. How to make the two
talk? That is the province of object-relational mapping design and
database design.
The rules for those were established long ago; people just insist
on taking 1-week Java courses and believing this replaces proper training
and experience.
Most unfortunate.

Cheers
Nuno Souto
(e-mail address removed)
 
N

Noons

to SAP). My personal experience with the issue which interests you is
that as much as possible should be stored in the database.

Hmmm, from the pure performance point of view that is usually
the case. But more needs to be said about that. If someone moves
a Java component into the database and that Java component
decides to retrieve the data to populate an internal object and
continues to apply methods to the object to ensure for example
referential integrity, the problem will still be there. It doesn't
just matter where the code is: the problem is also with the bad
object-relational mapping. Moving the code to the db is not
gonna solve that one.

The techniques for efficient storage and manipulation of data were
investigated ages ago and have been perfected over the last 30 years of
database technology. There is NOTHING that Java can add to that or
the OO model can improve on that subject. However, the OO-design
proponents insist and persist on ignoring the entire edifice of IT
already existant and do things their own way.
The result is a monumental impedance mismatch with no solution until
more rsponsible people are put in charge of these OO-based projects.
Siebel, by
some twisted reasoning about compatibility of code on various
databases (it runs on Oracle, SQL Server and DB2), decided to keep
almost all the code (including referential integrity!) in the middle
tier.

There is worse. I know of one that decided to create its own
indexing structures and keep them inside a table as a BLOB.
So that they'd remain portable and "efficient"...

As a result, on ALL the projects I've been a part of, we have
had data issues. The worst part is for the foreign keys which are not
valid.


I'm surprised you only got data issues. You'd also have appaling
performance on any significant volume of data. That seems to
be the constant on all these products with an internal design
decided by group hug...
objects/columns which don't exist anymore, ...). Also, performance is
always better from inside the database, from what I've seen so far.


By a very large factor. Orders of magnitude better in fact if the
object-relational mapping is properly done.

This is logical since there is less network traffic when everything is
done from Oracle.


Caching could prevent that, but as someone else pointed out,
the problem is cache synchronization. A BIG one. Oracle has
a thing called Cache Fusion that is supposed to solve that problem,
but it hasn't caught on much with the 3rd-party brigade.


IMHO, business-rules code should reside in the middle tier. That is
by far the most scalable solution. But ANY code that deals with data
and its storage mechanism MUST reside in the database. How to make the two
talk? That is the province of object-relational mapping design and
database design.
The rules for those were established long ago; people just insist
on taking one-week Java courses and believing this replaces proper
training and experience.
Most unfortunate.
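A minimal sketch of that split (every name here is invented): the business rule lives in a middle-tier service that knows nothing about storage, while everything touching the storage mechanism hides behind a data-access interface. In production that interface would be backed by stored procedures or set-based SQL; an in-memory stand-in is used here so the sketch runs.

```java
import java.util.*;

public class TierSplit {
    // Data-tier contract: set-based operations only, no row-at-a-time leakage.
    interface AccountStore {
        double balance(String id);
        void transfer(String from, String to, double amount); // one DB call
    }

    // Middle tier: enforces a business rule (no overdrafts), no storage knowledge.
    static class TransferService {
        private final AccountStore store;
        TransferService(AccountStore store) { this.store = store; }

        boolean transfer(String from, String to, double amount) {
            if (amount <= 0 || store.balance(from) < amount) return false; // rule
            store.transfer(from, to, amount);
            return true;
        }
    }

    // In-memory stand-in; a real implementation would call the DBMS.
    static class MemoryStore implements AccountStore {
        final Map<String, Double> balances = new HashMap<>();
        public double balance(String id) { return balances.getOrDefault(id, 0.0); }
        public void transfer(String from, String to, double amount) {
            balances.merge(from, -amount, Double::sum);
            balances.merge(to, amount, Double::sum);
        }
    }

    public static void main(String[] args) {
        MemoryStore store = new MemoryStore();
        store.balances.put("a", 100.0);
        TransferService svc = new TransferService(store);
        System.out.println(svc.transfer("a", "b", 40.0)); // allowed
        System.out.println(svc.transfer("a", "b", 90.0)); // blocked by the rule
    }
}
```

The point of the interface boundary is exactly what the post argues: the service can scale out in the middle tier, while the storage-side implementation can be swapped for stored procedures without touching the business rule.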

Cheers
Nuno Souto
(e-mail address removed)
 
Noons

Joe Weinstein said:
Well, sure. I have done so, in fact. The stupidities are legion. Many of these
applications are ridiculous 'ports' of 70's era COBOL/ISAM applications to
'client-server' by simply swapping in row-by-row cursor fetches to the DBMS
instead of the previous ISAM call.

Bingo! Worse: in many "sophisticated" instances, these "ports" end up creating
instances of classes (objects), where the row-by-row fetch is replaced by a discrete
random-access SQL statement in a class method.

Complete with an "iterator" class that calls the method for EVERY single object
registered. Invariably, these iterator classes are used to implement set
operations, a task that the database engine itself is many orders of magnitude more
efficient at performing.

And the list of stupid AND moronic "port" designs goes on and on...
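The cost of that iterator anti-pattern is easy to model (names invented; a counter stands in for network round trips to the DBMS): one discrete call per object versus a single set operation, which is the work one `UPDATE` statement would do inside the engine.

```java
import java.util.*;

public class IteratorAntiPattern {
    static int roundTrips = 0;

    // Simulated table: order id -> status.
    static Map<Integer, String> orders = new HashMap<>();

    // The per-object "method" described above: one SQL call per object.
    static void shipOneOrder(int id) {
        roundTrips++;                          // SELECT/UPDATE ... WHERE id = ?
        orders.put(id, "SHIPPED");
    }

    // The set operation the DBMS would run itself: one statement, one trip.
    static void shipAllOrders() {
        roundTrips++;                          // UPDATE orders SET status = ...
        orders.replaceAll((id, status) -> "SHIPPED");
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) orders.put(i, "NEW");

        roundTrips = 0;
        for (int id : new ArrayList<>(orders.keySet())) shipOneOrder(id);
        System.out.println("iterator pattern: " + roundTrips + " round trips");

        roundTrips = 0;
        shipAllOrders();
        System.out.println("set operation:    " + roundTrips + " round trip");
    }
}
```

A thousand rows means a thousand round trips for the iterator class, against one for the set operation, before the engine's own per-statement efficiencies are even counted.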

(as I've said) puts the sawmills where the trees are, not in the client. However,
client-server is dead, at least to the extent that now everyone in the company ...

Minor correction: two-tier client-server is dead. Multi-tier is STILL client-server!
Not that I agree: now that we finally have the gear (CPU and memory) and network
bandwidth to make two-tier viable, what does this industry go and do? It kills it.
Brilliant...
 

Daniel Morgan

Joe said:
Sure, and avoiding mindless repeat queries, and acting as a transaction
monitor/controller to get the best performance out of the DBMS, etc.
Otherwise you should explain why Oracle chooses to use BEA as middleware
when it simply tries to demonstrate its own performance potential in
TPC-C benchmarks.

There are things a lot more important than benchmarks and performance.
We all know all RDBMS vendors do whatever they need to get the magic
number ... the one better than the competition.

But all of what you refer to as "protecting the DBMS from uncontrolled
ad-hoc connections" as middle-ware is not valid on its face. You put up
the best middleware tool you can and then give me SQL*Plus, MS Access,
whatever ... and all of your protections are null and void. The
middleware only protects from those connections routed through the
middleware ... and it is for that reason specifically we see far too
much data corruption.
Well, sure. I have done so, in fact. The stupidities are legion. Many of these
applications are ridiculous 'ports' of 70's era COBOL/ISAM applications to
'client-server' by simply swapping in row-by-row cursor fetches to the DBMS
instead of the previous ISAM call. Intelligent client-server architecture
(as I've said) puts the sawmills where the trees are, not in the client. However,
client-server is dead, at least to the extent that now everyone in the company
or the internet may want the info in the DBMS(s), and transactions now involve
several DBMSes and other resources. That's where the middle tier makes its
contribution, and depending on what's done, and what degree of concurrency,
isolation, atomicity etc. that particular data requires, it can now find a
profitable home in either the DBMS or the middle tier, at least as a cache.
As an example from the Baan suite, they really did architect their application
so it queried the DBMS for a list of the countries in Europe hundreds of times
a day for each user. A current state-of-the-art application would have a generic
browser as one of its clients, and in such a case the middle tier is ideal for
caching the list of the countries in Europe and making it available to all
clients, and given the political instability in Europe, maybe refreshing its
list once daily from the DBMS? It saves a lot of DBMS cycles...
IMHO...
Joe
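Joe's countries-of-Europe cache can be sketched as a simple TTL (time-to-live) cache in the middle tier (names and the fake loader are invented here; a counter stands in for real DBMS round trips): load once, serve from memory, and go back to the database only after the TTL, e.g. once a day.

```java
import java.util.*;
import java.util.function.Supplier;

public class CountryCache {
    static int dbQueries = 0;                  // stands in for DBMS round trips

    static List<String> loadFromDbms() {
        dbQueries++;
        return List.of("France", "Germany", "Italy");  // pretend SELECT result
    }

    private final long ttlMillis;
    private final Supplier<List<String>> loader;
    private List<String> cached;
    private long loadedAt;

    CountryCache(long ttlMillis, Supplier<List<String>> loader) {
        this.ttlMillis = ttlMillis;
        this.loader = loader;
    }

    synchronized List<String> countries(long now) {
        if (cached == null || now - loadedAt >= ttlMillis) {
            cached = loader.get();             // hit the DBMS only on expiry
            loadedAt = now;
        }
        return cached;
    }

    public static void main(String[] args) {
        CountryCache cache = new CountryCache(86_400_000L, CountryCache::loadFromDbms);
        for (int i = 0; i < 500; i++) cache.countries(i);   // same "day": cached
        cache.countries(86_400_000L);                        // next day: refresh
        System.out.println("DBMS queries for 501 reads: " + dbQueries);
    }
}
```

Five hundred and one reads cost two DBMS queries instead of five hundred and one, which is the whole of Joe's point; the hard part, as Nuno notes above, is synchronizing such caches when the underlying data actually changes.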

I'm not arguing the validity of your company's product or middleware, only
the fact that data security and integrity must be protected at the RDBMS
level. That you may layer a little more on top is fine, but anyone
depending on it rather than the RDBMS is begging for problems.

--
Daniel Morgan
http://www.outreach.washington.edu/ext/certificates/oad/oad_crs.asp
http://www.outreach.washington.edu/ext/certificates/aoa/aoa_crs.asp
(e-mail address removed)
(replace 'x' with a 'u' to reply)
 
