Hi there,
We have been looking for some time now for a database system that can
fit a large distributed computing project, but we haven't been able to
find one.
I was hoping that someone could point us in the right direction or give
us some advice.
Here is what we need. Mind you, these are ideal requirements, so we do
not expect to find something that fits them entirely, but we hope to
get reasonably close.
We need a database/file system:
1. built in C, preferably ANSI C, so that we can port it to Linux/
Unix, Windows, Mac and various other platforms; if it works on Linux
only, that is OK for now
2. that has a public domain or GPL/LGPL licence and source code access
3. uses hashing, B-trees or a similar structure
4. has support for files in the range of 1-10 GB; if it can only
handle files up to 1 GB, that is still OK
5. can work with an unlimited number of files on a local machine; we
don't need access over a network, just local file access
6. that is fairly simple (i.e. library-style, key/data records); it
doesn't need SQL support of any kind; as long as we can add, update,
possibly delete data, browse through the records and filter/query
them, it should be OK (a rough sketch of what we mean follows this
list); no other features are required, like backup, restore, users &
security, stored procedures...
7. reliable if possible
8. local transactional support if possible; there is no need for
distributed transactions
9. fast data access if possible
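To make item 6 concrete, here is a rough sketch of the kind of
key/data interface that would cover everything we need. The names
(kv_open, kv_put and so on) are made up purely for illustration; they
are not from any particular library:

  #include <stddef.h>

  typedef struct kvdb kvdb;                     /* opaque db handle    */
  typedef struct { void *ptr; size_t len; } kv_datum;  /* key or value */

  kvdb *kv_open(const char *path, int flags);   /* open/create a file  */
  int   kv_put(kvdb *db, kv_datum key, kv_datum val);  /* add/update   */
  int   kv_get(kvdb *db, kv_datum key, kv_datum *val); /* look up      */
  int   kv_del(kvdb *db, kv_datum key);         /* delete              */
  int   kv_first(kvdb *db, kv_datum *key);      /* start a full scan   */
  int   kv_next(kvdb *db, kv_datum *key);       /* step to next record */
  void  kv_close(kvdb *db);

Filtering/querying would just be our own loop over kv_first/kv_next
that inspects each record; nothing fancier than that is required.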
We cannot use any of the major commercial databases (e.g. Oracle, SQL
Server, DB2, or larger systems like Daytona...), obviously because of
licensing and source code issues. We looked more closely at MySQL and
PostgreSQL, but they are too big and have far too many features that
we do not need. We need to be able to install a database/file system
on possibly tens of thousands of machines, and we also expect it to
work without administration.
On top of that, we might end up with thousands of files of different
sizes on each machine. Are there any embedded (i.e. "lighter")
versions of these two databases?
We haven't been able to find anything like that. I am not sure how
much work would be involved in "trimming down" some of these
databases, but it does not seem to be easy to do.
Berkeley DB would have been the best fit, but it is now in Oracle's
hands and the licence has changed. TinyCDB was a close call, but the
fact that we would need to rebuild the database for each data update
makes it unfeasible for large files (i.e. ~1 GB).
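To clarify why TinyCDB fell short: a cdb file is write-once, so
changing even one record means streaming every record into a brand-new
file and then swapping it in. Roughly, using the build interface from
tinycdb's cdb.h (function names quoted from memory, so treat the
details as approximate):

  struct cdb_make cdbm;
  cdb_make_start(&cdbm, new_fd);     /* start a new temporary cdb file */
  /* ... re-add every existing record, plus the changed ones ... */
  cdb_make_add(&cdbm, key, klen, val, vlen);
  cdb_make_finish(&cdbm);            /* write the hash tables, close   */
  /* ... then rename() the new file over the old one ... */

With files around 1 GB, doing that for every update is simply too
expensive.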
SQLite is very interesting, but it has many features that we don't
need, like SQL support.
Right now we are using plain XML files so anything else would be a
great improvement.
Any suggestions or links to sites or papers or books would be welcome.
Any help would be greatly appreciated.
If this is not the proper forum, I would appreciate it if someone
could move the post to the right location or point us to the right
one.
Thanks in advance.
Best regards,
Ovidiu Anghelidi
(e-mail address removed)
Artificial Intelligence - Reverse Engineering The Brain