How to convert a *nix DB_File to Windows?


dan baker

I have a little application that saves data into a tied hash on a *nix
webserver. I would like to be able to pull a local copy to my PC
running Windows 98 for testing and backup with as little pain as
possible.

So far the only way I've been able to do this is to write export-import
utilities that dump the DB to a text file and reload it at the other
end. It gets to be a pain to maintain if fields change, etc. Is there a
way to convert the binary hash file more directly, so that I can avoid
this manual export-import step?

... I use a lot of the defaults for tie() when writing to the hash,
like:

use DB_File;
tie %tempHash, 'DB_File', "${cfgRelPath_cgi2DB}/${dbfile}";
... blah, blah, blah
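
(For reference, the export-import step I do now looks roughly like
this -- simplified and untested, the file names are just examples, and
it assumes the values contain no tabs or newlines:)

use strict;
use warnings;
use DB_File;

# Dump side (on the *nix box): write one "key<TAB>value" line per record.
my %tempHash;
tie %tempHash, 'DB_File', 'data.db'
    or die "Cannot open data.db: $!\n";
open my $out, '>', 'data.txt' or die "Cannot write data.txt: $!\n";
while (my ($key, $value) = each %tempHash) {
    print $out "$key\t$value\n";
}
close $out;
untie %tempHash;

# Reload side (on the Windows box): read the text file into a fresh DB.
my %newHash;
tie %newHash, 'DB_File', 'data_new.db'
    or die "Cannot open data_new.db: $!\n";
open my $in, '<', 'data.txt' or die "Cannot read data.txt: $!\n";
while (my $line = <$in>) {
    chomp $line;
    my ($key, $value) = split /\t/, $line, 2;
    $newHash{$key} = $value;
}
close $in;
untie %newHash;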

thanks,

d
 

Gunnar Hjalmarsson

dan said:
I have a little application that saves data into a tied hash on a
*nix webserver. I would like to be able to pull a local copy to my
PC running Windows 98 for testing and backup with as little pain as
possible.

I have done so successfully using SDBM_File, but can't tell whether
the same can be done with files created through DB_File. The issue
might be different Berkeley DB versions.

Btw, you do transfer the files from *nix to Windows in binary mode, right?
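
If the copy is scripted, something like this (untested sketch; the
host, login and paths are only placeholders) makes sure an FTP
transfer is done in binary rather than ASCII mode:

use strict;
use warnings;
use Net::FTP;

my $ftp = Net::FTP->new('www.example.com') or die "Cannot connect: $@\n";
$ftp->login('username', 'password') or die 'Login failed: ', $ftp->message;
$ftp->binary;                              # switch to binary mode first
$ftp->get('/path/to/data.db', 'data.db') or die 'Get failed: ', $ftp->message;
$ftp->quit;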
 

Paul Marquess

dan baker said:
I have a little application that saves data into a tied hash on a *nix
webserver. I would like to be able to pull a local copy to my PC
running Windows 98 for testing and backup with as little pain as
possible.

So far the only way I've been able to do this is to write export-import
utilities that dump the DB to a text file and reload it at the other
end. It gets to be a pain to maintain if fields change, etc. Is there a
way to convert the binary hash file more directly, so that I can avoid
this manual export-import step?

... I use a lot of the defaults for tie() when writing to the hash,
like:

use DB_File;
tie %tempHash, 'DB_File', "${cfgRelPath_cgi2DB}/${dbfile}";
... blah, blah, blah

If DB_File has been built with the same version of Berkeley DB on your Unix
box and your Windows box, the data files can be read on either.

Run this on your Unix box to find out what version of Berkeley DB the
DB_File module was built with:


perl -e 'use DB_File; print qq{Berkeley DB ver $DB_File::db_ver\n}'

and this on the Windows box:

perl -e "use DB_File; print qq{Berkeley DB ver $DB_File::db_ver\n}"


Paul
 

580046470588-0001

Paul Marquess said:
If DB_File has been built with the same version of Berkeley DB on your Unix
box and your Windows box, the data files can be read on either.
How about the BerkeleyDB module instead of DB_File? The later
Sleepycat versions provide much more functionality, like support for
multiple writers, transactions and journaling.
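
The tie interface is much the same; roughly like this (untested
sketch, the file name is just an example):

use strict;
use warnings;
use BerkeleyDB;

my %h;
tie %h, 'BerkeleyDB::Hash',
    -Filename => 'example.db',
    -Flags    => DB_CREATE
    or die "Cannot open example.db: $! $BerkeleyDB::Error\n";

$h{somekey} = 'somevalue';
untie %h;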

Klaus Schilling
 

dan baker

Paul Marquess said:
If DB_File has been built with the same version of Berkeley DB on your Unix
box and your Windows box, the data files can be read on either.
---------------

It's a different version. I use the pre-compiled stuff from ActiveState
on Windows, and the remote webserver has somewhat newer libs
installed. I move the files in binary mode, but they don't seem to be
compatible.

d
 

Gunnar Hjalmarsson

dan said:
It's a different version. I use the pre-compiled stuff from
ActiveState on Windows, and the remote webserver has somewhat
newer libs installed. I move the files in binary mode, but they
don't seem to be compatible.

In your original post you were talking about a "little application".
If the hash is small as well, why not switch from DB_File to the
simple but compatible SDBM_File?
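
SDBM_File ships with Perl on both platforms, so there is nothing
extra to install. A minimal sketch (untested, the file name is just an
example):

use strict;
use warnings;
use Fcntl;        # for O_RDWR and O_CREAT
use SDBM_File;

# This creates example.pag and example.dir next to the script.
my %h;
tie %h, 'SDBM_File', 'example', O_RDWR|O_CREAT, 0666
    or die "Cannot tie SDBM file: $!\n";

$h{key} = 'value';
untie %h;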
 

dan baker

Gunnar Hjalmarsson said:
In your original post you were talking about a "little application".
If the hash is small as well, why not switch from DB_File to the
simple but compatible SDBM_File?
------------

I can't remember the limitations of SDBM versus DB... I think
at the time I had some reason for using DB_File. The largest of the
files is maybe 10k-20k records, with about 20 "fields" of information
of variable length. Basically I store them as a key plus a string with
special-character delimiters, and pack/unpack the fields from the
string when I need them.
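
Roughly what the packing and unpacking looks like (simplified and
untested; the real delimiter and field names differ):

use strict;
use warnings;

my %tempHash;
my $DELIM = "\x1F";    # example delimiter, not the real one

# pack the fields into a single delimited value under one key
my @fields = ('field1', 'field2', 'field3');
$tempHash{record1} = join $DELIM, @fields;

# unpack the value back into fields when needed
my @back = split /\Q$DELIM\E/, $tempHash{record1}, -1;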
 

Gunnar Hjalmarsson

dan said:
I can't remember the limitations of SDBM versus DB... I
think at the time I had some reason for using DB_File. The largest
of the files is maybe 10k-20k records, with about 20 "fields" of
information of variable length.

Okay... An SDBM_File key/value pair may not exceed ~ 1,000 bytes, so
if you have records of 10-20k size, SDBM_File is not an option.

I just tried to be creative. :)
 

Dave

dan said:
It's a different version. I use the pre-compiled stuff from ActiveState
on Windows, and the remote webserver has somewhat newer libs
installed. I move the files in binary mode, but they don't seem to be
compatible.

What version does your remote server have? I compile my own native
Windows Perl binary distribution and modules from CPAN rather than using
the binaries from ActiveState, so I've also built all the necessary
external supporting libraries, including Berkeley DB. Currently, my
$DB_File::db_ver is 4.002052.

Dave
 
