Multiple instances of a process - memory conflicts


Shuaib

I wonder if anybody could shed some light on a problem I am
encountering.

I have written a program in C that runs on Solaris 2.8. At busy times
of the day there may be multiple instances of it running (5-10), each
process taking approx. 3 seconds to complete.

Each instance of the program basically fetches data from an Oracle
database (using Remedy ARS API routines) and stores it in a user-defined
structure that I have declared as a global variable. The
problem I encounter is when multiple instances of the program are
running. Often I find that the data in memory being held in one
process is mixed up with the data in memory of another.

A simplified example could be:

process instance 1 fetches data from record NA1234 and stores the
record id in a string variable str1;

process instance 2 fetches data from another record NA9999 and also
stores the record id in a string variable str1.

However, when I check the value of str1 in process instance 2, it
sometimes says NA9999 (which would be correct) but other times it says
NA1234 (i.e. it appears to be sharing the memory space of process 1),
especially when the processes are running in parallel.

Can this phenomenon be attributed to global variables? If so, would
changing to a local variable relieve it?

Any assistance greatly appreciated.

-- Shuaib
 

Måns Rullgård

Shuaib said:
I wonder if anybody could shed some light on a problem I am
encountering.

I have written a program in C that runs on Solaris 2.8. At busy times
of the day there may be multiple instances of it running (5-10), each
process taking approx. 3 seconds to complete.

Each instance of the program basically fetches data from an Oracle
database (using Remedy ARS API routines) and stores it in a user-defined
structure that I have declared as a global variable. The problem I
encounter is when multiple instances of the program are running. Often
I find that the data in memory being held in one process is mixed up
with the data in memory of another.

Are these processes using shared memory? If not, it's impossible to
get a mix-up. If they are using shared memory, you will get that sort
of problem unless you do something to prevent it.
 

Nils O. Selåsdal

Shuaib said:
I wonder if anybody could shed some light on a problem I am
encountering.

I have written a program in C that runs on Solaris 2.8. At busy times
of the day there may be multiple instances of it running (5-10), each
process taking approx. 3 seconds to complete.

Each instance of the program basically fetches data from an Oracle
database (using Remedy ARS API routines) and stores it in a user-defined
structure that I have declared as a global variable. The problem I
encounter is when multiple instances of the program are running. Often
I find that the data in memory being held in one process is mixed up
with the data in memory of another.

A simplified example could be:

process instance 1 fetches data from record NA1234 and stores the
record id in a string variable str1;

process instance 2 fetches data from another record NA9999 and also
stores the record id in a string variable str1.

This is virtually impossible between separate processes, I'd say. More
likely it is one of:

* a bug somewhere in your program;
* you are using threads rather than processes, and have locking
  issues;
* you use shared memory, and have locking issues;
* Oracle or the API you use somehow messes things up for you, or you
  haven't thought things through enough, e.g. something updates your
  database while you query it, and similar potential problems.
 

Logan Shaw

Shuaib said:
I wonder if anybody could shed some light on a problem I am
encountering.

I have written a program in C that runs on Solaris 2.8. At busy times
of the day there may be multiple instances of it running (5-10), each
process taking approx. 3 seconds to complete.

Each instance of the program basically fetches data from an Oracle
database (using Remedy ARS API routines) and stores it in a user-defined
structure that I have declared as a global variable. The
problem I encounter is when multiple instances of the program are
running. Often I find that the data in memory being held in one
process is mixed up with the data in memory of another.

My guess is you've done something like this:

(1) open Oracle connection in parent process
(2) fork()
(3) do Oracle query in children

That may not be a valid way to use the Oracle db client code.
I would try doing this instead:

(1) fork()
(2) open Oracle connection in child
(3) do Oracle query in child

I'm not an Oracle expert; I'm only thinking of what might happen if
two instances of the client code share the same initial values in
their data structures and share the TCP connection.

Hope that helps.

- Logan
 

Michel Bardiaux

Logan said:
My guess is you've done something like this:

(1) open Oracle connection in parent process
(2) fork()
(3) do Oracle query in children

That may not be a valid way to use the Oracle db client code.
I would try doing this instead:

(1) fork()
(2) open Oracle connection in child
(3) do Oracle query in child

I'm not an Oracle expert; I'm only thinking of what might happen if
two instances of the client code share the same initial values in
their data structures and share the TCP connection.

Extremely likely.

I have found (painfully!) that even this leads to trouble:

(1) Connect to Oracle in parent
(2) fork
(3) Do nothing related to Oracle in child
(4) Exit child

or this:

(3) Do nothing related to Oracle in parent
(4) Exit parent

Because the Oracle client runtime registers an atexit handler, just
exiting the process becomes akin to an Oracle request!

Since there does not seem to be any way to unregister an atexit
handler, the only thing that works is to have *NO* Oracle connection
at all while you fork. (There may be something in the most recent
Oracle OCI API to circumvent that problem, but I haven't found it yet.)
 

David Schwartz

Michel said:
I have found (painfully!) that even this leads to trouble:

(1) Connect to Oracle in parent
(2) fork
(3) Do nothing related to Oracle in child
(4) Exit child

or this:

(3) Do nothing related to Oracle in parent
(4) Exit parent

Because the Oracle client runtime registers an atexit handler, just
exiting the process becomes akin to an Oracle request!

That's why the child should not call 'exit'; it should call '_exit',
which skips the atexit handlers. This same problem exists with stdio
streams.

DS
 
