MPI and Pthread



Pallav Singh

Hi,

I am trying to compile the following program and am getting an error
because the compiler cannot find the include file "mpi.h". Is it
provided with the Linux kernel, or do we need to download a library
for this?

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

/* A simple test of Reduce with all choices of root process */
int main( int argc, char *argv[] )
{
    int errs = 0;
    int rank, size, root;
    int *sendbuf, *recvbuf, i;
    int minsize = 2, count;
    MPI_Comm comm;

    MPI_Init( &argc, &argv );

    comm = MPI_COMM_WORLD;
    /* Determine the sender and receiver */
    MPI_Comm_rank( comm, &rank );
    MPI_Comm_size( comm, &size );

    for (count = 1; count < 130000; count = count * 2) {
        sendbuf = (int *)malloc( count * sizeof(int) );
        recvbuf = (int *)malloc( count * sizeof(int) );
        for (root = 0; root < size; root++) {
            /* Each element of sendbuf holds its own index */
            for (i = 0; i < count; i++) sendbuf[i] = i;
            for (i = 0; i < count; i++) recvbuf[i] = -1;
            MPI_Reduce( sendbuf, recvbuf, count, MPI_INT, MPI_SUM,
                        root, comm );
            if (rank == root) {
                /* The sum over all ranks of element i should be i * size */
                for (i = 0; i < count; i++) {
                    if (recvbuf[i] != i * size) {
                        errs++;
                    }
                }
            }
        }
        free( sendbuf );
        free( recvbuf );
    }

    MPI_Finalize();
    return errs;
}

Thanks
Pallav Singh
 

Malcolm McLean

> Hi,
>
> I am trying to compile the following program and am getting an error
> because the compiler cannot find the include file "mpi.h".

A standard C compiler won't compile MPI programs. You need an MPI
compiler, usually called "mpicc". It should link the MPI library
automatically.
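For example, with a typical MPI installation (MPICH or Open MPI; the
file name and process count here are only illustrative):

```shell
# Compile with the MPI wrapper, which supplies the mpi.h include path
# and the MPI libraries automatically
mpicc -o reduce_test reduce_test.c

# Launch four processes under the MPI runtime
mpiexec -n 4 ./reduce_test
```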
 

blmblm

> A standard C compiler won't compile MPI programs. You need an MPI
> compiler, usually called "mpicc". It should link the MPI library
> automatically.

Are you sure you're not thinking of OpenMP rather than MPI? In at
least some implementations of MPI, mpicc is just a wrapper script
that calls an underlying compiler with appropriate flags to help it
find the include file (mpi.h) and libraries.

But the OP does indeed need an MPI implementation to compile the
program shown.
 

Malcolm McLean

> Are you sure you're not thinking of OpenMP rather than MPI? In at
> least some implementations of MPI, mpicc is just a wrapper script
> that calls an underlying compiler with appropriate flags to help it
> find the include file (mpi.h) and libraries.
On my MPI implementation the mpicc was very fussy, rejecting C++ style
comments. So I would guess that it was a special compiler. I never
tried to dive into its internals, however.
 

blmblm

> On my MPI implementation the mpicc was very fussy, rejecting C++ style
> comments. So I would guess that it was a special compiler. I never
> tried to dive into its internals, however.

I guess that's possible, though it seems equally possible to me that
it called a regular C compiler but passed it flags that made it reject
C++-style comments. I can't really imagine why someone implementing
MPI would produce a full compiler -- I mean, the MPI API defines a set
of library functions rather than language extensions -- but what do
I know, right?

(What implementation was this? I'm curious.)
 

blmblm

> Are you sure you're not thinking of OpenMP rather than MPI? In at
> least some implementations of MPI, mpicc is just a wrapper script
> that calls an underlying compiler with appropriate flags to help it
> find the include file (mpi.h) and libraries.
>
> But the OP does indeed need an MPI implementation to compile the
> program shown.

This being clc, I feel obligated to dot the T's and cross the I's:

To compile and link the program the OP will need the include
file and the library file(s) from an MPI implementation.
The runtime-system part of the implementation (mpiexec or mpirun
command, e.g.) will also be needed to execute the program.
 

Malcolm McLean

> I guess that's possible, though it seems equally possible to me that
> it called a regular C compiler but passed it flags that made it reject
> C++-style comments. I can't really imagine why someone implementing
> MPI would produce a full compiler -- I mean, the MPI API defines a set
> of library functions rather than language extensions -- but what do
> I know, right?
>
> (What implementation was this? I'm curious.)
It was running on a Beowulf. There were endless versions, for 64 bit
and 32-bit libraries. The system kept falling over because the paths
were set up incorrectly and the wrong version of mpirun was being used
with the mpi compiler, or likewise.

Whilst I agree that someone wouldn't write a compiler from scratch
just to compile MPI, they might have patched a compiler to get rid of
some of the problems, such as IO not working normally any more.
 

blmblm

[ snip ]
> It was running on a Beowulf. There were endless versions, for 64 bit
> and 32-bit libraries. The system kept falling over because the paths
> were set up incorrectly and the wrong version of mpirun was being used
> with the mpi compiler, or likewise.

Sounds very irritating. So this was a heterogeneous cluster? I don't
have any real experience with those -- in theory MPI is supposed to be
able to deal with them, but in practice, well, we know the difference
between theory and practice, right?
> Whilst I agree that someone wouldn't write a compiler from scratch
> just to compile MPI, they might have patched a compiler to get rid of
> some of the problems, such as IO not working normally any more.

Define "normally" :). (I/O does get a little potentially
strange when a "program" consists of multiple processes running on
different computers that don't share access to a terminal window
and indeed may not even have access to a shared filesystem.)

I guess it's possible, though it seems like making stdout/stderr
do something sensible would be something one would address in a
runtime system rather than in the compiler. But you used this
system and I didn't ....

I think I'm drifting further and further off topic, here, though.
 

Malcolm McLean

> I guess it's possible, though it seems like making stdout/stderr
> do something sensible would be something one would address in a
> runtime system rather than in the compiler. But you used this
> system and I didn't ....
I never delved into the system's internals. But all the standard
output was scooped up by the runtime system and mixed together, so
effectively you could only call printf() from the root node. If you
had any bugs things got very strange. Sometimes it would report that a
node had failed, but only if you were lucky. Usually it tended to get
tangled up in a web of unresolved messages, and the program output
would be of no help - the output log didn't reflect the point the
program had reached, at all. Whilst they could have done everything by
rewriting the standard library, I suspect there were compiler patches
to ease things along, maybe inserting calls to the output scavenger,
or something like that.
 

Seebs

> Hi,
>
> I am trying to compile the following program and am getting an error
> because the compiler cannot find the include file "mpi.h". Is it
> provided with the Linux kernel, or do we need to download a library
> for this?

This is largely non-topical, but:

> #include "mpi.h"

That it's in "" rather than <> suggests that this file was supposed to
be provided with or as part of whatever source you got the program from.

That's the sum total of C-related answers you can get. My guess is that
you are very deeply confused, because you seem to think that what appears
to be a normal userspace library thing should be "provided under Linux
kernel". That's nonsensical.

-s
 

Keith Thompson

Seebs said:
> This is largely non-topical, but:
>
> That it's in "" rather than <> suggests that this file was supposed to
> be provided with or as part of whatever source you got the program from.
>
> That's the sum total of C-related answers you can get. My guess is that
> you are very deeply confused, because you seem to think that what appears
> to be a normal userspace library thing should be "provided under Linux
> kernel". That's nonsensical.

For someone who doesn't quite get the distinction between a kernel and
an OS, it's not entirely nonsensical. There are plenty of headers and
libraries that aren't part of the C standard library, or even of POSIX,
that are (typically) provided by default by many Linux (or GNU/Linux if
you prefer; *please* let's not get into that debate) distributions.

mpi.h isn't one of them.
 

blmblm

> I never delved into the system's internals. But all the standard
> output was scooped up by the runtime system and mixed together, so
> effectively you could only call printf() from the root node.

That sounds like what in my experience is fairly typical for MPI
implementations -- output to stdout/stderr from all processes is
sent back to the root node, with no particular attempt to impose
any ordering among messages from different processes.
> If you
> had any bugs things got very strange. Sometimes it would report that a
> node had failed, but only if you were lucky. Usually it tended to get
> tangled up in a web of unresolved messages, and the program output
> would be of no help - the output log didn't reflect the point the
> program had reached, at all.

That sounds like a not-very-satisfactory implementation, and not very
like any of the ones I can remember using (MPICH, LAM/MPI, OpenMPI).
I wonder if someone in comp.parallel.mpi would recognize it from your
description. I'm getting kind of curious!
> Whilst they could have done everything by
> rewriting the standard library, I suspect there were compiler patches
> to ease things along, maybe inserting calls to the output scavenger,
> or something like that.

To me it seems likeliest that an MPI implementation would deal with
the standard input/output streams via the runtime system -- I mean,
that's the component that launches processes, and it seems to me that
the sensible approach would be for it to connect those streams to itself
rather than either modifying the compiler or rewriting the standard
library. But I also haven't delved into internals.
 

Malcolm McLean

> That sounds like a not-very-satisfactory implementation, and not very
> like any of the ones I can remember using (MPICH, LAM/MPI, OpenMPI).
> I wonder if someone in comp.parallel.mpi would recognize it from your
> description. I'm getting kind of curious!
It was LAM - at least that name is recognisable. However there was
also mpich installed as well. Whenever the cluster was rebooted the
paths would get tangled up and things would refuse to run. Then if you
added Fortran you had to run another compiler because it would refuse
to link the Fortran runtime library, but that compiler wouldn't
recognise C main as the entry point unless you gave it a special flag
to say "no main". It was all very intricate.

The worst problem with it, however, was quite trivial from a technical
point of view. You had to submit jobs to the queue, which meant that
each one needed a script. So of course there were constantly little
typing errors and other problems in these scripts which meant that the
job would get to the head of the queue and then crash out because it
couldn't read an input file, or, if you got really unlucky, run for two
weeks then report "Can't open output file". I got into the habit of
attempting to open the output for writing first thing to avoid that
little scenario.
 

blmblm

> It was LAM - at least that name is recognisable. However there was
> also mpich installed as well. Whenever the cluster was rebooted the
> paths would get tangled up and things would refuse to run.

What a mess! Sounds like a badly-configured system. I wonder why
there were two implementations installed. (I can actually imagine
somewhat sensible reasons for having more than one, but it does
complicate things.)
> Then if you
> added Fortran you had to run another compiler because it would refuse
> to link the Fortran runtime library, but that compiler wouldn't
> recognise C main as the entry point unless you gave it a special flag
> to say "no main". It was all very intricate.

Huh. I just did some brief experiments with an oldish version
of LAM/MPI and was able to get a simple program with a C main
and an F77 subroutine going without any particular trouble.
(It probably helps that I did some similar experiments with the
successor implementation OpenMPI recently.) I vaguely remember
hearing at some point, though, that in one of these mixed-language
programs it was best not to try to do I/O in both languages, and
I can believe that doing so could lead to trouble.

Also, as best I can tell, while "mpicc" is a program rather than
a script, it appears to just set up flags and call an "underlying
compiler" (gcc on the system where I tried this). Assembly-language
output from compiling with mpicc was identical to what I got when I
compiled with gcc directly.
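For the record, both major wrapper families document a flag that prints
the underlying compiler command without compiling anything (these are
the documented spellings for MPICH and Open MPI respectively):

```shell
# MPICH-family wrapper: print the underlying compiler invocation
mpicc -show

# Open MPI wrapper: equivalent option
mpicc -showme
```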
> The worst problem with it, however, was quite trivial from a technical
> point of view. You had to submit jobs to the queue, which meant that
> each one needed a script. So of course there were constantly little
> typing errors and other problems in these scripts which meant that the
> job would get to the head of the queue and then crash out because it
> couldn't read an input file, or, if you got really unlucky, run for two
> weeks then report "Can't open output file". I got into the habit of
> attempting to open the output for writing first thing to avoid that
> little scenario.

It's been a *long* time since I've worked with that kind of system.
It does have the advantage that if you can request exclusive use
of nodes you can get meaningful timings for programs, which is nice
if part of your goal is academic publication. (It's more difficult
to do that -- get meaningful timings -- if you're sharing the nodes
that make up the cluster with other users.)

Of course one of the hazards of any kind of batch work is that it can
take a while to find out about errors .... In a perfect world a system
like the one you describe would have some way of doing short tests with
quick turnaround, so you could debug the programs and the scripts. In
a not-perfect world, eh, <shrug>.
 

Malcolm McLean

> Huh. I just did some brief experiments with an oldish version
> of LAM/MPI and was able to get a simple program with a C main
> and an F77 subroutine going without any particular trouble.
To replicate the problem I had, both the C code and the Fortran code
need to call the maths library functions.

Then mpicc will refuse to link the Fortran unless you tell it to link
the Fortran library explicitly, at which point it will fail because
the symbols are multiply-defined. However mpif77 comes to your rescue.
Its complaint is only that main is multiply defined, which you can fix
with the -Nomain flag. To actually find out about that flag took me an
entire day of searching about on the web.
 
