I happened to misspell a hostname, and then mpirun hung.
[mishima@manage ~]$ mpirun -np 6 -host node05,nod06 ~/mis/openmpi/demos/myprog
nod06: Unknown host
mpirun: abort is already in progress...hit ctrl-c again to forcibly
terminate
Tetsuya
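A launch script can fail fast instead of hanging by resolving each `-host` entry before handing the list to mpirun. A minimal C sketch, assuming a POSIX system; the host names are the ones from the command above, `nod06` being the typo:

```c
/* Pre-flight check: verify that every host intended for mpirun's -host
 * list actually resolves, so a typo fails fast instead of hanging the
 * launch. "nod06" is the misspelled name from the report above. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int host_resolves(const char *name) {
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6, either is fine */
    if (getaddrinfo(name, NULL, &hints, &res) != 0)
        return 0;                    /* lookup failed: unknown host */
    freeaddrinfo(res);
    return 1;
}

int main(void) {
    const char *hosts[] = { "node05", "nod06" };
    size_t i;
    for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++)
        if (!host_resolves(hosts[i]))
            fprintf(stderr, "%s: Unknown host\n", hosts[i]);
    return 0;
}
```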
No problem - we appreciate you taking the time to confirm. Jeff encountered
something late today, so we may indeed still have a lingering problem. :-(
Will keep you posted
Ralph
On Mar 13, 2014, at 5:08 PM, tmish...@jcity.maeda.co.jp wrote:
Hi Ralph, I'm late to your release again due to TD.
At that time, I manually applied #4386 and #4383 to 1.7 branch
- namely openmpi-1.7.5rc2, and did the check. I might have
made some mistake.
Now, I found that openmpi-1.7.5rc3 had just been released and confirmed
that it works fine. Thanks.
Tetsuya
On Thu, 2014-03-13 at 13:13 -0700, Ross Boylan wrote:
> I might just switch to mpi.send, though the fact that something is
> going
> wrong makes me nervous.
I tried using mpi.send, but it fails also. The failure behavior is
peculiar.
After I launch the processes I can send a message to the assem
I changed the calls to dlopen in Rmpi.c so that it tried libmpi.so
before libmpi.so.0. I also rebuilt MPI, R, and Rmpi as suggested
earlier by Bennet Fauber
(http://www.open-mpi.org/community/lists/users/2014/03/23823.php).
Thanks Bennet!
My theory is that the change to dlopen by itself was sufficient to fix the problem.
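The load-order change described above can be sketched as a small generic helper; this is an illustration of the idea, not the actual Rmpi.c patch. RTLD_GLOBAL matters for Rmpi because Open MPI's own plugins, dlopen-ed later, must be able to see the MPI symbols:

```c
/* Sketch of the dlopen fallback described above: try the unversioned
 * library name first, then fall back to the versioned one.  This is an
 * illustration of the idea, not the actual Rmpi.c patch. */
#include <dlfcn.h>
#include <stddef.h>

void *dlopen_first(const char *names[], size_t n) {
    size_t i;
    for (i = 0; i < n; i++) {
        void *h = dlopen(names[i], RTLD_NOW | RTLD_GLOBAL);
        if (h != NULL)
            return h;        /* first name that opens wins */
    }
    return NULL;             /* none of the candidates could be opened */
}
```

With a list like { "libmpi.so", "libmpi.so.0" } this reproduces the order described above: the unversioned name is tried first, and the versioned soname only if that fails.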
On Wed, 2014-03-12 at 10:52 -0400, Bennet Fauber wrote:
> My experience with Rmpi and OpenMPI is that it doesn't seem to do well
> with dlopen or dynamic loading. I recently installed R 3.0.3, and
> Rmpi, which failed when built against our standard OpenMPI but
> succeeded using the following
Huh - well, the man page is definitely wrong. We ignore all other app
information on the command line, but not the MCA parameters.
On Mar 13, 2014, at 6:27 AM, Jianyu Liu wrote:
> Hi,
>
> The man page says all other command line options will be ignored if --appfile
> is used.
>
> So I am just wondering
Hi,
The man page says all other command line options will be ignored if --appfile
is used.
So I am just wondering:
1. how to specify the "--mca btl ^sm" option while launching MPMD applications
with --appfile?
2. how to know whether the "--mca btl ^sm" option worked?
Appreciate your kind help
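For reference, a sketch of how this can look in practice (the program names and process counts here are made up for illustration): the appfile carries only the per-application lines, and the MCA parameter goes on the mpirun command line, which, as noted above, is still honored even with --appfile.

```shell
# my_appfile: one line per application context
# (program names and -np counts are hypothetical)
-np 4 ./master
-np 8 ./worker

# MCA parameters given on the mpirun line apply to all app contexts:
mpirun --mca btl ^sm --appfile my_appfile

# Equivalent, via the environment:
export OMPI_MCA_btl=^sm
mpirun --appfile my_appfile
```

As for checking that sm was really excluded, one option is to raise the BTL framework's verbosity (for example, adding --mca btl_base_verbose 100; the level shown is illustrative) and confirm that the sm component is absent from the BTLs reported at startup.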
It's okay - we thought we had it fixed, but not for that scenario.
On Mar 12, 2014, at 9:02 PM, tmish...@jcity.maeda.co.jp wrote:
>
>
> Sorry for disturbing, please keep going ...
>
> Tetsuya
>
>> Yes, I know - I am just finishing the fix now.
>>
>> On Mar 12, 2014, at 8:48 PM, tmish...@jcity.maeda.co.jp wrote:
We haven't figured it out yet - it seems somewhat erratic as your observations
don't match anything we are seeing on our machines. We know the coll/ml
component is causing trouble for Java applications (but nothing else, oddly
enough), but that doesn't match your experience.
On Mar 12, 2014, a
Sorry for disturbing, please keep going ...
Tetsuya
> Yes, I know - I am just finishing the fix now.
>
> On Mar 12, 2014, at 8:48 PM, tmish...@jcity.maeda.co.jp wrote:
>
> >
> >
> > Hi Ralph, this problem is not fixed completely by today's latest
> > ticket #4383, I guess ...
> >
> > https://sv
Just checking if there's some solution for this.
Thank you,
Saliya
On Tue, Mar 11, 2014 at 10:54 PM, Saliya Ekanayake wrote:
> I forgot to mention that I tried the hello.c version instead of Java and
> it too failed in a similar manner, but
>
> 1. On a single node with --mca btl ^tcp it went up