Yes, Empire does the fluid-structure coupling. It couples OpenFOAM (fluid
analysis) and Abaqus (structural analysis).
Does all the software need to have the same MPI architecture in order to
communicate?
Regards,
Islem
On Tuesday, May 24, 2016, at 1:02 AM, Gilles Gouaillardet wrote:
what do y
Ralph Castain writes:
> Nobody ever filed a PR to update the branch with the patch - looks
> like you never responded to confirm that George’s proposed patch was
> acceptable.
I've never seen anything asking me about it, but I'm not an OMPI
developer in a position to review backports or even put
Megdich Islem writes:
> Yes, Empire does the fluid-structure coupling. It couples OpenFOAM (fluid
> analysis) and Abaqus (structural analysis).
> Does all the software need to have the same MPI architecture in order to
> communicate?
I doubt it's doing that, and presumably you have no control
Hi Ralph,
thank you very much for your answer and your example program.
On 05/23/16 17:45, Ralph Castain wrote:
I cannot replicate the problem - both scenarios work fine for me. I’m not
convinced your test code is correct, however, as you call Comm_free on the
inter-communicator but didn’t call Co
> On May 24, 2016, at 4:19 AM, Siegmar Gross wrote:
>
> Hi Ralph,
>
> thank you very much for your answer and your example program.
>
> On 05/23/16 17:45, Ralph Castain wrote:
>> I cannot replicate the problem - both scenarios work fine for me. I’m not
>> convinced your test code is correct
Doesn't Abaqus do its own environment setup? I.e., I'm *guessing* that you
should be able to set your environment startup files (e.g., $HOME/.bashrc) to
point your PATH / LD_LIBRARY_PATH to whichever MPI implementation you
want, and Abaqus will do whatever it needs to a) be independent
On May 24, 2016, at 7:19 AM, Siegmar Gross wrote:
>
> I don't see a difference for my spawned processes, because both functions will
> "wait" until all pending operations have finished, before the object will be
> destroyed. Nevertheless, perhaps my small example program worked all the years
> b
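The disagreement above turns on MPI_Comm_free versus MPI_Comm_disconnect for the
inter-communicator returned by MPI_Comm_spawn. For reference, here is a minimal
parent-side sketch of that pattern; it is not the test program from this thread,
and the child executable name "spawned_child" is a placeholder.

/* Minimal parent-side sketch of the spawn pattern under discussion.
 * Not the actual test program from this thread; "spawned_child" is a
 * placeholder.  Build with: mpicc spawn_parent.c -o spawn_parent
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm intercomm;   /* inter-communicator to the spawned children */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Spawn two copies of a child executable (placeholder name). */
    MPI_Comm_spawn("spawned_child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);

    if (rank == 0)
        printf("parent: children spawned\n");

    /* The point of contention: MPI_Comm_disconnect is collective, waits for
     * pending communication on the inter-communicator, and severs the
     * connection to the children, whereas MPI_Comm_free only marks the
     * handle for deallocation.  The children would make the matching call
     * on the communicator returned by MPI_Comm_get_parent. */
    MPI_Comm_disconnect(&intercomm);

    MPI_Finalize();
    return 0;
}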
Hi Ralph,
I have copied the relevant lines here, so that it is easier to see what
happens. "a.out" is your program, which I compiled with mpicc.
>> loki spawn 153 ompi_info | grep -e "OPAL repo revision:" -e "C compiler
>> absolute:"
>> OPAL repo revision: v1.10.2-201-gd23dda8
>> C co
Just to clarify, as this is a frequent misconception: the statement that the
absolute path will set up your remote environment is only true when using the
rsh/ssh launcher. It is not true when running under a resource manager (e.g.,
SLURM, LSF, PBSPro, etc.). In those cases, it is up to the RM co
Most commercial applications (e.g., Ansys Fluent, Abaqus, NASTRAN, and
PAM-CRASH) bundle IBM Platform MPI with the application, and it is the default
MPI when running parallel simulations. Depending on which Abaqus release
you're using, your choices are IBM Platform MPI or Intel MPI. I don't r
> On May 24, 2016, at 6:21 AM, Siegmar Gross wrote:
>
> Hi Ralph,
>
> I have copied the relevant lines here, so that it is easier to see what
> happens. "a.out" is your program, which I compiled with mpicc.
>
> >> loki spawn 153 ompi_info | grep -e "OPAL repo revision:" -e "C compiler
> >
Hi Ralph and Gilles,
the program breaks only if I combine "--host" and "--slot-list". Perhaps this
information is helpful. I use a different machine now, so that you can see that
the problem is not restricted to "loki".
pc03 spawn 115 ompi_info | grep -e "OPAL repo revision:" -e "C compiler
a
Works perfectly for me, so I believe this must be an environment issue - I am
using gcc 6.0.0 on CentOS7 with x86:
$ mpirun -n 1 -host bend001 --slot-list 0:0-1,1:0-1 --report-bindings
./simple_spawn
[bend001:17599] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core
1[hwt 0-1]], socke
Hi Siegmar,
Sorry for the delay; I seem to have missed this one.
It looks like there's an error in the way the native methods are processing
Java exceptions. The code correctly builds up an exception message for
cases where the MPI C layer returns non-success, but not if the problem occurred
in one of th
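For readers unfamiliar with the Java bindings' native layer: the exception
handling in question lives in C (JNI) code that wraps the MPI C calls. The
sketch below only illustrates how a native method can turn a non-MPI_SUCCESS
return code into a Java exception; it is not the Open MPI bindings' actual
code, the native method name is hypothetical, it throws a plain
java.lang.RuntimeException rather than the bindings' own exception class, and
it assumes the communicator's error handler is MPI_ERRORS_RETURN so that
errors are returned instead of aborting.

/* Illustrative JNI sketch (not the Open MPI Java bindings' real code):
 * convert a failing MPI C call into a Java exception inside a
 * hypothetical native method.  Assumes MPI_ERRORS_RETURN is in effect.
 */
#include <jni.h>
#include <stdio.h>
#include <mpi.h>

JNIEXPORT void JNICALL Java_Example_nativeBarrier(JNIEnv *env, jobject obj)
{
    int rc = MPI_Barrier(MPI_COMM_WORLD);

    if (rc != MPI_SUCCESS) {
        char err[MPI_MAX_ERROR_STRING];
        char msg[MPI_MAX_ERROR_STRING + 32];
        int len = 0;

        /* Build a readable message from the MPI error code. */
        MPI_Error_string(rc, err, &len);
        snprintf(msg, sizeof(msg), "MPI_Barrier failed: %s", err);

        /* Raise a Java exception; a generic RuntimeException is used here
         * instead of the bindings' own exception class. */
        jclass exc = (*env)->FindClass(env, "java/lang/RuntimeException");
        if (exc != NULL)
            (*env)->ThrowNew(env, exc, msg);
    }
}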
> On May 18, 2016, at 6:59 PM, Jeff Squyres (jsquyres) wrote:
>
> On May 18, 2016, at 6:16 PM, Ryan Novosielski wrote:
>>
>> I’m pretty sure this is no longer relevant (having read Roland’s messages
>> about it from a couple of years ago now). Can you please confirm that for
>> me, and the
On May 21, 2016, at 12:17 PM, Andrea Negri wrote:
>
> Hi, in the last few days I ported my entire Fortran MPI code to "use
> mpi_f08". You really did a great job with this interface. However,
> since HDF5 still uses integers to handle communicators, I have a
> module where I still use "use mpi",
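The mismatch described here is between the mpi_f08 TYPE(MPI_Comm) handles and
the plain INTEGER handles that an integer-based interface such as HDF5's
Fortran API expects. As background, the MPI standard defines a mapping between
the two representations; the C sketch below only illustrates that mapping
(MPI_Comm_c2f / MPI_Comm_f2c) and is not a fix for the Fortran module in
question.

/* Illustration of MPI's C <-> Fortran-integer handle conversion; the
 * mpi_f08 TYPE(MPI_Comm) wraps the same integer handle in its MPI_VAL field.
 * Build with: mpicc handle_demo.c -o handle_demo
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* Convert the C handle to the Fortran INTEGER handle that
     * integer-based interfaces expect. */
    MPI_Fint fcomm = MPI_Comm_c2f(MPI_COMM_WORLD);

    /* ...and back again. */
    MPI_Comm ccomm = MPI_Comm_f2c(fcomm);

    int rank;
    MPI_Comm_rank(ccomm, &rank);
    printf("rank %d: Fortran handle for MPI_COMM_WORLD is %d\n",
           rank, (int) fcomm);

    MPI_Finalize();
    return 0;
}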