Hi Jeff,
Thank you so much. I will try it.
Best Regards,
Qianjin
Sent from my iPhone
On Nov 1, 2019, at 11:26 AM, Jeff Squyres (jsquyres) wrote:
Sorry for all this back-n-forth, but I honestly haven't thought about or tried
v1.10 in (literally) years. You should ping the WRF maintainers and tell them
to upgrade to a later version of Open MPI; the v1.10 series is no longer
supported, and clearly the networking APIs that it's using no longer work.
Open MPI doesn't have a public function in its Fortran interface named
"random_seed". So I'm not sure what that's about.
On Nov 1, 2019, at 11:36 AM, Qianjin Zheng wrote:
Hi Jeff,
After the service was upgraded, I recompiled my model with Open MPI v1.10.7,
and the application failed to compile. I got an error message; however, before
the service was upgraded I did not get any error. Here is the error message:
call random_seed (PUT=seed)
Good day,
We have a cluster with several MPI distributions and SLURM serving as the queue
manager. We also have a SLURM SPANK plugin; it is simple: you just define some
functions in a library and SLURM loads and eventually calls them.
The issue arises with Open MPI 4.0.1 (and possibly greater) and mpirun
Your last mail didn't make it through to the list because your log file wasn't
compressed and therefore didn't fit under the size limit (we limit mail size
because our web archiving service limits size).
But that's ok -- I got it, because I was CC'ed.
I viewed it and I see what the problem is.
On Nov 1, 2019, at 10:14 AM, Reuti wrote:
For the most part, this whole thing needs to get documented.
Especially that the colon is a disallowed character in the directory name: any
suffix ":foo" will just be removed, AFAICS without any error output about foo.
> On Nov 1, 2019, at 14:46, Jeff Squyres (jsquyres) via users wrote:
>
> On Nov 1, 2019, at 9:34 AM, Jeff Squyres (jsquyres) via users
> wrote:
>>
>>> Point to make: it would be nice to have an option to suppress the output on
>>> stdout and/or stderr when output redirection to file is requested.
On Nov 1, 2019, at 9:34 AM, Jeff Squyres (jsquyres) via users
wrote:
>
>> Point to make: it would be nice to have an option to suppress the output on
>> stdout and/or stderr when output redirection to file is requested. In my
>> case, having stdout still visible on the terminal is desirable, but …
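The behavior being asked for (per-rank output files without a duplicated terminal copy) can be pictured with plain shell redirection. A minimal sketch follows; the mpirun command in the comment is an assumption for illustration (spelled from memory, not copied from the thread) and is not executed here:

```shell
# The request above: write output to a file but NOT echo it on the terminal.
# In plain shell terms this is the difference between tee and a plain redirect:
echo "log line" | tee rank0.out > /dev/null   # file gets a copy, terminal stays silent
cat rank0.out                                  # show the file only on demand

# With Open MPI's mpirun, --output-filename behaves like the tee form
# (per-rank files PLUS the terminal), so today the terminal copy can only be
# silenced by redirecting mpirun itself, e.g. (assumed usage, not run here):
#   mpirun -np 4 --output-filename joblog ./my_app > /dev/null
```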
On Oct 31, 2019, at 6:43 PM, Joseph Schuchart via users
wrote:
>
> Just to throw in my $0.02: I recently found that the output to stdout/stderr
> may not be desirable: in an application that writes a lot of log data to
> stderr on all ranks, stdout was significantly slower than the files I …
Joseph,
I had to use the absolute path of the fork agent.
I may have misunderstood your request.
Now it seems you want each task's stderr redirected to a unique file, but not
(duplicated) to mpirun's stderr. Is that right?
If so, instead of the --output-filename option, you can do it "manually".
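One way to sketch that "manual" approach: Open MPI exports OMPI_COMM_WORLD_RANK into each launched task's environment, so a small wrapper can give every rank its own stderr file while leaving stdout alone. The script name errsplit.sh and the application ./my_app are made-up names for illustration:

```shell
# Hypothetical per-rank stderr splitting without --output-filename.
# Under Open MPI this wrapper would be used as:
#   mpirun -np 4 ./errsplit.sh ./my_app
cat > errsplit.sh <<'EOF'
#!/bin/sh
# Redirect this task's stderr to a rank-specific file, keep stdout as-is.
# OMPI_COMM_WORLD_RANK is set by Open MPI; default to 0 for standalone runs.
exec "$@" 2> "stderr.${OMPI_COMM_WORLD_RANK:-0}"
EOF
chmod +x errsplit.sh

# Simulate what one launched rank would see (no mpirun needed for the demo):
OMPI_COMM_WORLD_RANK=3 ./errsplit.sh sh -c 'echo to-stdout; echo to-stderr >&2'
cat stderr.3
```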
Gilles,
Thanks for your suggestions! I just tried both of them, see below:
On 11/1/19 1:15 AM, Gilles Gouaillardet via users wrote:
Joseph,
you can achieve this via an agent (and it works with DDT too)
For example, the nostderr script below redirects each MPI task's stderr
to /dev/null (so …
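The nostderr script itself was truncated in the original mail; what follows is a plausible reconstruction of such a wrapper, not the author's actual script. The mpirun line in the comment, including the orte_fork_agent MCA parameter, is likewise an assumption pieced together from context (note the absolute path, as mentioned earlier in the thread):

```shell
# Reconstruction of a "nostderr" fork-agent wrapper: drop each task's
# stderr entirely, then exec the real command.
cat > nostderr <<'EOF'
#!/bin/sh
exec "$@" 2> /dev/null
EOF
chmod +x nostderr

# Assumed Open MPI usage (not run here):
#   mpirun -np 4 --mca orte_fork_agent /absolute/path/to/nostderr ./my_app

# Standalone demo of the wrapper's effect:
./nostderr sh -c 'echo kept; echo dropped >&2'
```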