On Feb 28, 2013, at 7:50 PM, Reuti wrote:
> On Feb 28, 2013, at 7:21 PM, Ralph Castain wrote:
>
>>
>> On Feb 28, 2013, at 9:53 AM, Reuti wrote:
>>
>>> On Feb 28, 2013, at 5:54 PM, Ralph Castain wrote:
>>>
>>>> Hmmm...the problem is that we are mapping procs using the provided slots
>>>> instead of dividing ...
On Feb 28, 2013, at 7:21 PM, Ralph Castain wrote:
>
> On Feb 28, 2013, at 9:53 AM, Reuti wrote:
>
>> On Feb 28, 2013, at 5:54 PM, Ralph Castain wrote:
>>
>>> Hmmm...the problem is that we are mapping procs using the provided slots
>>> instead of dividing the slots by cpus-per-proc. So we put too many ...
On Feb 28, 2013, at 9:53 AM, Reuti wrote:
> On Feb 28, 2013, at 5:54 PM, Ralph Castain wrote:
>
>> Hmmm...the problem is that we are mapping procs using the provided slots
>> instead of dividing the slots by cpus-per-proc. So we put too many on the
>> first node, and the backend daemon aborts the ...
On Feb 28, 2013, at 5:54 PM, Ralph Castain wrote:
> Hmmm...the problem is that we are mapping procs using the provided slots
> instead of dividing the slots by cpus-per-proc. So we put too many on the
> first node, and the backend daemon aborts the job because it lacks sufficient
> processors for ...
Oh! It works now. Thanks a lot and sorry about my negligence.
On 2013/3/1, Ake Sandgren wrote:
> On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote:
> > Sorry for those mistakes. I addressed all the three problems
> > - I put "implicit none" at the top of main program
> > - I initialized tag.
> - changed MPI_INT to MPI_INTEGER ...
Hmmm...the problem is that we are mapping procs using the provided slots
instead of dividing the slots by cpus-per-proc. So we put too many on the first
node, and the backend daemon aborts the job because it lacks sufficient
processors for cpus-per-proc=2.
Given that there are no current plans ...
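To make the arithmetic concrete, here is a sketch with an invented hostfile (the hostnames, slot counts, and binary name are placeholders): with -cpus-per-proc 2 the mapper should place slots/2 processes per node, so a node advertising 4 slots gets 2 ranks rather than 4.

   # hostfile: each node advertises 4 slots
   node01 slots=4
   node02 slots=4

   # 4 slots / 2 cpus-per-proc = 2 procs per node, 4 ranks in total
   mpirun -np 4 -hostfile hostfile -cpus-per-proc 2 ./a.out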
Hi,
First, I don't see any CPU utilization here, only %time (of a function relative
to the others in a process/application).
Generally, there can be many reasons for high CPU utilization. Two that come to
mind:
1. It depends on the network stack; e.g., the "tcp" way will use more CPU than
the "openib" ...
On Feb 28, 2013, at 5:29 PM, Ralph Castain wrote:
>
> On Feb 28, 2013, at 6:17 AM, Reuti wrote:
>
>> On Feb 28, 2013, at 8:58 AM, Reuti wrote:
>>
>>> On Feb 28, 2013, at 6:55 AM, Ralph Castain wrote:
>>>
>>>> I don't off-hand see a problem, though I do note that your "working"
>>>> version incorrectly reports ...
On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote:
> Sorry for those mistakes. I addressed all the three problems
> - I put "implicit none" at the top of main program
> - I initialized tag.
> - changed MPI_INT to MPI_INTEGER
> - "send_length" should be just "send", it was a typo.
>
>
> But the ...
I don't see tag being set to any value
On Feb 28, 2013, at 8:24 AM, Pradeep Jha wrote:
> Sorry for those mistakes. I addressed all the three problems
> - I put "implicit none" at the top of main program
> - I initialized tag.
> - changed MPI_INT to MPI_INTEGER
> - "send_length" should be ju
On Feb 28, 2013, at 6:17 AM, Reuti wrote:
> On Feb 28, 2013, at 8:58 AM, Reuti wrote:
>
>> On Feb 28, 2013, at 6:55 AM, Ralph Castain wrote:
>>
>>> I don't off-hand see a problem, though I do note that your "working"
>>> version incorrectly reports the universe size as 2!
>>
>> Yes, it was 2 in the ...
Sorry for those mistakes. I addressed all the three problems
- I put "implicit none" at the top of main program
- I initialized tag.
- changed MPI_INT to MPI_INTEGER
- "send_length" should be just "send", it was a typo.
But the code is still hanging in sendrecv. The present form is below:
main.f
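The main.f listing itself is cut off in this preview. For orientation only, the following is a minimal, hypothetical sketch (not the poster's code) of a send/receive done inside a subroutine with the fixes above applied: implicit none, an explicitly initialized tag, and MPI_INTEGER. It needs at least two ranks, e.g. mpirun -np 2 ./a.out.

      program main
      implicit none
      include 'mpif.h'
      integer rank, ierr
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call sendrecv(rank)
      call MPI_FINALIZE(ierr)
      end

      subroutine sendrecv(rank)
      implicit none
      include 'mpif.h'
      integer rank, ierr, tag, val
      integer status(MPI_STATUS_SIZE)
      tag = 1
c     Only rank 0 sends and only rank 1 receives; a blocking
c     MPI_RECV that no matching send will ever satisfy hangs.
      if (rank .eq. 0) then
         val = 42
         call MPI_SEND(val, 1, MPI_INTEGER, 1, tag,
     &                 MPI_COMM_WORLD, ierr)
      else if (rank .eq. 1) then
         call MPI_RECV(val, 1, MPI_INTEGER, 0, tag,
     &                 MPI_COMM_WORLD, status, ierr)
         print *, 'rank 1 received', val
      end if
      end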
On Feb 28, 2013, at 9:59 AM, Pradeep Jha wrote:
> Is it possible to call the MPI_send and MPI_recv commands inside a subroutine
> and not the main program?
Yes.
> I have written a minimal program for what I am trying to do. It is compiling
> fine but it is not working. The program just hangs ...
Hi,
I notice that a simple MPI program in which rank 0 sends 4 bytes to each rank
and receives a reply uses a considerable amount of CPU in system calls.
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 61.10 ...
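The summary above looks like strace -c output (that is an assumption; the poster does not name the tool). If so, the per-syscall breakdown for a single rank can be reproduced by attaching to one of the running processes, where <pid> is the process id of that rank:

   strace -c -p <pid>    # detach with Ctrl-C to print the summary table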
Thanks Ralph, you were right: I was not aware of --kill-on-bad-exit and
KillOnBadExit; setting it to 1 shuts down the entire MPI job when MPI_Abort()
is called. I was thinking this MPI protocol message was just transported by
slurm and then each task would exit. Oh well, I should not guess the
implementation ...
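For reference, the setting mentioned above can be applied per job on the srun command line or cluster-wide in slurm.conf; the rank count and binary name below are placeholders:

   # per job:
   srun --kill-on-bad-exit=1 -n 16 ./mpi_app

   # or cluster-wide, in slurm.conf:
   KillOnBadExit=1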
Is it possible to call the MPI_send and MPI_recv commands inside a
subroutine and not the main program? I have written a minimal program for
what I am trying to do. It is compiling fine but it is not working. The
program just hangs in the "sendrecv" subroutine. Any ideas how I can do it?
main.f
On Feb 28, 2013, at 8:58 AM, Reuti wrote:
> On Feb 28, 2013, at 6:55 AM, Ralph Castain wrote:
>
>> I don't off-hand see a problem, though I do note that your "working" version
>> incorrectly reports the universe size as 2!
>
> Yes, it was 2 in the case when it was working by giving only two hostnames ...
On Feb 28, 2013, at 6:55 AM, Ralph Castain wrote:
> I don't off-hand see a problem, though I do note that your "working" version
> incorrectly reports the universe size as 2!
Yes, it was 2 in the case when it was working by giving only two hostnames
without any dedicated slot count. What should it ...
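If the "universe size" being reported here is the MPI_UNIVERSE_SIZE attribute (an assumption; the thread does not say how the value is printed), a program can query what it actually gets, e.g.:

      program univ
      implicit none
      include 'mpif.h'
      integer ierr
      logical flag
      integer(kind=MPI_ADDRESS_KIND) usize
      call MPI_INIT(ierr)
c     MPI_UNIVERSE_SIZE is a predefined attribute on MPI_COMM_WORLD
      call MPI_COMM_GET_ATTR(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
     &                       usize, flag, ierr)
      if (flag) print *, 'universe size =', usize
      call MPI_FINALIZE(ierr)
      end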
I don't off-hand see a problem, though I do note that your "working" version
incorrectly reports the universe size as 2!
I'll have to take a look at this and get back to you on it.
On Feb 27, 2013, at 3:15 PM, Reuti wrote:
> Hi,
>
> I have an issue using the option -cpus-per-proc 2. As I have ...