[OMPI users] Relative indexing error in OpenMPI 1.8.7

2015-10-09 Thread waku2005
Dear OpenMPI users

A relative indexing error occurs on my small CentOS cluster.
What should I check, and where?

Environment:
- 4-node GbE cluster (CentOS 6.7)
- OpenMPI 1.8.7 (built with the system compiler, gcc 4.4.7 20120313,
  and installed in /usr/local/openmpi-1.8.7)
- password-less ssh between nodes (RSA key authentication)

This is "myhosts" file:
--
ensis10 slots=4
ensis12 slots=6
ensis13 slots=6
ensis14 slots=6
--

Command line and error message:
$ mpirun --hostfile ./myhosts -np 4 -host +n2 hostname
--
A relative host was specified, but no prior allocation has been made.
Thus, there is no way to determine the proper host to be used.

-host: +n2

Please see the orte_hosts man page for further information.
--

# With a direct hostname specification it works fine, e.g.:
# [@ensis10] $ mpirun --hostfile ./myhosts -np 4 -host ensis12 hostname
# ensis12
# ensis12
# ensis12
# ensis12
#

Thanks in advance


-- 

S.Wakashima  (waku2...@gmail.com)


Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread Lisandro Dalcin
On 8 October 2015 at 14:54, simona bellavista  wrote:
>

>>
>> I cannot figure out how spawn would work with a string-command. I tried
>> MPI.COMM_SELF.Spawn(cmd, args=None,maxproc=4) and it just hangs
>

MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello",
"World!"],maxprocs=1).Disconnect()

Could you try the line above and confirm whether it hangs?

>
> I couldn't figure out how to run Spawn with a string-like command, in fact
> the command that I want to run varies for each processor.

Use maxprocs=1 and make different spawn calls.
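
Something along these lines should do it (a rough, untested sketch; the
command names in the list are placeholders for whatever each of your tasks
actually runs):

# Untested sketch: one Spawn call per command, with maxprocs=1 each.
# "./worker_a" etc. are placeholder commands, not real programs.
from mpi4py import MPI

commands = ["./worker_a", "./worker_b", "./worker_c"]

children = [MPI.COMM_SELF.Spawn(cmd, args=[], maxprocs=1) for cmd in commands]

# If the spawned programs are MPI programs, disconnect once they are done.
for child in children:
    child.Disconnect()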

However, I have to insist: if you are using mpi4py as a tool to spawn
a bunch of different processes that work in isolation and then collect
the results at the end, then mpi4py is likely not the right tool for the
task, at least if you do not have previous experience with MPI
programming.


-- 
Lisandro Dalcin

Research Scientist
Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
Numerical Porous Media Center (NumPor)
King Abdullah University of Science and Technology (KAUST)
http://numpor.kaust.edu.sa/

4700 King Abdullah University of Science and Technology
al-Khawarizmi Bldg (Bldg 1), Office # 4332
Thuwal 23955-6900, Kingdom of Saudi Arabia
http://www.kaust.edu.sa

Office Phone: +966 12 808-0459


Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread simona bellavista
2015-10-09 9:40 GMT+02:00 Lisandro Dalcin :

> On 8 October 2015 at 14:54, simona bellavista  wrote:
> >
>
> >>
> >> I cannot figure out how spawn would work with a string-command. I tried
> >> MPI.COMM_SELF.Spawn(cmd, args=None,maxproc=4) and it just hangs
> >
>
> MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello",
> "World!"],maxprocs=1).Disconnect()
>
> Could you try the line above and confirm whether it hangs?
>

I have tried the line above and it hangs


> >
> > I couldn't figure out how to run Spawn with a string-like command, in
> fact
> > the command that I want to run varies for each processor.
>
> Use maxprocs=1 and make different spawn calls.
>
> However, I have to insist. If you are using mpi4py as a tool to spawn
> a bunch of different processes that work in isolation and then collect
> result at the end, then mpi4py is likely not the right tool for the
> task, at least if you do not have previous experience with MPI
> programming.
>
Well, I don't have much experience in MPI programming, but I do use and
modify existing MPI codes, and I thought MPI would be the easiest choice.
ClusterShell looks like overkill for the goal I would like to achieve.
What shall I use instead? Shall I try the multiprocessing module?




Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread Lisandro Dalcin
On 9 October 2015 at 12:05, simona bellavista  wrote:
>
>
> 2015-10-09 9:40 GMT+02:00 Lisandro Dalcin :
>>
>> On 8 October 2015 at 14:54, simona bellavista  wrote:
>> >
>>
>> >>
>> >> I cannot figure out how spawn would work with a string-command. I tried
>> >> MPI.COMM_SELF.Spawn(cmd, args=None,maxproc=4) and it just hangs
>> >
>>
>> MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello",
>> "World!"],maxprocs=1).Disconnect()
>>
>> Could you try the line above and confirm whether it hangs?
>
>
> I have tried the line above and it hangs
>

OK, since "echo" is not an MPI application, it seems Open MPI does
not support spawning non-MPI programs.

>>
>> >
>> > I couldn't figure out how to run Spawn with a string-like command, in
>> > fact
>> > the command that I want to run varies for each processor.
>>
>> Use maxprocs=1 and make different spawn calls.
>>
>> However, I have to insist. If you are using mpi4py as a tool to spawn
>> a bunch of different processes that work in isolation and then collect
>> result at the end, then mpi4py is likely not the right tool for the
>> task, at least if you do not have previous experience with MPI
>> programming.
>>
> Well, I don't have a big experience in MPI programming, but I do use and
> modify existing MPI codes, and I thought MPI would be easiest choice.

Have you seen these existing MPI codes calling back to the shell to
execute commands?

> Clustershells looks a bit an overshoot for the goal I would like to achieve.
> What shall I use instead? Shall I try multiprocessing module?
>

As long as running on a single compute node with many cores is enough
for your application, there is no reason to use MPI. Python's
multiprocessing, or perhaps the Python 3 "concurrent.futures" package
(there is a backport for Python 2 on PyPI), would be trivial to get
working.
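
For instance, something like this (a rough sketch with placeholder command
strings, using only the Python 3 standard library):

# Sketch: run independent shell commands in parallel on a single node.
# The strings in `commands` are placeholders for your actual per-task commands.
import subprocess
from concurrent.futures import ThreadPoolExecutor

commands = ["./task input_00", "./task input_01", "./task input_02"]

def run(cmd):
    # Each worker simply shells out and returns the command's exit code.
    return subprocess.call(cmd, shell=True)

with ThreadPoolExecutor(max_workers=4) as pool:
    exit_codes = list(pool.map(run, commands))

print(exit_codes)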


-- 
Lisandro Dalcin

Research Scientist
Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
Numerical Porous Media Center (NumPor)
King Abdullah University of Science and Technology (KAUST)
http://numpor.kaust.edu.sa/

4700 King Abdullah University of Science and Technology
al-Khawarizmi Bldg (Bldg 1), Office # 4332
Thuwal 23955-6900, Kingdom of Saudi Arabia
http://www.kaust.edu.sa

Office Phone: +966 12 808-0459


Re: [OMPI users] python, mpi and shell subprocess: orte_error_log

2015-10-09 Thread Ralph Castain
FWIW: OpenMPI does support spawning of both MPI and non-MPI jobs. If you are 
spawning a non-MPI job, then you have to -tell- us that so we don’t hang trying 
to connect the new procs to the spawning proc as per MPI requirements.

This is done by providing an info key to indicate that the child job is 
non-MPI, as explained in the MPI_Comm_spawn man page:

  ompi_non_mpi   bool   If set to true, launching a non-MPI
                        application; the returned communicator
                        will be MPI_COMM_NULL. Failure to set
                        this flag when launching a non-MPI
                        application will cause both the child
                        and parent jobs to "hang".
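
With mpi4py that would look something like the following (an untested
sketch based on the man page entry above; the keyword name for the info
argument in mpi4py is my assumption):

# Untested sketch: pass the ompi_non_mpi info key when spawning a non-MPI program.
from mpi4py import MPI

info = MPI.Info.Create()
info.Set("ompi_non_mpi", "true")

# With ompi_non_mpi set, the returned communicator should be MPI.COMM_NULL,
# so there is nothing to Disconnect() afterwards.
MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello", "World!"], maxprocs=1, info=info)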


> On Oct 9, 2015, at 2:15 AM, Lisandro Dalcin  wrote:
> 
> On 9 October 2015 at 12:05, simona bellavista wrote:
>> 
>> 
>> 2015-10-09 9:40 GMT+02:00 Lisandro Dalcin :
>>> 
>>> On 8 October 2015 at 14:54, simona bellavista  wrote:
 
>>> 
> 
> I cannot figure out how spawn would work with a string-command. I tried
> MPI.COMM_SELF.Spawn(cmd, args=None,maxproc=4) and it just hangs
 
>>> 
>>> MPI.COMM_SELF.Spawn("/bin/echo", args=["Hello",
>>> "World!"],maxprocs=1).Disconnect()
>>> 
>>> Could you try the line above and confirm whether it hangs?
>> 
>> 
>> I have tried the line above and it hangs
>> 
> 
> OK, as "echo" is not an MPI application, then it seems OpenMPI does
> not support spawning.
> 
>>> 
 
 I couldn't figure out how to run Spawn with a string-like command, in
 fact
 the command that I want to run varies for each processor.
>>> 
>>> Use maxprocs=1 and make different spawn calls.
>>> 
>>> However, I have to insist. If you are using mpi4py as a tool to spawn
>>> a bunch of different processes that work in isolation and then collect
>>> result at the end, then mpi4py is likely not the right tool for the
>>> task, at least if you do not have previous experience with MPI
>>> programming.
>>> 
>> Well, I don't have a big experience in MPI programming, but I do use and
>> modify existing MPI codes, and I thought MPI would be easiest choice.
> 
> Have you seen these existing MPI codes calling back to the shell to
> execute commands?
> 
>> Clustershells looks a bit an overshoot for the goal I would like to achieve.
>> What shall I use instead? Shall I try multiprocessing module?
>> 
> 
> As long as running on a single compute node is many cores is enough
> for your application, there is no reason to use MPI. Python's
> multiprocessing of perhaps the Python 3 "concurrent.futures" package
> (there is a backport for Python 2 on PyPI) would be trivial to get
> working.


Re: [OMPI users] Hybrid OpenMPI+OpenMP tasks using SLURM

2015-10-09 Thread Marcin Krotkiewski

Ralph,

Here is the result running

mpirun --map-by slot:pe=4 -display-allocation ./affinity

==   ALLOCATED NODES   ==
c12-29: slots=4 max_slots=0 slots_inuse=0 state=UP
=
rank 0 @ compute-12-29.local  1, 2, 3, 4, 17, 18, 19, 20,

I also attach the output with --mca rmaps_base_verbose 10. It says 4 slots
all over the place, so it is really weird that it does not work.


Thanks!

Marcin



[login-0-1.local:30710] mca: base: components_register: registering rmaps components
[login-0-1.local:30710] mca: base: components_register: found loaded component round_robin
[login-0-1.local:30710] mca: base: components_register: component round_robin register function successful
[login-0-1.local:30710] mca: base: components_register: found loaded component rank_file
[login-0-1.local:30710] mca: base: components_register: component rank_file register function successful
[login-0-1.local:30710] mca: base: components_register: found loaded component seq
[login-0-1.local:30710] mca: base: components_register: component seq register function successful
[login-0-1.local:30710] mca: base: components_register: found loaded component resilient
[login-0-1.local:30710] mca: base: components_register: component resilient register function successful
[login-0-1.local:30710] mca: base: components_register: found loaded component staged
[login-0-1.local:30710] mca: base: components_register: component staged has no register or open function
[login-0-1.local:30710] mca: base: components_register: found loaded component mindist
[login-0-1.local:30710] mca: base: components_register: component mindist register function successful
[login-0-1.local:30710] mca: base: components_register: found loaded component ppr
[login-0-1.local:30710] mca: base: components_register: component ppr register function successful
[login-0-1.local:30710] [[61064,0],0] rmaps:base set policy with slot:pe=4
[login-0-1.local:30710] [[61064,0],0] rmaps:base policy slot modifiers pe=4 provided
[login-0-1.local:30710] [[61064,0],0] rmaps:base check modifiers with pe=4
[login-0-1.local:30710] [[61064,0],0] rmaps:base setting pe/rank to 4
[login-0-1.local:30710] mca: base: components_open: opening rmaps components
[login-0-1.local:30710] mca: base: components_open: found loaded component round_robin
[login-0-1.local:30710] mca: base: components_open: component round_robin open function successful
[login-0-1.local:30710] mca: base: components_open: found loaded component rank_file
[login-0-1.local:30710] mca: base: components_open: component rank_file open function successful
[login-0-1.local:30710] mca: base: components_open: found loaded component seq
[login-0-1.local:30710] mca: base: components_open: component seq open function successful
[login-0-1.local:30710] mca: base: components_open: found loaded component resilient
[login-0-1.local:30710] mca: base: components_open: component resilient open function successful
[login-0-1.local:30710] mca: base: components_open: found loaded component staged
[login-0-1.local:30710] mca: base: components_open: component staged open function successful
[login-0-1.local:30710] mca: base: components_open: found loaded component mindist
[login-0-1.local:30710] mca: base: components_open: component mindist open function successful
[login-0-1.local:30710] mca: base: components_open: found loaded component ppr
[login-0-1.local:30710] mca: base: components_open: component ppr open function successful
[login-0-1.local:30710] mca:rmaps:select: checking available component round_robin
[login-0-1.local:30710] mca:rmaps:select: Querying component [round_robin]
[login-0-1.local:30710] mca:rmaps:select: checking available component rank_file
[login-0-1.local:30710] mca:rmaps:select: Querying component [rank_file]
[login-0-1.local:30710] mca:rmaps:select: checking available component seq
[login-0-1.local:30710] mca:rmaps:select: Querying component [seq]
[login-0-1.local:30710] mca:rmaps:select: checking available component resilient
[login-0-1.local:30710] mca:rmaps:select: Querying component [resilient]
[login-0-1.local:30710] mca:rmaps:select: checking available component staged
[login-0-1.local:30710] mca:rmaps:select: Querying component [staged]
[login-0-1.local:30710] mca:rmaps:select: checking available component mindist
[login-0-1.local:30710] mca:rmaps:select: Querying component [mindist]
[login-0-1.local:30710] mca:rmaps:select: checking available component ppr
[login-0-1.local:30710] mca:rmaps:select: Querying component [ppr]
[login-0-1.local:30710] [[61064,0],0]: Final mapper priorities
[login-0-1.local:30710] Mapper: ppr Priority: 90
[login-0-1.local:30710] Mapper: seq Priority: 60
[login-0-1.local:30710] Mapper: resilient Priority: 40
[login-0-1.local:30710] Mapper: mindist Priority: 20
[login-0-1.local:30710] Mapper: round_robin Priority: 10
[login-0-1.local:30710] 

Re: [OMPI users] Hybrid OpenMPI+OpenMP tasks using SLURM

2015-10-09 Thread Ralph Castain
Actually, you just confirmed the problem for me. You are correct in that it 
says 4 slots. However, if you then tell us pe=4, we will consume all 4 of those 
slots with the very first process.

What we needed to see was SLURM assigning us 16 slots to correspond to the
16 cpus. Instead, it is trying to tell us to launch only 4 procs, but to use 16
cpus as if they belong to us. This is where the confusion is coming from -
could be that something in the slurm envar syntax changed, or something else did, as
I seem to recall we handled this okay before (but I could be wrong).
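
In other words (just a back-of-the-envelope illustration of the accounting,
not actual Open MPI code):

# Illustration only -- not Open MPI source.
slots_reported = 4      # what the allocation above shows: slots=4
cpus_per_rank = 4       # from --map-by slot:pe=4
ranks_mapped = slots_reported // cpus_per_rank   # = 1: the first rank consumes all 4 slots
# To get 4 ranks with 4 cpus each, the allocation would have to report
# slots=16, i.e. one slot per cpu.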

Fixing that will take some time that I honestly won’t have for awhile.


> On Oct 9, 2015, at 6:14 AM, Marcin Krotkiewski  
> wrote:
> 
> Ralph,
> 
> Here is the result running
> 
> mpirun --map-by slot:pe=4 -display-allocation ./affinity
> 
> ==   ALLOCATED NODES   ==
>c12-29: slots=4 max_slots=0 slots_inuse=0 state=UP
> =
> rank 0 @ compute-12-29.local  1, 2, 3, 4, 17, 18, 19, 20,
> 
> I also attach output with --mca rmaps_base_verbose 10. It says 4 slots all 
> over the place, so it is really weird it should not work.
> 
> Thanks!
> 
> Marcin
> 
> 
> [rmaps_base_verbose output snipped; see the full log in the previous message]

Re: [OMPI users] Hybrid OpenMPI+OpenMP tasks using SLURM

2015-10-09 Thread Marcin Krotkiewski


Thank you, Ralph. The world can wait, no problem :)

Marcin


On 10/09/2015 03:27 PM, Ralph Castain wrote:

Actually, you just confirmed the problem for me. You are correct in that it 
says 4 slots. However, if you then tell us pe=4, we will consume all 4 of those 
slots with the very first process.

What we need to see was that slurm was assigning us 16 slots to correspond to 
16 cpus. Instead, it is trying to tell us to launch only 4 procs, but to use 16 
cpus as if they belong to us. This is where the confusion is coming from - 
could be something in the slurm envar syntax changed, or something else did as 
I seem to recall we handled this okay before (but I could be wrong).

Fixing that will take some time that I honestly won’t have for awhile.



On Oct 9, 2015, at 6:14 AM, Marcin Krotkiewski  
wrote:

Ralph,

Here is the result running

mpirun --map-by slot:pe=4 -display-allocation ./affinity

==   ALLOCATED NODES   ==
c12-29: slots=4 max_slots=0 slots_inuse=0 state=UP
=
rank 0 @ compute-12-29.local  1, 2, 3, 4, 17, 18, 19, 20,

I also attach output with --mca rmaps_base_verbose 10. It says 4 slots all over 
the place, so it is really weird it should not work.

Thanks!

Marcin



[rmaps_base_verbose output snipped; see the full log earlier in the thread]