Thanks a lot, Ron, that was a really nice response. It leaves no doubt about the choice between cpu and ssh. I would like to try it now, but it seems to me that I have authentication problems from the terminal, because when I run the cpu(1) command from the terminal (logged in as Armando) I get nothing, i.e.:

term% cpu -h NODE -c date
term%

Likewise, doing:

term% cpu -h NODE
term%

I get the same, and /mnt/term is empty; instead I think that cpu's name space should be mounted on /mnt/term, shouldn't it? Furthermore, I also checked lib/ndb/auth on the file server, and this is what I have:

hostid=bootes
	uid=!sys uid=!adm uid=*

I think that is correct, isn't it? Thank you very much for your patience,
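For anyone debugging the same symptom, a hedged sketch of what a working session should look like (NODE is a placeholder; the exact listing depends on your terminal's name space). Once cpu authenticates, you get a cpu% prompt, and on the cpu server /mnt/term holds the terminal's name space, so it should not be empty there:

```
term% cpu -h NODE
cpu% ls /mnt/term              # the terminal's root, seen from the cpu server
cpu% cat /mnt/term/dev/sysname # the terminal's system name
```

If cpu instead drops you straight back to the term% prompt with no cpu% prompt and no error, as above, authentication is likely failing silently before the connection is established.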
Armando.

> suppose you have a list of nodes
>
> cpu% NODES=(a b c d)
> cpu% echo $NODES
> a b c d
> cpu% for (i in $NODES) {
> cpu -h $i -c some-command&
> }
>
> Go ahead. Try it!
> for (i in $NODES) {
> cpu -h $i -c date&
> }
>
> OK, now suppose you have what in the high-end business is still called
> an 'input deck'. It's in a weird place. You get to it by saying
> some-command -i input-file
>
> for (i in $NODES) {
> cpu -h $i -c some-command -i your-file&
> }
>
> This will work whether there is a mount on those nodes for your home
> directory or not. Comes free with cpu.
>
> What if you, for whatever reason, want a ps to show all the processes on
> all the nodes you're running on.
>
> for (i in $NODES) {
> import -a $i /proc /proc
> }
>
> Your /proc is now the unified /proc of all your nodes. (I used to do
> this all the time with my Plan 9 minicluster.)
>
> That way, if you want to kill all the some-commands running on ALL your nodes:
> slay some-command | rc
>
> The point being that you only need to run this command on the
> front-end, not on each node.
>
> You just can't even try to do this sort of thing with ssh.
>
> ron
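For readers outside Plan 9, Ron's rc fan-out loop corresponds to the POSIX shell sketch below. This is only an illustration: NODES is an assumed example list, and run_on is a hypothetical stand-in for `cpu -h $i -c date&` so the sketch runs anywhere; it does not reproduce cpu's name-space semantics.

```shell
# Assumed example node list, mirroring NODES=(a b c d) in rc.
NODES="a b c d"

# Hypothetical stand-in for: cpu -h "$1" -c date &
run_on() {
    echo "date on $1"
}

# Fan the command out to every node in the background, then wait,
# just as the trailing & does in the rc loop.
for i in $NODES; do
    run_on "$i" &
done
wait
```

The structure is the same as the rc version: one loop on the front-end dispatches a background job per node; only the loop syntax and the quoting rules differ.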