Olivier Teytaud: <aa5e3c330911250119x5e01fa32w2e5f3db68704d...@mail.gmail.com>:
>> Even if the sum-up is done in a logarithmic time (with binary tree
>> style), the collecting time of all information from all nodes is
>> proportional to the number of nodes if the master node has few
>> communication ports, isn't it?
>>
>
>No (unless I misunderstood what you mean, sorry in that case)!
>Use a tree of nodes to aggregate information, and everything is
>logarithmic. This is implicitly done in MPI.
>
>If you have 8 nodes A, B, C, D, E, F, G, H,
>then
>(i) first layer
>A and B send information to B
>C and D send information to D
>E and F send information to F
>G and H send information to H
>(ii) second layer
>B and D send information to D
>F and H send information to H
>(iii) third layer
>D and H send information to H
>
>then do the same in the reverse order so that the accumulated information
>is sent back to all nodes.
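(In MPI terms, the layered aggregation Olivier describes is what a single
MPI_Allreduce call provides: a reduction tree followed by a broadcast, both
of logarithmic depth on typical implementations. A minimal sketch; the
per-move win/visit arrays, the 361-move size, and the function name
aggregate_statistics are illustrative assumptions, not anyone's actual code:)

/* Sketch: tree-style aggregation of MCTS statistics via MPI_Allreduce. */
#include <mpi.h>

#define NUM_MOVES 361  /* assumption: one win/visit counter pair per board point */

static long local_wins[NUM_MOVES],  local_visits[NUM_MOVES];
static long global_wins[NUM_MOVES], global_visits[NUM_MOVES];

/* Sum every node's counters so that every node ends up with the totals.
   MPI implementations typically run this as a logarithmic-depth reduction
   tree followed by a broadcast -- the two passes described above. */
static void aggregate_statistics(void)
{
    MPI_Allreduce(local_wins, global_wins, NUM_MOVES,
                  MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(local_visits, global_visits, NUM_MOVES,
                  MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* ... run playouts here, filling local_wins/local_visits ... */
    aggregate_statistics();
    MPI_Finalize();
    return 0;
}

(Whether the internal algorithm is exactly the binary tree above is
implementation-dependent, but the logarithmic number of steps is the point.)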
Interesting; the order is indeed almost logarithmic. But how long does it
take a packet to pass through each layer? I'm afraid the actual delay time
may increase.

>> By the way, have you experimented with not averaging but a just-adding
>> scheme? When I tested that, my code had some bugs and no success.
>>
>
>Yes, we have tested. Surprisingly, no significant difference. But I don't
>know if this would still hold today, as we have some pattern-based
>exploration. For a code with a score almost only depending on percentages,
>it's not surprising that averaging and summing are equivalent.

Right: averaging merely divides both the win total and the visit total by
the number of nodes, so every win rate is unchanged (a toy check follows
below the message). Simple adding has the advantage that no synchronization
step to sum up the statistics of all computers is required, so the time from
sending a statistics packet to receiving it and adding it in at the root
node is reduced. This advantage, however, may not be effective in MPI
environments, because the number of packets increases from N to N^2 if real
(i.e., UDP) broadcasting is not used. So it's not so surprising that there
was no significant difference in MPI environments. Ah, if a tree structure
is used to broadcast the packets, things may vary.

Thanks a lot,
Hideki
--
g...@nue.ci.i.u-tokyo.ac.jp (Kato)
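(The toy check mentioned above, with made-up numbers: whether the per-node
(win, visit) statistics are summed or averaged across the nodes, every
move's win rate comes out identical, because averaging divides numerator
and denominator by the same node count:)

/* Toy check (hypothetical numbers): summing vs. averaging per-node
   statistics yields the same win rate, because averaging divides both
   the win total and the visit total by the node count. */
#include <stdio.h>

int main(void)
{
    /* (wins, visits) for one move on each of 4 nodes -- made-up values */
    double wins[4]   = { 30, 45, 28, 51 };
    double visits[4] = { 60, 90, 70, 100 };
    double sw = 0, sv = 0;

    for (int i = 0; i < 4; i++) { sw += wins[i]; sv += visits[i]; }

    printf("summed:   %f\n", sw / sv);              /* 154/320 */
    printf("averaged: %f\n", (sw / 4) / (sv / 4));  /* same ratio */
    return 0;
}

(Both lines print 0.481250.)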