An interesting article on using GPUs for general-purpose computation:
http://www.linux-mag.com/microsites.php?site=business-class-hpc&sid=build&p=4543

In (current mainstream) computer go there are two main CPU-bound
algorithms: playouts (random, or incorporating logic or patterns) and
tactical search. But GPUs seem even more restricted in what they can
do than the PS3 Cell processors (where the main restriction was only
the small amount of local memory). E.g. from the above article:
    * No stack or heap
    * No integer or bit-wise operations
    * No scatter operations (a[i]=b)
    * No reduction operations (max(), min(), sum())
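
To make the mismatch concrete, here is a rough sketch (not taken from
any real engine; legality checks, eye detection, captures and scoring
are all left out) of a random playout inner loop. Note the scatter
write (board[pt] = colour), the integer arithmetic and the
data-dependent branch, i.e. exactly the operations the article says
are missing:

    /* Rough playout-loop sketch; everything go-specific is omitted. */
    #include <stdlib.h>

    #define BOARDSIZE 19
    #define POINTS    ((BOARDSIZE + 2) * (BOARDSIZE + 2)) /* padded 1-D board */
    #define EMPTY 0
    #define BLACK 1
    #define WHITE 2

    static int board[POINTS];

    static void random_playout(int colour, int max_moves)
    {
        for (int m = 0; m < max_moves; m++) {
            int pt = rand() % POINTS;         /* integer op               */
            if (board[pt] != EMPTY)           /* data-dependent branch    */
                continue;
            board[pt] = colour;               /* scatter write: a[i] = b  */
            colour = BLACK + WHITE - colour;  /* swap colour, integer op  */
        }
    }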

On the other hand, this quote [1] from the Sh language page says it
has full for/if constructs, which the latest GPUs support to some
extent.
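
Branching alone wouldn't be enough for tactical search though: a
minimal negamax-style sketch like the one below also needs recursion
(i.e. a call stack) and a max() reduction over the child scores, both
on the "missing" list above. Position and the helper functions are
placeholder stubs, not from any real go program:

    #define INF       1000000
    #define MAX_MOVES 362

    typedef struct { int to_move; } Position;  /* placeholder only */

    /* Stubs so the sketch compiles; a real program does the work here. */
    static int  generate_moves(Position *p, int *mv) { (void)p; (void)mv; return 0; }
    static void make_move(Position *p, int mv)       { (void)p; (void)mv; }
    static void undo_move(Position *p, int mv)       { (void)p; (void)mv; }
    static int  evaluate(const Position *p)          { (void)p; return 0; }

    static int negamax(Position *pos, int depth)
    {
        if (depth == 0)
            return evaluate(pos);              /* leaf evaluation       */

        int moves[MAX_MOVES];
        int n = generate_moves(pos, moves);    /* data-dependent count  */
        int best = -INF;

        for (int i = 0; i < n; i++) {
            make_move(pos, moves[i]);
            int s = -negamax(pos, depth - 1);  /* recursion => stack    */
            undo_move(pos, moves[i]);
            if (s > best)                      /* max() reduction       */
                best = s;
        }
        return (n == 0) ? evaluate(pos) : best;
    }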

Is there anyone here who both understands the above go algorithms
*and* has experience programming GPUs, and can confirm that they are
not really useful for this?

Darren


[1]: From http://www.libsh.org/about.html
Sh incorporates full language constructs for branching (e.g. for loops
and if statements). Once GPUs are powerful enough to execute such
constructs (which, to some extent, is true today) backends can be
adjusted to compile such code to real hardware assembly. In the mean
time our GPU simulator Sm implements various features expected to be in
GPUs in the near future, such as a unified vertex and fragment
instruction set.


-- 
Darren Cook
http://dcook.org/mlsn/ (English-Japanese-German-Chinese free dictionary)
http://dcook.org/work/ (About me and my work)
http://dcook.org/work/charts/  (My flash charting demos)