A document from two weeks ago in which they at least write *something*;
not bad from nvidia, considering they will soon have to give lessons to topcoders :)

It's not really a systematic approach, though. We want a list of all instructions with the latency and throughput of each. The lookup times for the caches, shared memory and RAM would also be great to know, even if only as bandwidth numbers.
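
For lack of such a table, here is a rough micro-benchmark sketch of my own (assuming the CUDA 2.3 toolchain; the file, kernel and variable names are just placeholders I made up) that times a dependent chain of additions versus shifts with the on-chip clock() counter:

// Timing sketch (mine, not from the Guide): measure a dependent chain of
// integer additions versus shifts with the per-multiprocessor clock().
// Only the relative numbers mean anything, and one should check the
// generated PTX (nvcc -ptx) to be sure the loops were not optimized away.
#include <cstdio>
#include <cuda_runtime.h>

#define N_ITER 1000000

__global__ void time_add(unsigned int *sink, unsigned int *cycles)
{
    unsigned int x = threadIdx.x + 1;          // runtime value, not a constant
    clock_t start = clock();
    for (int i = 0; i < N_ITER; ++i)
        x = x + 3;                             // dependent additions
    clock_t stop = clock();
    sink[0]   = x;                             // keep the result live
    cycles[0] = (unsigned int)(stop - start);
}

__global__ void time_shift(unsigned int *sink, unsigned int *cycles)
{
    unsigned int x = threadIdx.x + 1;
    clock_t start = clock();
    for (int i = 0; i < N_ITER; ++i)
        x = x << 1;                            // dependent shifts
    clock_t stop = clock();
    sink[0]   = x;
    cycles[0] = (unsigned int)(stop - start);
}

int main(void)
{
    unsigned int *d_sink, *d_cycles, h_cycles;
    cudaMalloc((void**)&d_sink,   sizeof(unsigned int));
    cudaMalloc((void**)&d_cycles, sizeof(unsigned int));

    time_add<<<1, 1>>>(d_sink, d_cycles);
    cudaMemcpy(&h_cycles, d_cycles, sizeof(unsigned int), cudaMemcpyDeviceToHost);
    printf("add   chain: %u cycles\n", h_cycles);

    time_shift<<<1, 1>>>(d_sink, d_cycles);
    cudaMemcpy(&h_cycles, d_cycles, sizeof(unsigned int), cudaMemcpyDeviceToHost);
    printf("shift chain: %u cycles\n", h_cycles);

    cudaFree(d_sink);
    cudaFree(d_cycles);
    return 0;
}

Compile with something like "nvcc timing.cu -o timing" (timing.cu being whatever you call the file); the ratio between the two cycle counts is the interesting part, not the absolute numbers.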

I do see references to sections B and C for a multiplication instruction and a memory fetch instruction that seem to exist, but I can't find those sections at all.

Yet nowhere in the document do I see which hardware instructions the Nvidia hardware supports.
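
The closest I can get myself, assuming the standard nvcc driver, is dumping the intermediate PTX for a kernel (mykernel.cu is just a placeholder filename):

nvcc -ptx -arch=sm_13 mykernel.cu -o mykernel.ptx

But PTX is only the virtual instruction set that ptxas translates further for the actual chip, so it still doesn't tell me which hardware instructions really exist.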

Mind giving a page number?

Vincent

On Sep 13, 2009, at 11:43 AM, Petr Baudis wrote:

On Sun, Sep 13, 2009 at 10:48:12AM +0200, Vincent Diepeveen wrote:

On Sep 13, 2009, at 10:19 AM, Petr Baudis wrote:
Just read the nVidia docs. Shifting has the same cost as addition.


Document number and url?

http://developer.download.nvidia.com/compute/cuda/2_3/toolkit/docs/NVIDIA_CUDA_Programming_Guide_2.3.pdf

P.S.: The PTX assembler is also documented. Or did you mean some
secret undocumented instruction goodies?

http://www.nvidia.com/object/io_1213955209837.html

--
                                Petr "Pasky" Baudis
A lot of people have my books on their bookshelves.
That's the problem, they need to read them. -- Don Knuth

_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
