Hi all, I might be misunderstanding things, but...
First of all, machines with long pipelines (the P4 in this case) will suffer more from cache misses. Depending on the size copied (I don't know how large the copies are), can't one run out of cache lines and/or evict more useful cached data? That is, if the source is cached from beginning to end, we generally only need some of the beginning; the CPU's prefetcher should handle the rest.

As I said, I might not know everything about this (and I'm also running a fever), but I still find Hiro's data interesting. Isn't there some way to run the same test over the same period and measure the difference in overall performance, to see whether we really are punished that badly when accessing the data after the copy? (Could it be size dependent?)

-- 
Ian Kumlien <pomac () vapor ! com> -- http://pomac.netswarm.net