Mag Gam wrote:
Hi John:
Well, we run a lot of statistical analysis, and our code loads all the
data into a vector for fast calculations. I am not sure how else to do
these calculations quickly without loading everything into memory. That's
why we have to do it this way.
Well, if you've got several processes that each need 32 GB on a 64 GB
machine, you're going to end up swapping.
The traditional way of doing this sort of thing on limited-memory
machines was to take a sequential pass through the data, calculating the
statistics on the fly (see the sketch below). I know that approach is
very difficult for some algorithms (FFTs are notorious for being
unfriendly to sequential processing), but for many others, a few
sequential passes can be /faster/ than random access plus swapping when
there's memory/process contention.
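
One classic way to do that kind of single-pass calculation is Welford's
online algorithm, which keeps a running mean and a running sum of squared
deviations in constant memory. Here's a minimal Python sketch of the idea;
the file name "data.txt" and the one-float-per-line format are just
assumptions for illustration, not anything from your setup:

# One-pass ("on the fly") statistics via Welford's online algorithm:
# mean and sample variance in O(1) memory, without ever loading the
# whole dataset into a vector.
def stream_stats(path):
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    with open(path) as f:
        for line in f:
            x = float(line)
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
    variance = m2 / (n - 1) if n > 1 else float("nan")
    return n, mean, variance

if __name__ == "__main__":
    n, mean, var = stream_stats("data.txt")
    print("n=%d  mean=%g  sample variance=%g" % (n, mean, var))

Since it only ever holds one value at a time, memory use stays flat no
matter how large the input file is, and the sequential read pattern is
friendly to the disk even when several such jobs run at once.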