Hi all, this some kind of Off-Topic since it has nothing to do with LFS at all. I send this to this list because I know that there are quite a lot geeks and experienced people out there who may have a hint for me.
Have you ever seen a machine which has too much RAM installed? We have a brand new HP (DL380G6) server here with 48G RAM installed. It has an p410i and a p800 SAS controller connected to a MSA70 with 1TB usable disk space. There are 2 Quad-Core-Xeon (X5560) with hyperthreading (a total of 16 CPUs). As an OS there is s SuSE SLES10 installed and as DB-Software (because of which we have this machine) Oracle10g. The problem: When we create a file which is 32GByte large (for instance using dd if=/dev/null of=test.dat...) the copy process runs very fast up to round about 29-30GB slows down quite immediatly and the workload increases up to loadavg=35 or so. It finally results in a non-responding machine where we can only press the reset-button. What we did: Adding the mentioned 30GB plus the rest what the OS itself is using it adds up to quite exact 32GB. We removed 6 of the 12 RAM modules (every has 4GB) so that the machine now has less that 32GB RAM (exact 24GB). The dd command runs quite fast until the file was created and nothing special is seen on the machine - everything worked as expected. Do you have an idea what the 32GB-limit could be? Or what could make the machine to behave the way it does? Our next step will be to test that all with the 64-bit stuff but it should work on 32bit too, isn't it? Any ideas? Thank you! Thomas -- http://linuxfromscratch.org/mailman/listinfo/lfs-dev FAQ: http://www.linuxfromscratch.org/faq/ Unsubscribe: See the above information page