Greetings, all. Does anyone have a good whitepaper or three on how ZFS uses memory and swap? I did some Googling but found nothing useful.
The reason I ask is that we have a small issue with some of our DBAs. We have a server with 16GB of memory, and they are looking at moving databases over to it from a smaller system. The catch is that they are moving to 10g, and Oracle suggests a 2GB SGA. They are using top to determine how many databases they can fit on the server (I know, I know... not the right tool), based on top's reporting of memory and swap usage.

As a test, they fired up a single database with a 10GB SGA "to simulate five 2GB databases" while running top. The system could not de-allocate the memory from the ZFS (and any other) cache, re-allocate it to the database, and start the database, all in under 2 minutes. A few moments later (we didn't get any times from their top screenshots), the 10GB DB was able to start.

What I'm basically looking for is information, and perhaps the best use of vmstat et al., to show them that the server can indeed handle several databases started up in a realistic manner, and to explain to them how memory is used by ZFS and released when other applications require it.

I know this is a bit of a strange one, but these DBAs seem to insist that everything work the same as it did with UFS under Solaris 8 (and that top is the Holder of the Truth(tm)), and we need to prove to them with well-reasoned arguments that the changes to memory management and usage in Solaris 10 and ZFS do not stop their databases from running properly. The fact that we have other databases running quite happily on other systems using ZFS, including 12 really big databases on a 32GB V880, seems to be irrelevant.

Thank you all for any help you can provide.

Rainer
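For anyone facing the same argument: the core misunderstanding is that top's "free" column on Solaris 10 excludes memory held by the ZFS ARC, even though the ARC shrinks on demand when applications need pages. A minimal back-of-the-envelope sketch, using hypothetical numbers (the overhead, ARC size, and ARC floor below are assumptions for illustration, not measurements from this server):

```python
# Illustrative sizing sketch. All figures except the 16GB total and the
# 2GB SGA are ASSUMED values, chosen only to show why top's "free"
# column understates what is really available on a ZFS system:
# available ~= free + reclaimable ARC, since the ARC releases memory
# under pressure (down to an assumed floor, zfs_arc_min).

GB = 1024 ** 3

total_ram  = 16 * GB        # from the post
kernel_etc = 2 * GB         # assumed kernel/other overhead
arc_size   = 10 * GB        # assumed: ARC has grown into idle memory
arc_min    = 1 * GB         # assumed ARC floor

top_free         = total_ram - kernel_etc - arc_size  # what top shows as "free"
reclaimable_arc  = arc_size - arc_min                 # ARC gives this back on demand
really_available = top_free + reclaimable_arc

sga_per_db = 2 * GB                            # Oracle-suggested SGA, from the post
dbs_by_top = top_free // sga_per_db            # the DBAs' pessimistic estimate
dbs_actual = really_available // sga_per_db    # what the box can actually start

print(f"top 'free':       {top_free / GB:.0f} GB -> {dbs_by_top} databases")
print(f"really available: {really_available / GB:.0f} GB -> {dbs_actual} databases")
```

With these assumed numbers, top suggests room for only 2 databases while the box can really accommodate 6, which matches the observed behavior: the 10GB SGA eventually started once the ARC had given memory back, it just took longer than the 2 minutes the DBAs were willing to watch.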