[removing all lists except ZFS-discuss, as this is really pertinent only there]

ольга крыжановская wrote:
> Are there plans to reduce the memory usage of ZFS in the near future?
>
> Olga
>
> 2010/4/2 Alan Coopersmith <alan.coopersm...@oracle.com>:
>> ольга крыжановская wrote:
>>> Does Opensolaris have an option to install without ZFS, i.e. use UFS
>>> for root like SXCE did?
>>
>> No.  beadm & pkg image-update rely on ZFS functionality for the root
>> filesystem.
>>
>> --
>>        -Alan Coopersmith-        alan.coopersm...@oracle.com
>>         Oracle Solaris Platform Engineering: X Window System
The vast majority of ZFS memory consumption is for caching, which can be manually reduced if it's impinging on your application. See the tuning guide for more info:

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

As pointed out elsewhere, these tunables generally set high-water marks: ZFS will return RAM to the system when applications need it. So, in your original problem, the likely cause is /not/ that ZFS is consuming RAM and refusing to release it, but rather that your other applications are overloading the system.
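As a concrete illustration of that tuning, the usual knob is zfs_arc_max in /etc/system (the 1 GB value below is purely illustrative; pick a cap suited to your workload, and note a reboot is required for /etc/system changes to take effect):

```
* /etc/system fragment: cap the ZFS ARC high-water mark at 1 GB
* (0x40000000 bytes -- example value only)
set zfs:zfs_arc_max = 0x40000000
```

Remember this only lowers the ceiling on caching; it does not reserve that memory for ZFS.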


That said, there are certain minimum allocations that can't be reduced and must be held in RAM, but they're not generally significant. From a kernel standpoint, UFS's memory usage is not measurably different from ZFS's; it's all the caching that makes ZFS look like a RAM pig.

One thing, though: taking away all of ZFS's caching hurts performance more than removing all of UFS's file cache, because ZFS stores more than plain file data in its cache (the ARC) -- metadata and other structures live there too.
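If you want to see how big the ARC actually is on a given box (rather than guessing from overall free memory), the arcstats kstats report it directly. A quick sketch, using the standard Solaris kstat(1M) statistic names:

```
# Current ARC size and its current target, in bytes
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c
```

Watching "size" shrink under application memory pressure is a good way to convince yourself the cache really is being given back.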

Realistically speaking, I can't see running ZFS on a machine with less than 1GB of RAM. I also can't see modifying ZFS to work well in such circumstances, as (a) ZFS isn't targeted at such limited platforms and (b) you'd seriously compromise a major chunk of performance trying to make it fit. These days, 4GB is really more of a minimum for a 64-bit machine/OS in any case.

I certainly would be interested in seeing what a large L2ARC cache would mean for reduction in RAM footprint; on one hand, having an L2ARC requires ARC (i.e. DRAM) allocations for each entry in the L2ARC, but on the other hand, it would reduce/eliminate storage of actual data and metadata in DRAM.

Anyone up for running tests on a box with, say, 512MB of RAM and a 10GB+ L2ARC (on, say, an SSD)?
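For anyone game to try, attaching an SSD as an L2ARC cache device is a one-liner; the pool and device names below are hypothetical examples, so substitute your own:

```
# Add an SSD as a cache (L2ARC) device to pool "tank"
zpool add tank cache c2t0d0

# Confirm it shows up under the "cache" heading
zpool status tank
```

Cache devices can also be removed again with "zpool remove", so the experiment is easy to undo.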

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss