Hi All,

Has anyone seen problems with the ARC holding on to memory under
memory-pressure conditions?

We have several Oracle DB servers running ZFS for the root file
systems, with the databases on VxFS.

An unexpected number of clients connected and caused a memory shortage
severe enough that some processes were swapped out.

The system partially recovered, with around 1 GB free, but the ARC
was still around 9-10 GB.
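For anyone wanting to check this on their own boxes: the ARC exposes its size and targets through the `arcstats` kstat on Solaris/OpenSolaris. A minimal sketch (statistic names as reported by `kstat -n arcstats`):

```shell
# Current ARC size in bytes
kstat -p zfs:0:arcstats:size

# Current target size and configured maximum target
kstat -p zfs:0:arcstats:c
kstat -p zfs:0:arcstats:c_max
```

Watching `size` against `c` during the pressure event would show whether the ARC was actually shrinking its target or just holding steady.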

It appears that the ARC didn't release memory as fast as it was
reclaimed from processes and other consumers.

As a workaround we have capped the ARC maximum (zfs_arc_max) at 2 GB.
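For reference, this is the sort of cap we applied; a sketch assuming the standard Solaris tunable (value in bytes, 0x80000000 = 2 GB):

```
# /etc/system -- cap the ARC at 2 GB (takes effect at next boot)
set zfs:zfs_arc_max = 0x80000000
```

On a live system the same variable can be patched with mdb (`echo "zfs_arc_max/Z 0x80000000" | mdb -kw`), though the ARC only shrinks down to the new limit gradually as reclaim occurs.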

Shouldn't the ARC be reclaimed in preference to active process memory?
Having two competing systems reclaiming memory does not make sense to
me, and seems to result in the strange situation of a memory shortage
alongside a large ARC.

Also, would it be better if the ARC minimum (zfs_arc_min) were based
on the size of the ZFS file systems rather than a percentage of total
memory? 3 or 4 GB minimums seem huge!

Thanks

Peter
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss