Hi all,

I'm not a developer, but I have an engineering background and a strong 
interest in performance and optimization.  A recent Slashdot post really 
piqued my interest: it references an ACM Queue article that challenges 
some conventional wisdom about algorithm performance once memory paging is 
taken into account.  Since file systems, and ZFS in particular, deal with 
block hierarchies, I was curious whether the principles discussed in the 
article might have substantial performance implications for ZFS or dedup (e.g. 
in reducing the DDT memory thrashing and performance hit that can occur 
when deleting large datasets from a deduped pool).

The ACM Queue article URL is below:
http://queue.acm.org/detail.cfm?id=1814327
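For what it's worth, here is a toy sketch (not ZFS code, just my attempt to 
illustrate the article's point) of why paging matters for tree-shaped 
structures.  In a classic array-layout binary heap, the top few levels share 
a page, but every level deeper than about log2(entries-per-page) lands on its 
own page, so a single leaf-to-root walk can touch one page per level.  The 
page size of 512 entries below is an assumption (4 KiB pages, 8-byte entries):

```python
PAGE = 512  # entries per VM page (assumed: 4 KiB pages / 8-byte entries)

def pages_on_path(leaf):
    """Count distinct pages touched walking from a leaf to the root
    in a standard 1-based array binary-heap layout (parent of i is i // 2)."""
    pages = set()
    i = leaf
    while i >= 1:
        pages.add(i // PAGE)  # which page this array slot lives on
        i //= 2
    return len(pages)

# With ~10 million entries (~24 levels), the top 9 levels fit on page 0,
# but each of the remaining ~15 levels sits on a different page:
print(pages_on_path(10_000_000))  # 16 distinct pages for one walk
```

The article's fix (a "B-heap" that packs whole subtrees into a page) makes 
that walk touch only a handful of pages, which is the sort of effect I was 
wondering about for the DDT.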

Any thoughts?  Thanks in advance.

Richard Bruce
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
