I've finally returned to this dedup testing project and am trying to get a handle on why performance is so terrible. At the moment I'm re-running the tests and monitoring memory_throttle_count, to see whether that's what's limiting throughput.
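In case it helps anyone reproduce what I'm seeing, this is roughly how I'm watching that counter. It just polls kstat every few seconds and prints the delta; the zfs:0:arcstats path is where I find memory_throttle_count on my box, so adjust if yours differs:

    import subprocess
    import time

    # Poll the ARC "memory_throttle_count" kstat every 10 seconds and print
    # the delta since the previous sample, so a rising count can be lined up
    # with the slow stretches of the test run.
    STAT = "zfs:0:arcstats:memory_throttle_count"

    def read_count():
        out = subprocess.Popen(["kstat", "-p", STAT],
                               stdout=subprocess.PIPE).communicate()[0]
        # kstat -p output: "zfs:0:arcstats:memory_throttle_count<TAB><value>"
        return int(out.decode().split()[-1])

    prev = read_count()
    while True:
        time.sleep(10)
        cur = read_count()
        print("memory_throttle_count: +%d (total %d)" % (cur - prev, cur))
        prev = cur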
But while that's in progress, and while I'm still thinking: I assume the DDT must be stored on disk, in the regular pool, with each entry stored independently of every other entry, right? So whenever you perform new unique writes, you're creating new entries in the tree, and every so often the tree will need to rebalance itself. By any chance, is DDT entry creation treated as a sync write? If so, that could be hurting me: for every new unique block written, there may be a significant number of small random writes taking place just to support the actual data write (rough back-of-envelope below). Anyone have any knowledge to share along these lines? Thanks...
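To make the concern concrete, here's the rough back-of-envelope I've been carrying around. The per-entry metadata write count and sizes are pure guesses on my part (I haven't gone digging in the ZFS code), so treat it as an illustration of the shape of the problem rather than measured numbers:

    # Hypothetical overhead estimate: every new unique block inserts one DDT
    # entry, and if that insert dirties a handful of small metadata blocks
    # that are not already cached, the pool ends up doing several small
    # random writes per data block written.

    recordsize            = 128 * 1024   # bytes per unique data block
    meta_writes_per_entry = 3            # assumed metadata blocks dirtied per DDT insert
    meta_write_size       = 4 * 1024     # assumed size of each small metadata write

    data_gib_written = 1                 # 1 GiB of new, unique data
    blocks = data_gib_written * 1024 * 1024 * 1024 // recordsize
    extra_writes = blocks * meta_writes_per_entry
    extra_mib = extra_writes * meta_write_size / (1024.0 * 1024.0)

    print("%d unique blocks -> ~%d extra small writes (~%.1f MiB of metadata)"
          % (blocks, extra_writes, extra_mib))

With those (made-up) numbers, 1 GiB of unique data turns into roughly 24,000 extra small random writes, which would go a long way toward explaining the throughput I'm seeing if any of them are synchronous.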