Mario Goebbels wrote:
An introduction to btrfs, from somebody who used to work on ZFS:

http://www.osnews.com/story/21920/A_Short_History_of_btrfs
A *very* interesting article. Not sure why James didn't link to it
directly, but it's courtesy of Valerie Aurora (formerly Henson):

http://lwn.net/Articles/342892/

I'm trying to understand the argument against the SLAB allocator approach. If I understand correctly how btrfs allocates space, changing and deleting files may just punch randomly sized holes into the disk layout. How is that better?

It's an interesting article, for sure.  The core of the article is actually
how a solution (b+trees with copy-on-write) found a problem (file systems).
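
For anyone who hasn't seen copy-on-write trees before, the trick is that
a node is never modified in place: you clone it, change the clone, and
repoint the parent at the clone, so the old version stays readable (which
is what makes snapshots nearly free).  A toy sketch in C, where the struct
and names are mine and have nothing to do with the real btrfs code:

    /* Toy copy-on-write update of a tree node. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NKEYS 4

    struct node {
            int keys[NKEYS];
            int nkeys;
    };

    /* Clone the node, modify the clone, hand it back; the caller
     * repoints the parent.  The old node is never touched. */
    static struct node *cow_insert(const struct node *old, int key)
    {
            struct node *copy = malloc(sizeof(*copy));
            memcpy(copy, old, sizeof(*copy));
            if (copy->nkeys < NKEYS)
                    copy->keys[copy->nkeys++] = key;
            return copy;
    }

    int main(void)
    {
            struct node *root = calloc(1, sizeof(*root));
            struct node *snap = root;        /* "snapshot" of the old root */
            root = cow_insert(root, 42);     /* the live tree moves on */
            printf("snapshot has %d keys, live tree has %d\n",
                   snap->nkeys, root->nkeys);
            free(snap);
            free(root);
            return 0;
    }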

To answer the question, the article claims that reallocation is part of the
normal process of writing data:

        > Defragmentation is an ongoing process - repacking the items
        > efficiently is part of the normal code path preparing extents to be
        > written to disk. Doing checksums, reference counting, and other
        > assorted metadata busy-work on a per-extent basis reduces overhead
        > and makes new features (such as fast reverse mapping from an extent
        > to everything that references it) possible.
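
The way I read the per-extent part: the checksum, the reference count
and the back references hang off one variable-length extent instead of
every fixed-size block, so the bookkeeping scales with the number of
extents rather than the number of blocks, and the back reference per
extent is what buys the fast reverse mapping.  A made-up record to show
the idea (field names invented, this is not the on-disk format):

    /* Invented per-extent metadata record, for illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    struct extent_rec {
            uint64_t start;     /* disk offset of the extent */
            uint64_t len;       /* variable length in bytes */
            uint32_t csum;      /* one checksum for the whole extent */
            uint32_t refcount;  /* how many owners reference it */
            uint64_t backrefs;  /* handle to the "who points here" list */
    };

    int main(void)
    {
            /* One record covers a 128K extent; per-4K-block metadata
             * would need 32 records to describe the same data. */
            struct extent_rec r = { 0, 128 * 1024, 0, 1, 0 };
            printf("1 extent record vs %llu block records\n",
                   (unsigned long long)(r.len / 4096));
            return 0;
    }

If that reading is right, it also seems to answer the hole-punching
worry above: the holes don't accumulate, because the items around them
get repacked every time the surrounding extents are rewritten.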

It sure suggests what is happening, but I haven't got a clue how the above
makes a difference.  Translating this to the ZFS design, I guess it means
delaying the block layout until the actual txg I/O phase, whereas ZFS
decides a block's layout as soon as the block enters the txg.

That early decision is what allows blocks to be dumped to a slog device as
soon as they become available.
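
To make the timing difference concrete, here is a toy model; the txg
queue and the allocator are stand-ins I invented, not real ZFS or btrfs
code.  In the early path the address is final the moment the block joins
the txg, so a record naming it can be pushed to the slog right away; in
the late path placement waits for the write-out phase, which is then
free to repack all the dirty extents back to back:

    /* Toy model of early vs. late block placement. */
    #include <stdio.h>
    #include <stdint.h>

    #define MAXQ 8

    static uint64_t next_free;   /* trivial bump allocator */

    struct pending {
            uint64_t len;
            uint64_t addr;
    };

    /* "ZFS-style": the address is chosen when the block enters the
     * txg, so it is already known long before the txg syncs. */
    static void enqueue_early(struct pending *p, uint64_t len)
    {
            p->len = len;
            p->addr = next_free;
            next_free += len;
    }

    /* "btrfs-style", as I read the article: placement is deferred to
     * write-out, which can lay all dirty extents out contiguously no
     * matter what order they were dirtied in. */
    static void flush_late(struct pending q[], int n)
    {
            for (int i = 0; i < n; i++) {
                    q[i].addr = next_free;
                    next_free += q[i].len;
            }
    }

    int main(void)
    {
            struct pending early;
            enqueue_early(&early, 4096);
            printf("early: addr %llu fixed at txg entry\n",
                   (unsigned long long)early.addr);

            struct pending q[MAXQ] = { { 8192, 0 }, { 4096, 0 } };
            flush_late(q, 2);
            printf("late: addrs %llu and %llu chosen at sync time\n",
                   (unsigned long long)q[0].addr,
                   (unsigned long long)q[1].addr);
            return 0;
    }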

Cheers,
Henk