On Tue, Jul 12, 2011 at 7:41 AM, Eric Sproul <espr...@omniti.com> wrote:
> But that's exactly the problem-- ZFS being copy-on-write will
> eventually have written to all of the available LBA addresses on the
> drive, regardless of how much live data exists.  It's the rate of
> change, in other words, rather than the absolute amount that gets us
> into trouble with SSDs.  The SSD has no way of knowing what blocks

Most "enterprise" SSDs use something like 30% for spare area. So a
drive with 128MiB (base 2) of flash will have 100MB (base 10) of
available storage. A consumer level drive will have ~ 6% spare, or
128MiB of flash and 128MB of available storage. Some drives have 120MB
available, but still have 128 MiB of flash and therefore slightly more
spare area. Controllers like the Sandforce that do some dedup can give
you even more effective spare area, depending on the type of data.
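
For reference, here's the back-of-the-envelope arithmetic behind those
figures as a short Python sketch (the 128 GiB / 100 GB / 120 GB numbers
are just the examples above, not any particular drive's spec):

GIB = 2**30   # base-2 gibibyte: what the raw flash actually holds
GB = 10**9    # base-10 gigabyte: what the drive advertises

def spare_pct(raw_flash_gib, advertised_gb):
    """Percent of raw flash held back as spare area."""
    raw = raw_flash_gib * GIB
    usable = advertised_gb * GB
    return 100.0 * (raw - usable) / raw

print(round(spare_pct(128, 100), 1))   # ~27.2% -- "enterprise" style
print(round(spare_pct(128, 128), 1))   # ~6.9%  -- consumer style
print(round(spare_pct(128, 120), 1))   # ~12.7% -- 120 GB models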

When the OS starts reusing LBAs, the drive will remap them into new
flash blocks in the spare area and may perform garbage collection on
the now partially stale blocks. The effectiveness of this depends on
how quickly the system is writing and how full the drive is.
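
If it helps to picture it, here's a toy model of that remap-then-collect
behavior (purely illustrative Python, not any vendor's actual FTL; the
page count and threshold are made up):

from collections import defaultdict

PAGES_PER_BLOCK = 64   # made-up geometry
GC_THRESHOLD = 0.75    # collect a block once 75% of its pages are stale

class ToyFTL:
    """Toy flash translation layer: tracks LBA -> (block, page) only."""

    def __init__(self):
        self.map = {}                   # lba -> (block, page)
        self.stale = defaultdict(int)   # block -> count of superseded pages
        self.cursor = (0, 0)            # next free (block, page)

    def write(self, lba):
        old = self.map.get(lba)
        if old is not None:
            # Reused LBA: the old page is left in place and marked stale,
            # not overwritten.
            self.stale[old[0]] += 1
        self.map[lba] = self.cursor
        block, page = self.cursor
        if page + 1 < PAGES_PER_BLOCK:
            self.cursor = (block, page + 1)
        else:
            self.cursor = (block + 1, 0)

    def gc_candidates(self):
        # Blocks stale enough that erasing them reclaims useful space.
        return [b for b, n in self.stale.items()
                if n / PAGES_PER_BLOCK >= GC_THRESHOLD]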

I failed to mention earlier that ZFS's write aggregation is also
helpful with flash drives, since it makes it more likely that a whole
flash block is written at once. Increasing the ashift value to 12
(4 KiB sectors) when the pool is created may also help.
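
To be precise about the value: ashift is the base-2 log of the sector
size, so 4 KiB sectors correspond to ashift=12. A tiny sketch of that
relationship (the zpool command in the comment is an example only; how
you actually set ashift at creation time depends on your platform):

# On platforms that expose ashift as a pool property, something like
#   zpool create -o ashift=12 tank <devices>
# sets it at creation time; check your platform's zpool man page.

import math

def ashift_for(sector_bytes):
    """ashift value for a given sector size in bytes (power of 2)."""
    a = int(math.log2(sector_bytes))
    assert 2 ** a == sector_bytes, "sector size must be a power of two"
    return a

print(ashift_for(512))    # 9  -- traditional 512-byte sectors
print(ashift_for(4096))   # 12 -- 4 KiB sectors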

> Now, others have hinted that certain controllers are better than
> others in the absence of TRIM, but I don't see how GC could know what
> blocks are available to be erased without information from the OS.

The changed LBAs are remapped rather than overwritten in place. The
drive knows which LBAs in a flash block have been re-mapped, and can
do garbage collection when the right criteria are met.

-B

-- 
Brandon High : bh...@freaks.com