On 10/16/07, Dave Johnson <[EMAIL PROTECTED]> wrote:
>
> Does anyone actually *use* compression? I'd like to see a poll on how many
> people are using (or would use) compression on production systems that are
> larger than your little department catch-all dumping-ground server.

We don't use compression on our thumpers - they're mostly for image
storage, where the original files (e.g. JPEG) are already compressed.

What will be interesting is to look at the effect of compression on the
attribute files (largely text and XML) as we start to deploy ZFS there
as well.
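
For anyone wanting to try the same experiment: compression in ZFS is a
per-dataset property, so the image data can stay uncompressed while the
attribute files get it. A minimal sketch (pool and dataset names
hypothetical):

  # leave the already-compressed image data alone
  zfs set compression=off tank/images
  # enable the default (lzjb) compression for the text/XML attribute files
  zfs set compression=on tank/attrs
  # see what it actually buys you
  zfs get compressratio tank/attrs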

> I mean,
> unless you had some NDMP interface directly to ZFS, daily tape backups for
> any large system will likely be an exercise in futility unless the systems
> are largely just archive servers, at which point it's probably smarter to
> perform backups less often, coinciding with the workflow of migrating
> archive data to it. Otherwise wouldn't the system just plain get pounded?

I'm not worried about the effect of compression. Where I see problems is
in backing up millions or tens of millions of files in a single dataset.
Backing up each file is essentially a random read (and this isn't helped
by raidz, which gives you a single disk's worth of random-read I/O per
vdev). I would love to see better ways of backing up huge numbers of files.
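
To put rough numbers on that (illustrative only, assuming ~100 random-read
IOPS from a 7,200 rpm disk, and hence ~100 IOPS for the whole raidz vdev):
ten million files at 100 reads per second is about 100,000 seconds, or
roughly 28 hours of seek time per vdev before any data even moves.

For what it's worth, snapshot-plus-send sidesteps the per-file walk
entirely, since zfs send streams at the block level rather than traversing
directories - though you give up file-level restores from tape. A minimal
sketch (pool, dataset, and target names hypothetical):

  # take a consistent point-in-time snapshot
  zfs snapshot tank/images@20071016
  # full stream of the dataset
  zfs send tank/images@20071016 > /backup/images-full.zfs
  # later runs send only the delta between two snapshots
  zfs snapshot tank/images@20071017
  zfs send -i @20071016 tank/images@20071017 > /backup/images-incr.zfs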

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/