Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Tim
On Mon, Apr 27, 2009 at 8:25 PM, Richard Elling wrote:
> I do not believe you can achieve five 9s with current consumer disk
> drives for an extended period, say >1 year.

Just to pipe up, while very few vendors can pull it off, we've seen five 9's with Hitachi gear using SATA.

--Tim

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Blake
On Tue, Apr 28, 2009 at 10:08 AM, Tim wrote:
> On Mon, Apr 27, 2009 at 8:25 PM, Richard Elling wrote:
>> I do not believe you can achieve five 9s with current consumer disk
>> drives for an extended period, say >1 year.
>
> Just to pipe up, while very few vendors can pull it off, we've s…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Fajar A. Nugraha
On Tue, Apr 28, 2009 at 9:42 AM, Scott Lawson wrote:
>> Mainstream Solaris 10 gets a port of ZFS from OpenSolaris, so its
>> features are fewer and later. As time ticks away, fewer features
>> will be back-ported to Solaris 10. Meanwhile, you can get a production
>> support agreement for OpenSo…

Re: [zfs-discuss] can zfs create return with no error code before the mount takes place?

2009-04-28 Thread Alastair Neil
On Mon, Apr 27, 2009 at 6:54 PM, Robert Milkowski wrote:
>
> Hello Alastair,
>
> Monday, April 27, 2009, 10:18:50 PM, you wrote:
>
> Seems or did you confirm it with mount or df command?
>
> Do you mount it manually then?
>
> http://milek.blogspot.com

This…

[zfs-discuss] Question about ZFS Incremental Send/Receive

2009-04-28 Thread Patrick Pinchera
I'm using ZFS snapshots and send and receive for a proof of concept, and I'd like to better understand how the incremental feature works. Consider this example:

1. Create a tar file using tar -cvf of 10 image files.
2. ZFS snapshot the filesystem that contains this tar file.
3. Use ZFS send…
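The workflow the example describes can be sketched roughly as follows; pool, dataset, and file names here are hypothetical, not taken from the original post:

```shell
# Create the tar file, then take a baseline snapshot
tar -cvf /tank/data/images.tar image*.jpg    # hypothetical paths
zfs snapshot tank/data@snap1

# Full send of the baseline to another dataset
zfs send tank/data@snap1 | zfs receive backup/data

# After the tar file changes, take a second snapshot and send only
# the blocks that changed between snap1 and snap2
zfs snapshot tank/data@snap2
zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data
```

Note that the receiving dataset must be unmodified since the base snapshot (or received with -F) for the incremental stream to apply. ZFS tracks changes at the block level, so the incremental stream carries only modified blocks, regardless of how the application rewrote the file.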

Re: [zfs-discuss] storage & zilstat assistance

2009-04-28 Thread Marion Hakanson
bfrie...@simple.dallas.tx.us said:
> Your IOPS don't seem high. You are currently using RAID-5, which is a poor
> choice for a database. If you use ZFS mirrors you are going to unleash a
> lot more IOPS from the available spindles.

RAID-5 may be poor for some database loads, but it's perfectl…
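The "unleash more IOPS from the spindles" point comes down to vdev layout; a minimal sketch of the mirrored alternative being suggested (pool name and device names are hypothetical):

```shell
# Four 2-way mirrors: small random reads can be served by either side
# of each mirror, so read IOPS scale with the number of disks rather
# than being gated by a single parity group
zpool create tank \
  mirror c0t0d0 c0t1d0 \
  mirror c0t2d0 c0t3d0 \
  mirror c0t4d0 c0t5d0 \
  mirror c0t6d0 c0t7d0
```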

Re: [zfs-discuss] Question about ZFS Incremental Send/Receive

2009-04-28 Thread Mattias Pantzare
> I feel like I understand what tar is doing, but I'm curious about what is it
> that ZFS is looking at that makes it a "successful" incremental send? That
> is, not send the entire file again. Does it have to do with how the
> application (tar in this example) does a file open, fopen(), and what…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Kees Nuyt
On Mon, 27 Apr 2009 18:25:42 -0700, Richard Elling wrote:
> The concern with large drives is unrecoverable reads during resilvering.
> One contributor to this is superparamagnetic decay, where the bits are
> lost over time as the medium tries to revert to a more steady state.
> To some extent, period…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Miles Nordin
> "kn" == Kees Nuyt writes:

kn> Some high availability storage systems overcome this decay by
kn> not just reading, but also writing all blocks during a
kn> scrub.

sounds like a good idea but harder in the ZFS model where the software isn't the proprietary work of the only perm…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Richard Elling
Tim wrote:
> On Mon, Apr 27, 2009 at 8:25 PM, Richard Elling <richard.ell...@gmail.com> wrote:
>
>> I do not believe you can achieve five 9s with current consumer disk
>> drives for an extended period, say >1 year.
>
> Just to pipe up, while very few vendors can pull it off, we've seen…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Richard Elling
Kees Nuyt wrote:
> On Mon, 27 Apr 2009 18:25:42 -0700, Richard Elling wrote:
>
>> The concern with large drives is unrecoverable reads during resilvering.
>> One contributor to this is superparamagnetic decay, where the bits are
>> lost over time as the medium tries to revert to a more steady state.
>> To…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread David Magda
On Apr 28, 2009, at 18:02, Richard Elling wrote:
> Kees Nuyt wrote:
>> Some high availability storage systems overcome this decay by
>> not just reading, but also writing all blocks during a
>> scrub. In those systems, scrubbing is done semi-continuously in the
>> background, not on user/admin demand.

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Tim
On Tue, Apr 28, 2009 at 4:52 PM, Richard Elling wrote:
> Well done! Of course Hitachi doesn't use consumer-grade disks in
> their arrays...
>
> I'll also confess that I did set a bit of a math trap here :-) The trap is
> that if you ever have to recover data from tape/backup, then you'll
> hav…
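The "math trap" turns on how little downtime five 9s actually allows; a quick back-of-the-envelope check (assuming a 365.25-day year):

```shell
# Five nines = 99.999% availability; allowed downtime per year, in minutes
awk 'BEGIN { printf "%.2f minutes/year\n", 365.25 * 24 * 60 * (1 - 0.99999) }'
# prints: 5.26 minutes/year
```

A single restore from tape/backup almost certainly takes longer than five minutes, so one such recovery event blows the entire year's downtime budget, which is the trap being set here.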

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Bob Friesenhahn
On Tue, 28 Apr 2009, Miles Nordin wrote:
> * it'd be harmful to do this on SSD's. it might also be a really good
>   idea to do it on SSD's. who knows yet.

SSDs can be based on many types of technologies, and not just those that wear out.

> * it may be wasteful to do read/rewrite on an ordin…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Bob Friesenhahn
On Tue, 28 Apr 2009, Richard Elling wrote:
> Yes, and there is a very important point here. There are 2 different sorts
> of scrubbing: read and rewrite. ZFS (today) does read scrubbing, which does
> not reset the decay process. Some RAID arrays also do rewrite scrubs, which
> do reset the decay process…
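The "read scrub" that ZFS performs can be started by hand; the pool name "tank" below is hypothetical:

```shell
# Start a scrub: ZFS reads every allocated block and verifies its
# checksum, repairing from redundancy on mismatch -- but it does not
# rewrite clean blocks, hence a "read" scrub in the sense above
zpool scrub tank

# Watch progress and see any errors found/repaired
zpool status -v tank
```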

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Bob Friesenhahn
On Tue, 28 Apr 2009, Tim wrote:
> I'll stick with the 3 year life cycle of the system followed by a hot
> migration to new storage, thank you very much.

Once again there is a fixation on the idea that computers gradually degrade over time and that simply replacing the hardware before the expirat…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Tim
On Tue, Apr 28, 2009 at 11:12 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Tue, 28 Apr 2009, Tim wrote:
>
>> I'll stick with the 3 year life cycle of the system followed by a hot
>> migration to new storage, thank you very much.
>
> Once again there is a fixation on the idea…

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-28 Thread Richard Elling
Bob Friesenhahn wrote:
> On Tue, 28 Apr 2009, Richard Elling wrote:
>
>> Yes and there is a very important point here. There are 2 different sorts
>> of scrubbing: read and rewrite. ZFS (today) does read scrubbing, which
>> does not reset the decay process. Some RAID arrays also do rewrite scrubs
>> which does…