Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-31 Thread Sanjeev
Thanks Michael for the clarification about the IDR! :-) I was planning to give this explanation myself. The fix I have in there is a temporary fix. I am currently looking at a better way of accounting for the fatzap blocks to make sure we cover all the cases. I have got some pointers from Mark Maybee
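For reference, the usual stop-gap while a fix like this is pending is to free blocks without allocating new ones. A minimal sketch, with placeholder dataset and file names; whether in-place truncation succeeds on a completely full pool varies by build, and destroying an old snapshot is the more reliable route:

# Truncate the file in place, then remove it; truncation frees the data
# blocks without first needing new directory (fatzap) blocks.
$ cp /dev/null /tank/data/bigfile
$ rm /tank/data/bigfile

# Or release space by destroying an old snapshot first.
$ zfs list -t snapshot
$ zfs destroy tank/data@oldsnap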

[zfs-discuss] RFE: allow zfs to interpret '.' as a dataset?

2008-08-31 Thread Gavin Maltby
Hi, I'd like to be able to utter cmdlines such as $ zfs set readonly=on . $ zfs snapshot [EMAIL PROTECTED] with '.' interpreted to mean the dataset corresponding to the current working directory. This would shorten what I find to be a very common operation - that of discovering your current (
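Until something like this RFE exists, a small wrapper can approximate it. A sketch only, assuming 'zfs list' accepts a mountpoint path argument (current builds do); the readonly and snapshot examples reuse the commands above with placeholder names:

# Resolve the dataset mounted at the current directory, then act on it.
$ ds=$(zfs list -H -o name "$PWD")
$ zfs set readonly=on "$ds"
$ zfs snapshot "$ds@mysnap"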

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-08-31 Thread Richard Elling
Miles Nordin wrote: >> "dc" == David Collier-Brown <[EMAIL PROTECTED]> writes: >> dc> one discovers latency growing without bound on disk saturation, > yeah, ZFS needs the same thing just for scrub. ZFS already schedules scrubs at a low priority. Howe

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-31 Thread Richard Elling
Ross Smith wrote: > Triple mirroring you say? That'd be me then :D > The reason I really want to get ZFS timeouts sorted is that our long term goal is to mirror that over two servers too, giving us a pool mirrored across two servers, each of which is actually a zfs iscsi volume hosted o
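For readers following the thread, the setup Ross describes would look roughly like this. A sketch only, assuming the old shareiscsi property on the backend hosts; names, addresses and device IDs are placeholders throughout:

# On each of the two backend servers: carve out a zvol and export it
$ zfs create -V 500G tank/export
$ zfs set shareiscsi=on tank/export

# On the head node: discover both targets, then mirror across them
$ iscsiadm add discovery-address 192.168.1.10
$ iscsiadm add discovery-address 192.168.1.11
$ iscsiadm modify discovery --sendtargets enable
$ zpool create bigpool mirror <lun-from-server1> <lun-from-server2>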

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-08-31 Thread Miles Nordin
> "dc" == David Collier-Brown <[EMAIL PROTECTED]> writes: dc> one discovers latency growing without bound on disk dc> saturation, yeah, ZFS needs the same thing just for scrub. I guess if the disks don't let you tag commands with priorities, then you have to run them at slightly belo

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-08-31 Thread Richard Elling
David Collier-Brown wrote: > Re Availability: ZFS needs to handle disk removal / driver failure better >>> A better option would be to not use this to perform FMA diagnosis, but instead work into the mirror child selection code. This has already been alluded to before, but it woul

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Ross Smith
Dear god. Thanks Tim, that's useful info. The sales rep we spoke to was really trying quite hard to persuade us that NetApp was the best solution for us; they spent a couple of months working with us, but ultimately we were put off by those 'limitations'. They knew full well that tho

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Brian Hechinger
On Sun, Aug 31, 2008 at 11:06:16AM -0500, Tim wrote: > > The problem though for our usage with NetApp was that we actually couldn't reserve enough space for snapshots. 50% of the pool was their maximum, and we're interested in running ten years worth of snapshots here, which could see

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Tim
On Sun, Aug 31, 2008 at 10:39 AM, Ross Smith <[EMAIL PROTECTED]> wrote: > Hey Tim, > I'll admit I just quoted the blog without checking, I seem to remember the sales rep I spoke to recommending putting aside 20-50% of my disk for snapshots. Compared to ZFS where I don't need to reserve any

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Ross Smith
Hey Tim, I'll admit I just quoted the blog without checking; I seem to remember the sales rep I spoke to recommending putting aside 20-50% of my disk for snapshots. Compared to ZFS, where I don't need to reserve any space, it feels very old-fashioned. With ZFS, snapshots just take up as much s
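The behaviour Ross is describing is easy to check: a new snapshot consumes essentially no space, and its USED figure grows only as the live filesystem diverges from it. An illustration with placeholder names and made-up sizes:

$ zfs snapshot tank/home@before
$ zfs list -t snapshot -o name,used,referenced
NAME              USED  REFER
tank/home@before     0  10.0G

# ... overwrite or delete files in tank/home, then look again ...
$ zfs list -t snapshot -o name,used,referenced
NAME              USED  REFER
tank/home@before  1.2G  10.0G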

Re: [zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-31 Thread Bob Friesenhahn
On Sun, 31 Aug 2008, Ross wrote: > You could split this into two raid-z2 sets if you wanted; that would have a bit better performance, but if you can cope with the speed of a single pool for now I'd be tempted to start with that. It's likely that by Christmas you'll be able to buy flash

Re: [zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-31 Thread Tim
With the restriping: wouldn't it be as simple as creating a new folder/dataset/whatever on the same pool and doing an rsync to the new location in the same pool? This would obviously cause a short downtime to switch over and delete the old dataset, but it seems like it should work fine. If you're doubling
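Spelled out, the approach would be something like the following; dataset names are placeholders, and the final pass and rename happen during the short switch-over window mentioned above:

# New dataset on the same (now re-laid-out) pool
$ zfs create tank/data.new

# Bulk copy while the old dataset stays live, then a quick final pass
$ rsync -a /tank/data/ /tank/data.new/
$ rsync -a --delete /tank/data/ /tank/data.new/   # after stopping writers

# Swap names and clean up
$ zfs rename tank/data tank/data.old
$ zfs rename tank/data.new tank/data
$ zfs destroy tank/data.old

(A 'zfs send | zfs recv' of a snapshot into the new dataset would do the same job and preserve snapshots as well.)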

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Tim
NetApp does NOT recommend 100 percent. Perhaps you should talk to NetApp or one of their partners who know their tech instead of their competitors next time. ZFS, the way it's currently implemented, will require roughly the same as NetApp... which still isn't 100. On 8/30/08, Ross <[EMAIL PROTEC

[zfs-discuss] Sidebar to ZFS Availability discussion

2008-08-31 Thread David Collier-Brown
Re Availability: ZFS needs to handle disk removal / driver failure better >> A better option would be to not use this to perform FMA diagnosis, but instead work into the mirror child selection code. This has already been alluded to before, but it would be cool to keep track of latency o

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-31 Thread Johan Hartzenberg
On Thu, Aug 28, 2008 at 11:21 PM, Ian Collins <[EMAIL PROTECTED]> wrote: > Miles Nordin writes: > > suggested that unlike the SVM feature it should be automatic, because by so being it becomes useful as an availability tool rather than just performance optimisation. > So on a server

Re: [zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-31 Thread Ross
Personally I'd go for an 11-disk raid-z2, with one hot spare. You lose some capacity, but you've got more than enough for your current needs, and with 1TB disks single-parity raid means a lot of time with your data unprotected when one fails. You could split this into two raid-z2 sets if you
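For concreteness, the two layouts being compared would be created roughly as follows; the device names are placeholders for the array's 12 drives:

# Single 11-disk raidz2 plus one hot spare
$ zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 spare c1t11d0

# Or two 6-disk raidz2 sets (no spare) for somewhat better random I/O
$ zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0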