Re: [zfs-discuss] Take Three: PSARC 2007/171 ZFS Separate Intent Log

2007-07-11 Thread Cyril Plisko
Neil, many thanks for publishing this doc - it is exactly what I was looking for! On 7/9/07, Neil Perrin <[EMAIL PROTECTED]> wrote: > Er with attachment this time. > > > > So I've attached the accepted proposal. There was (as expected) not > > much discussion of this case as it was considered a

Re: [zfs-discuss] Pseudo file system access to snapshots?

2007-07-11 Thread Darren J Moffat
Mike Gerdts wrote:
> Perhaps a better approach is to create a pseudo file system that looks like:
>
> /pool
>    /@@
>    /@today
>    /@yesterday
>    /fs
>       /@@
>       /@2007-06-01
>    /otherfs
>       /@@
How is this d

Re: [zfs-discuss] Pseudo file system access to snapshots?

2007-07-11 Thread Mike Gerdts
On 7/11/07, Darren J Moffat <[EMAIL PROTECTED]> wrote: > Mike Gerdts wrote: > > Perhaps a better approach is to create a pseudo file system that looks like: > > > > /pool > >/@@ > >/@today > >/@yesterday > >/fs > > /@@ > >

Re: [zfs-discuss] Pseudo file system access to snapshots?

2007-07-11 Thread Matthew Ahrens
> This "restore problem" is my key worry in deploying ZFS in the area > where I see it as most beneficial. Another solution that would deal > with the same problem is block-level deduplication. So far my queries > in this area have been met with silence. I must have missed your messages on dedup

Re: [zfs-discuss] ZFS and IBM's TSM

2007-07-11 Thread Hans-Juergen Schnitzer
Our main problem with TSM and ZFS is currently that there seems to be no efficient way to do a disaster restore when the backup resides on tape - due to the large number of filesystems/TSM filespaces. The graphical client (dsmj) does not work at all and with dsmc one has to start a separate resto

[zfs-discuss] pool analysis

2007-07-11 Thread Kent Watsen
Richard's blog analyzes MTTDL as a function of N+P+S: http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl But to understand how to best utilize an array with a fixed number of drives, I add the following constraints: - N+P should follow ZFS best-practice rule of N={2,4,8

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Kent Watsen
Resent as HTML to avoid line-wrapping: Richard's blog analyzes MTTDL as a function of N+P+S: http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl But to understand how to best utilize an array with a fixed number of drives, I add the following constraints: - N+P should fol

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Darren Dunham
> While it's true that RAIDZ2 is /much /safer than RAIDZ, it seems that > /any /RAIDZ configuration will outlive me and so I conclude that RAIDZ2 > is unnecessary in a practical sense... This conclusion surprises me > given the amount of attention people give to double-parity solutions - > what

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Kent Watsen
All, When I reformatted to HTML, I forgot to fix the code also - here is the correct code: #include <stdio.h> #include <math.h> #define NUM_BAYS 24 #define DRIVE_SIZE_GB 300 #define MTBF_YEARS 4 #define MTTR_HOURS_NO_SPARE 16 #define MTTR_HOURS_SPARE 4 int main() { printf("\n"); printf("%u bays w/ %u

Re: [zfs-discuss] pool analysis

2007-07-11 Thread David Dyer-Bennet
Darren Dunham wrote: >> While it's true that RAIDZ2 is /much /safer than RAIDZ, it seems that >> /any /RAIDZ configuration will outlive me and so I conclude that RAIDZ2 >> is unnecessary in a practical sense... This conclusion surprises me >> given the amount of attention people give to double-p

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Anton B. Rang
> Are Netapp using some kind of block checksumming? They provide an option for it, I'm not sure how often it's used. > If Netapp doesn't do something like [ZFS checksums], that would > explain why there's frequently trouble reconstructing, and point up a > major ZFS advantage. Actually, the real

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Rob Logan
#define DRIVE_SIZE_GB 300
#define MTBF_YEARS 2
#define MTTR_HOURS_NO_SPARE 48
#define MTTR_HOURS_SPARE 8
#define NUM_BAYS 10
- can have 3 (2+1) w/ 1 spares providing 1800 GB with MTTDL of 243.33 years
- can have 2 (4+1) w/ 0 spares providing 2400 GB with MTTDL of 18.25 years
- can have 1

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Darren Dunham
> Are Netapp using some kind of block checksumming? That seems to be one > of the big wins of ZFS compared to ordinary filesystems -- I have a > higher confidence that data I haven't accessed recently is still good. > If Netapp doesn't do something like that, that would explain why there's >

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Richard Elling
cool. comments below... Kent Watsen wrote: Richard's blog analyzes MTTDL as a function of N+P+S: http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl But to understand how to best utilize an array with a fixed number of drives, I add the following constraints: - N+P s

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Kent Watsen
>> But to understand how to best utilize an array with a fixed number of >> drives, I add the following constraints: >> - N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2} >> - all sets in an array should be configured similarly >> - the MTTDL for S sets is equal to (MTTDL f

Re: [zfs-discuss] pool analysis

2007-07-11 Thread David Dyer-Bennet
Kent Watsen wrote: >> #define MTTR_HOURS_NO_SPARE 16 >> >> I think this is optimistic :-) >> > Not really for me as the array is in my basement - so I assume that I'll > swap in a drive when I get home from work ;) > Yes, it's interesting how the parameters for home setups differ from "p

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Richard Elling
Kent Watsen wrote: > >>> But to understand how to best utilize an array with a fixed number of >>> drives, I add the following constraints: >>> - N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2} >>> - all sets in an array should be configured similarly >>> - the MTTDL for S

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Torrey McMahon
David Dyer-Bennet wrote: > Kent Watsen wrote: > >>> #define MTTR_HOURS_NO_SPARE 16 >>> >>> I think this is optimistic :-) >>> >>> >> Not really for me as the array is in my basement - so I assume that I'll >> swap in a drive when I get home from work ;) >> >> > Yes, it's in

Re: [zfs-discuss] ZFS Compression algorithms - Project Proposal

2007-07-11 Thread Richard Elling
Adam Leventhal wrote: > This is a great idea. I'd like to add a couple of suggestions: > > It might be interesting to focus on compression algorithms which are > optimized for particular workloads and data types, an Oracle database for > example. NB. Oracle 11g has builtin compression. In genera

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Toby Thain
On 11-Jul-07, at 3:16 PM, David Dyer-Bennet wrote: > Kent Watsen wrote: >>> #define MTTR_HOURS_NO_SPARE 16 >>> >>> I think this is optimistic :-) >>> >> Not really for me as the array is in my basement - so I assume >> that I'll >> swap in a drive when I get home from work ;) >> > Yes, it's in

[zfs-discuss] [AVS] Question concerning reverse synchronization of a zpool

2007-07-11 Thread Ralf Ramge
Hi, I've been struggling to get a stable ZFS replication using Solaris 10 11/06 (current patches) and AVS 4.0 for several weeks now. We tried it on VMware first and ended up in kernel panics en masse (yes, we read Jim Dunham's blog articles :-). Now we try on the real thing, two X4500 servers. Well