Re: [zfs-discuss] ZFS Scalability/performance

2007-06-22 Thread Brian Hechinger
On Wed, Jun 20, 2007 at 12:03:02PM -0400, Will Murnane wrote: > Yes. 2 disks means when one fails, you've still got an extra. In > raid 5 boxes, it's not uncommon with large arrays for one disk to die, > and when it's replaced, the stress on the other disks causes another > failure. Then the arr

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Richard Elling
Oliver Schinagl wrote: so basically, what you are saying is that on FBSD there's no performance issue, whereas on solaris there (can be if write caches aren't enabled) Solaris plays it safe by default. You can, of course, override that safety. Whether it is a performance win seems to be the sub

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Richard Elling
mike wrote: I would be interested in hearing if there are any other configuration options to squeeze the most space out of the drives. I have no issue with powering down to replace a bad drive, and I expect that I'll only have one at the most fail at a time. This is what is known as "famous las

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Pawel Jakub Dawidek
On Wed, Jun 20, 2007 at 09:48:08AM -0700, Eric Schrock wrote: > On Wed, Jun 20, 2007 at 12:45:52PM +0200, Pawel Jakub Dawidek wrote: > > > > Will be nice to not EFI label disks, though:) Currently there is a > > problem with this - zpool created on Solaris is not recognized by > > FreeBSD, because

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Bill Sommerfeld
On Wed, 2007-06-20 at 12:45 +0200, Pawel Jakub Dawidek wrote: > Will be nice to not EFI label disks, though:) Currently there is a > problem with this - zpool created on Solaris is not recognized by > FreeBSD, because FreeBSD claims GPT label is corrupted. Hmm. I'd think the right answer here is

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Oliver Schinagl
mike wrote: > On 6/20/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote: > >> One disk can be one vdev. >> A 1+1 mirror can be a vdev, too. >> A n+1 or n+2 RAID-Z (RAID-Z2) set can be a vdev too. >> >> - Then you concatenate vdevs to create a pool. Pools can be extended by >> adding more vdev
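The vdev hierarchy quoted above can be sketched in `zpool` syntax. This is a sketch only: it assumes a Solaris-style system with ZFS installed and root privileges, and the cXtYdZ device names are placeholders, not real disks.

```shell
# One disk can be one vdev:
zpool create tank c0t0d0

# A 1+1 mirror can be a vdev, too:
zpool create tank mirror c0t0d0 c0t1d0

# An n+1 RAID-Z (here 3+1) can be a vdev as well;
# "raidz2" in place of "raidz" gives the n+2 variant:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0

# Pools are then extended by concatenating more vdevs:
zpool add tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Finally, ZFS file systems are created inside the pool:
zfs create tank/home
```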

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Eric Schrock
On Wed, Jun 20, 2007 at 12:45:52PM +0200, Pawel Jakub Dawidek wrote: > > Will be nice to not EFI label disks, though:) Currently there is a > problem with this - zpool created on Solaris is not recognized by > FreeBSD, because FreeBSD claims GPT label is corrupted. On the other > hand, creating ZF

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Oliver Schinagl
Pawel Jakub Dawidek wrote: > On Wed, Jun 20, 2007 at 01:45:29PM +0200, Oliver Schinagl wrote: > >> Pawel Jakub Dawidek wrote: >> >>> On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote: >>> >>> > On that note, i have a different first question to start with. I >>>

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Pawel Jakub Dawidek
On Wed, Jun 20, 2007 at 01:45:29PM +0200, Oliver Schinagl wrote: > > > Pawel Jakub Dawidek wrote: > > On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote: > > > >>> On that note, i have a different first question to start with. I > >>> personally am a Linux fanboy, and would love to

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Will Murnane
On 6/20/07, mike <[EMAIL PROTECTED]> wrote: On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote: > I would not risk raidz on that many disks. A nice compromise may be 14+2 > raidz2, which should perform nicely for your workload and be pretty reliable > when the disks start to fail. Would anyone on

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Tomas Ögren
On 20 June, 2007 - Oliver Schinagl sent me these 1,9K bytes: > Also what about full disk vs full partition, e.g. make 1 partition to > span the entire disk vs using the entire disk. > Is there any significant performance penalty? (So not having a disk > split into 2 partitions, but 1 disk, 1 parti
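On the whole-disk question above: when ZFS on Solaris is handed an entire disk (no slice suffix), it writes an EFI label and enables the disk's write cache, which it cannot safely do when it only owns one slice. A sketch with placeholder device names, assuming root on a ZFS-capable system:

```shell
# Whole disk: ZFS controls the full device, labels it EFI,
# and enables the on-disk write cache.
zpool create tank c0t1d0

# Single slice: ZFS cannot assume it owns the disk, so the
# write cache is left disabled (a potential performance penalty).
zpool create tank c0t1d0s0
```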

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread mike
On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote: I would not risk raidz on that many disks. A nice compromise may be 14+2 raidz2, which should perform nicely for your workload and be pretty reliable when the disks start to fail. Would anyone on the list not recommend this setup? I could li
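The 14+2 raidz2 layout Paul suggests would look roughly like this (the device names are placeholders for a hypothetical 16-disk chassis):

```shell
# One raidz2 vdev: 14 disks of data capacity plus 2 parity disks,
# surviving any two simultaneous disk failures.
zpool create tank raidz2 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
```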

RE: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Paul Fisher
> From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of mike > Sent: Wednesday, June 20, 2007 9:30 AM > > I would prefer something like 15+1 :) I want ZFS to be able to detect > and correct errors, but I do not need to squeeze all the performance > out of it (I'll be using it as a home

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Constantin Gonzalez
Hi Mike, > If I was to plan for a 16 disk ZFS-based system, you would probably > suggest me to configure it as something like 5+1, 4+1, 4+1 all raid-z > (I don't need the double parity concept) > > I would prefer something like 15+1 :) I want ZFS to be able to detect > and correct errors, but I d
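The 5+1, 4+1, 4+1 layout mentioned here, three raidz vdevs concatenated into one 16-disk pool, might be sketched as follows (placeholder device names, assuming root on a ZFS system):

```shell
# Three single-parity raidz vdevs in one pool: 13 disks of usable
# capacity out of 16, one parity disk per vdev, and writes striped
# across all three vdevs.
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    raidz c2t5d0 c2t6d0 c3t0d0 c3t1d0 c3t2d0
```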

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread mike
On 6/20/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote: One disk can be one vdev. A 1+1 mirror can be a vdev, too. A n+1 or n+2 RAID-Z (RAID-Z2) set can be a vdev too. - Then you concatenate vdevs to create a pool. Pools can be extended by adding more vdevs. - Then you create ZFS file s

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Constantin Gonzalez
Hi, > How are paired mirrors more flexiable? well, I'm talking of a small home system. If the pool gets full, the way to expand with RAID-Z would be to add 3+ disks (typically 4-5). With mirror only, you just add two. So in my case it's just about the granularity of expansion. The reasoning is
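The expansion-granularity point above in `zpool` terms: growing a RAID-Z pool means adding a whole new multi-disk vdev, while a pool of mirrors grows two disks at a time. A sketch with placeholder device names:

```shell
# Expanding a pool of mirrors: only two new disks needed.
zpool add tank mirror c3t0d0 c3t1d0

# Expanding with RAID-Z instead requires a full new set (here 4+1).
zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
```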

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Oliver Schinagl
Constantin Gonzalez wrote: > Hi, > > >> I'm quite interested in ZFS, like everybody else I suppose, and am about >> to install FBSD with ZFS. >> > > welcome to ZFS! > > >> Anyway, back to business :) >> I have a whole bunch of different sized disks/speeds. E.g. 3 300GB disks >> @ 40mb,

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Oliver Schinagl
Pawel Jakub Dawidek wrote: > On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote: > >>> On that note, i have a different first question to start with. I >>> personally am a Linux fanboy, and would love to see/use ZFS on linux. I >>> assume that I can use those ZFS disks later with a

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Pawel Jakub Dawidek
On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote: > >On that note, i have a different first question to start with. I > >personally am a Linux fanboy, and would love to see/use ZFS on linux. I > >assume that I can use those ZFS disks later with any os that can > >work/recognizes ZFS c

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-20 Thread Constantin Gonzalez
Hi, > I'm quite interested in ZFS, like everybody else I suppose, and am about > to install FBSD with ZFS. welcome to ZFS! > Anyway, back to business :) > I have a whole bunch of different sized disks/speeds. E.g. 3 300GB disks > @ 40mb, a 320GB disk @ 60mb/s, 3 120gb disks @ 50mb/s and so on. >

Re: [zfs-discuss] ZFS Scalability/performance

2007-06-19 Thread Richard Elling
Oliver Schinagl wrote: Hello, I'm quite interested in ZFS, like everybody else I suppose, and am about to install FBSD with ZFS. cool. On that note, i have a different first question to start with. I personally am a Linux fanboy, and would love to see/use ZFS on linux. I assume that I can us