On Behalf Of Richard Elling
Sent: Thursday, June 03, 2010 3:51 AM
To: Roman Naumenko
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] one more time: pool size changes
On Jun 2, 2010, at 3:54 PM, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storages.
On Jun 3, 2010 7:35 PM, David Magda wrote:
> On Jun 3, 2010, at 13:36, Garrett D'Amore wrote:
>
> > Perhaps you have been unlucky. Certainly, there is a window with N
> > +1 redundancy where a single failure leaves the system exposed in
> > the face of a 2nd fault. This is a statistics game...
On Jun 3, 2010, at 13:36, Garrett D'Amore wrote:
Perhaps you have been unlucky. Certainly, there is a window with N
+1 redundancy where a single failure leaves the system exposed in
the face of a 2nd fault. This is a statistics game...
It doesn't even have to be a drive failure, but an unrecoverable read error
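To put a rough number on that statistics game (the figures below are assumed for
illustration, not taken from the thread): with a degraded raidz2 leaving 7 surviving
disks, a 3% annual failure rate per disk and a 24-hour resilver, the chance of a second
whole-disk failure inside the window is roughly 7 x 0.03 x (24 / 8760), or about 0.06%.
That is small, which is why unrecoverable read errors hit during the resilver, rather
than a second dead drive, are often the bigger worry on large disks.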
On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
But is having a RAIDZ2 drop to single redundancy, with replacement
starting instantly, actually as good or better than having a RAIDZ3 drop
to double redundancy, with actual replacement happening later? The
"degraded" state of the RAIDZ3 has the same
On Jun 3, 2010, at 3:16 AM, Erik Trimble wrote:
> Expanding a RAIDZ (which, really, is the only thing that can't do right now,
> w/r/t adding disks) requires the Block Pointer (BP) Rewrite functionality
> before it can get implemented.
Strictly speaking, BP rewrite is not required to expand a RAIDZ
frank+lists/z...@linetwo.net said:
> I remember, and this was a few years back but I don't see why it would be any
> different now, we were trying to add drives 1-2 at a time to medium-sized
> arrays (don't buy the disks until we need them, to hold onto cash), and the
> Netapp performance kept goin
On Thu, Jun 03, 2010 at 12:40:34PM -0700, Frank Cusack wrote:
> On 6/3/10 12:06 AM -0400 Roman Naumenko wrote:
> >I think there is a difference. Just quickly checked netapp site:
> >
> >Adding new disks to a RAID group: If a volume has more than one RAID
> >group, you can specify the RAID group to which you are adding disks.
frank+lists/z...@linetwo.net said:
> Well in that case it's invalid to compare against Netapp since they can't do
> it either (seems to be the consensus on this list). Neither zfs nor Netapp
> (nor any product) is really designed to handle adding one drive at a time.
> Normally you have to add an
On 6/3/10 12:06 AM -0400 Roman Naumenko wrote:
I think there is a difference. Just quickly checked netapp site:
Adding new disks to a RAID group: If a volume has more than one RAID
group, you can specify the RAID group to which you are adding disks.
hmm that's a surprising feature to me.
I remember, and this was a few years back but I don't see why it would be any
different now, we were trying to add drives 1-2 at a time to medium-sized
arrays (don't buy the disks until we need them, to hold onto cash), and the
Netapp performance kept going
On 6/3/10 8:45 AM +0200 Juergen Nickelsen wrote:
Richard Elling writes:
And some time before I had suggested to my buddy zfs for his new
home storage server, but he turned it down since there is no
expansion available for a pool.
Heck, let him buy a NetApp :-)
Definitely a possibility, given the availability and pricing of
On 6/2/10 11:10 PM -0400 Roman Naumenko wrote:
Well, I explained it not very clearly. I meant the size of a raidz array
can't be changed.
For sure zpool add can do the job with a pool. Not with a raidz
configuration.
Well in that case it's invalid to compare against Netapp since they
can't do it either (seems to be the consensus on this list). Neither zfs nor
Netapp (nor any product) is really designed to handle adding one drive at a time.
On Thu, June 3, 2010 12:03, Bob Friesenhahn wrote:
> On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
>>
>> In an 8-bay chassis, there are other concerns, too. Do I keep space open
>> for a hot spare? There's no real point in a hot spare if you have only
>> one vdev; that is, 8-drive RAIDZ3 is clearly better than 7-drive RAIDZ2
>> plus a hot spare.
On Thu, June 3, 2010 13:04, Garrett D'Amore wrote:
> On Thu, 2010-06-03 at 11:49 -0500, David Dyer-Bennet wrote:
>> hot spares in place, but I have the bays reserved for that use.
>>
>> In the latest upgrade, I added 4 2.5" hot-swap bays (which got the system
>> disks out of the 3.5" hot-swap bays).
On Thu, 2010-06-03 at 11:49 -0500, David Dyer-Bennet wrote:
> hot spares in place, but I have the bays reserved for that use.
>
> In the latest upgrade, I added 4 2.5" hot-swap bays (which got the system
> disks out of the 3.5" hot-swap bays). I have two free, and that's the
> form-factor SSDs come in.
On Thu, 2010-06-03 at 12:22 -0400, Dennis Clarke wrote:
> > If you're clever, you'll also try to make sure each side of the mirror
> > is on a different controller, and if you have enough controllers
> > available, you'll also try to balance the controllers across stripes.
>
> Something like this
On Thu, 2010-06-03 at 08:50 -0700, Marty Scholes wrote:
> Maybe I have been unlucky too many times doing storage admin in the 90s, but
> simple mirroring still scares me. Even with a hot spare (you do have one,
> right?) the rebuild window leaves the entire pool exposed to a single failure.
>
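One hedge against that rebuild window, sketched here with assumed pool and device
names rather than anything from the thread: a hot spare only shortens the wait
before the rebuild starts, but zfs can also run a three-way mirror, which keeps
one level of redundancy even while a failed side is being replaced.
# zpool add tank spare c3t0d0
# zpool attach tank c1t0d0 c3t1d0
The first command adds c3t0d0 as a hot spare for the pool; the second attaches
c3t1d0 as an additional side of whichever mirror already contains c1t0d0.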
On Thu, 2010-06-03 at 12:03 -0500, Bob Friesenhahn wrote:
> On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
> >
> > In an 8-bay chassis, there are other concerns, too. Do I keep space open
> > for a hot spare? There's no real point in a hot spare if you have only
> > one vdev; that is, 8-drive RAIDZ3 is clearly better than 7-drive RAIDZ2
> > plus a hot spare.
On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
In an 8-bay chassis, there are other concerns, too. Do I keep space open
for a hot spare? There's no real point in a hot spare if you have only
one vdev; that is, 8-drive RAIDZ3 is clearly better than 7-drive RAIDZ2
plus a hot spare. And putting ev
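The arithmetic behind that claim is easy to check, using made-up device names for
an 8-bay box; the two create commands below are alternatives, not a sequence:
# zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 spare c1t7d0
Both layouts leave 5 disks' worth of usable space (8 minus 3 parity, or 7 minus 2
parity with the eighth bay idle as a spare), but the raidz3 pool survives any three
simultaneous failures while the raidz2-plus-spare layout survives only two.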
On Jun 3, 2010, at 8:36 AM, Freddie Cash wrote:
> On Wed, Jun 2, 2010 at 8:10 PM, Roman Naumenko wrote:
> Well, I explained it not very clearly. I meant the size of a raidz array
> can't be changed.
> For sure zpool add can do the job with a pool. Not with a raidz configuration.
>
> You can't increase the number of drives in a raidz vdev, no.
On Thu, June 3, 2010 10:50, Garrett D'Amore wrote:
> On Thu, 2010-06-03 at 10:35 -0500, David Dyer-Bennet wrote:
>> On Thu, June 3, 2010 10:15, Garrett D'Amore wrote:
>> > Using a stripe of mirrors (RAID0) you can get the benefits of multiple
>> > spindle performance, easy expansion support (just add new mirrors to the
>> > end of the raid0 stripe), and 100% data redundancy.
On Thu, June 3, 2010 10:50, Marty Scholes wrote:
> David Dyer-Bennet wrote:
>> My choice of mirrors rather than RAIDZ is based on the fact that I have
>> only 8 hot-swap bays (I still think of this as LARGE for a home server;
>> the competition, things like the Drobo, tends to have 4 or 5
> If you're clever, you'll also try to make sure each side of the mirror
> is on a different controller, and if you have enough controllers
> available, you'll also try to balance the controllers across stripes.
Something like this ?
# zpool status fibre0
pool: fibre0
state: ONLINE
status: Th
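To make the idea concrete, a pool balanced that way could have been built along
these lines; the controller and target numbers are invented, not taken from the
actual fibre0 configuration:
# zpool create fibre0 mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0 mirror c1t2d0 c2t2d0
Each mirror pairs a disk on controller c1 with one on controller c2, so losing
either controller still leaves a complete side of every mirror online.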
David Dyer-Bennet wrote:
> My choice of mirrors rather than RAIDZ is based on the fact that I have
> only 8 hot-swap bays (I still think of this as LARGE for a home server;
> the competition, things like the Drobo, tends to have 4 or 5), that I
> don't need really large amounts of storage (af
On Thu, 2010-06-03 at 10:35 -0500, David Dyer-Bennet wrote:
> On Thu, June 3, 2010 10:15, Garrett D'Amore wrote:
> > Using a stripe of mirrors (RAID0) you can get the benefits of multiple
> > spindle performance, easy expansion support (just add new mirrors to the
> > end of the raid0 stripe), and 100% data redundancy.
On Wed, Jun 2, 2010 at 8:10 PM, Roman Naumenko wrote:
> Well, I explained it not very clearly. I meant the size of a raidz array
> can't be changed.
> For sure zpool add can do the job with a pool. Not with a raidz
> configuration.
>
You can't increase the number of drives in a raidz vdev, no.
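The usual way around that, sketched with made-up device names and assuming the
existing pool already holds one raidz2 vdev, is to grow the pool by adding a
second vdev next to the first:
# zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
zpool add will complain if the new vdev's redundancy does not match what is
already in the pool; once added, new writes are striped across both vdevs.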
On Thu, June 3, 2010 10:15, Garrett D'Amore wrote:
> Using a stripe of mirrors (RAID0) you can get the benefits of multiple
> spindle performance, easy expansion support (just add new mirrors to the
> end of the raid0 stripe), and 100% data redundancy. If you can afford
> to pay double for your storage (the cost of mirroring), this is IMO the
> best solution.
Using a stripe of mirrors (RAID0) you can get the benefits of multiple
spindle performance, easy expansion support (just add new mirrors to the
end of the raid0 stripe), and 100% data redundancy. If you can afford
to pay double for your storage (the cost of mirroring), this is IMO the
best solution.
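A minimal sketch of that growth path, with assumed pool and device names:
# zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
# zpool add tank mirror c1t2d0 c2t2d0
The pool starts as a stripe of two mirrors and grows two disks at a time;
existing data stays where it was written, while new writes are spread across
all three mirrors.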
> Expanding a RAIDZ (which, really, is the only thing that can't do right
> now, w/r/t adding disks) requires the Block Pointer (BP) Rewrite
> functionality before it can get implemented.
>
> We've been promised BP rewrite for awhile, but I have no visibility as
> to where development on
On Wed, June 2, 2010 17:54, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storages. We discussed
> size changes for pools in zfs and aggregates in NetApp.
>
> And some time before I had suggested to my buddy zfs for his new home
> storage server, but he turned it down since there is no expansion available for a pool.
Erik Trimble said the following, on 06/02/2010 07:16 PM:
Roman Naumenko wrote:
Recently I talked to a co-worker who manages NetApp storages. We
discussed size changes for pools in zfs and aggregates in NetApp.
And some time before I had suggested to my buddy zfs for his new
home storage server, but he turned it down since there is no expansion available for a pool.
Brandon High said the following, on 06/02/2010 11:47 PM:
On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko wrote:
And some time before I had suggested to my buddy zfs for his new home storage
server, but he turned it down since there is no expansion available for a pool.
There's no expansion for aggregates in OnTap, either.
Richard Elling said the following, on 06/02/2010 08:50 PM:
On Jun 2, 2010, at 3:54 PM, Roman Naumenko wrote:
Recently I talked to a co-worker who manages NetApp storages. We discussed size
changes for pools in zfs and aggregates in NetApp.
And some time before I had suggested to my buddy zfs for his new home storage
server, but he turned it down since there is no expansion available for a pool.
Richard Elling writes:
>> And some time before I had suggested to my buddy zfs for his new
>> home storage server, but he turned it down since there is no
>> expansion available for a pool.
>
> Heck, let him buy a NetApp :-)
Definitely a possibility, given the availability and pricing of
oldis
On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko wrote:
> And some time before I had suggested to my buddy zfs for his new home
> storage server, but he turned it down since there is no expansion available
> for a pool.
There's no expansion for aggregates in OnTap, either. You can add more
disk
On Jun 2, 2010, at 3:54 PM, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storages. We discussed
> size changes for pools in zfs and aggregates in NetApp.
>
> And some time before I had suggested to my buddy zfs for his new home
> storage server, but he turned it down since there is no expansion available for a pool.
On Jun 2, 2010, at 4:08 PM, Freddie Cash wrote:
> On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storages. We discussed
> size changes for pools in zfs and aggregates in NetApp.
>
> And some time before I had suggested to my buddy zfs for his new home storage
> server, but he turned it down since there is no expansion available for a pool.
Roman Naumenko wrote:
Recently I talked to a co-worker who manages NetApp storages. We discussed size
changes for pools in zfs and aggregates in NetApp.
And some time before I had suggested to my buddy zfs for his new home storage server, but he turned it down since there is no expansion available for a pool.
On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storages. We discussed
> size changes for pools in zfs and aggregates in NetApp.
>
> And some time before I had suggested to my buddy zfs for his new home
> storage server, but he turned it down since there is no expansion available for a pool.
On 6/2/10 3:54 PM -0700 Roman Naumenko wrote:
And some time before I had suggested to my buddy zfs for his new home
storage server, but he turned it down since there is no expansion
available for a pool.
That's incorrect. zfs pools can be expanded at any time. AFAIK zfs has
always had this
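The other expansion path, besides adding vdevs, is to swap every disk in a vdev
for a larger one, one at a time, letting each resilver finish before starting the
next. A sketch with assumed pool and device names:
# zpool set autoexpand=on tank
# zpool replace tank c1t0d0 c3t0d0
Repeat the replace for each remaining disk in the vdev; once the last resilver
completes, the vdev (and with it the pool) grows to the new disk size. With
autoexpand left off, zpool online -e on the members achieves the same thing
after the fact.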
Recently I talked to a co-worker who manages NetApp storages. We discussed size
changes for pools in zfs and aggregates in NetApp.
And some time before I had suggested to my buddy zfs for his new home storage
server, but he turned it down since there is no expansion available for a pool.
And