> Would your opinion change if the disks you used took
> 7 days to resilver?
>
> Bob
That will only make a stronger case that a hot spare is absolutely needed.
It will also make a strong case for choosing raidz3 over raidz2, as well as
for vdevs with a smaller number of disks.
> Why would you recommend a spare for raidz2 or raidz3?
> -- richard
A spare is there to minimize the reconstruction time. Remember that a vdev cannot
start resilvering until a replacement disk is available, and with disks as big
as they are today, resilvering takes many hours. I would rather have the resilver
start against a hot spare right away than wait for someone to swap in a disk.
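As a rough sketch of that (the pool and device names below are hypothetical),
a hot spare can be added so resilvering can begin without waiting on an operator:

    # add a hot spare to an existing pool (hypothetical device name);
    # the spare is pulled in automatically when a disk in the pool faults
    zpool add tank spare c2t0d0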
> 3 shelves with 2 controllers each. 48 drives per
> shelf. These are Fibre Channel attached. We would like
> all 144 drives added to the same large pool.
I would do either 12- or 16-disk raidz3 vdevs and spread the disks across
controllers within each vdev. You may also want to leave at least one spare.
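To illustrate that layout (device names are hypothetical, with the cX portion
standing for the controller), each raidz3 vdev draws disks from every controller
so losing one controller costs only a few disks per vdev:

    zpool create tank \
      raidz3 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 \
             c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0 \
      raidz3 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 c6t2d0 \
             c1t3d0 c2t3d0 c3t3d0 c4t3d0 c5t3d0 c6t3d0 \
      spare  c1t4d0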
I understand your point. However, in most production systems the shelves are added
incrementally, so it makes sense for the vdev size to be related to the number of
slots per shelf. And in most cases, being able to withstand a shelf failure is too
much overhead on the storage anyway. For example, in his case he would have to
configure RAID 1+0 across the shelves.
3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel
attached. We would like all 144 drives added to the same large pool.
Sorry, I need to correct myself. Mirroring the LUNs on the Windows side in order
to switch the storage pool underneath them is a great idea, and I think you can
do this without downtime.
So, on the point of not needing a migration back:
Even at 144 disks, they won't be in the same raid group. So figure out what the
best raid group size is for you, since ZFS doesn't support changing the number of
disks in a raidz yet. I usually use the number of slots per shelf, or a good
number is 7 to 10.
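Since the raidz width is fixed once a vdev is created, the pool grows by adding
whole vdevs of the chosen shape. A minimal sketch, with hypothetical device names:

    # growth happens one complete vdev at a time; an 8-disk raidz2 here
    zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0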
For this type of migration, downtime is required. However, it can be reduced to
a few hours or even a few minutes, depending on how much change needs to be
synced.
I have done this many times on a NetApp filer, but it can be applied to ZFS as well.
The first thing to consider is doing the migration only once, so that you don't
need to migrate back.
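A rough sketch of how the sync window can be kept small with ZFS send/receive
(pool, dataset, and snapshot names here are hypothetical): do the bulk copy while
the volume is still in use, then send only the changes during the outage.

    # bulk copy while the source stays online
    zfs snapshot tank/vol@mig1
    zfs send tank/vol@mig1 | zfs receive temppool/vol

    # during the short outage window, sync only what changed since @mig1
    zfs snapshot tank/vol@mig2
    zfs send -i tank/vol@mig1 tank/vol@mig2 | zfs receive temppool/vol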
> Mirrors are made with vdevs (LUs or disks), not pools. However, the
> vdev attached to a mirror must be the same size (or nearly so) as the
> original. If the original vdevs are 4TB, then a migration to a pool made
> with 1TB vdevs cannot be done by replacing vdevs (mirror method).
> --
> On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
> > The original drive pool was configured with 144 1TB drives and a
> > hardware raid 0 stripe across every 4 drives to create 4TB luns.
>
> For the archives, this is not a good idea...
Exactly. This is the reason I want to blow away all of the old configuration.
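As a rough illustration of the mirror method mentioned above (pool and device
names are hypothetical), it means attaching a device to an existing top-level
vdev, which only succeeds when the new device is at least as large as the old one:

    # attach a new device as a mirror of an existing top-level device;
    # this fails if the new device is smaller than the original
    zpool attach tank old4tblun newdevice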
On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
> The original drive pool was configured with 144 1TB drives and a hardware
> raid 0 stripe across every 4 drives to create 4TB luns.
For the archives, this is not a good idea...
> These luns were then combined into 6 raidz2 luns and added to the zfs pool.
We are running the latest dev release.
I was hoping to just mirror the zfs volumes and not the whole pool. The original
pool is around 100TB in size. The spare disks I have come up with will total
around 40TB. We only have 11TB of space in use on the original zfs pool.
The original drive pool was configured with 144 1TB drives and a hardware raid
0 stripe across every 4 drives to create 4TB luns. These luns were then
combined into 6 raidz2 luns and added to the zfs pool. I would like to delete
the original hardware raid 0 stripes and add the 144 drives directly to the zfs
pool.
It's unclear what you want to do. What's the goal for this exercise?
If you want to replace the pool with larger disks and the pool is a mirror or
raidz, you just replace one disk at a time and allow the pool to rebuild
itself. Once all the disks have been replaced, it will automatically recognize
the larger disks and the extra capacity.
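A minimal sketch of that replace-in-place approach (pool and device names are
hypothetical); with autoexpand on, the extra capacity becomes usable once every
disk in the vdev has been replaced:

    zpool set autoexpand=on tank
    # replace one disk at a time; wait for the resilver to finish
    # (check with 'zpool status') before moving on to the next disk
    zpool replace tank c1t0d0 c5t0d0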
Hi Wolf,
Which Solaris release is this?
If it is an OpenSolaris system running a recent build, you might
consider the zpool split feature, which splits a mirrored pool into two
separate pools, while the original pool is online.
If possible, attach the spare disks to create the mirrored pool first, and then
split the mirror off into the new pool.
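A hedged sketch of that workflow (pool and device names are hypothetical): attach
a disk to each existing top-level disk so the pool becomes a set of 2-way mirrors,
wait for the resilver, then split the second halves off as a new pool.

    # attach a spare disk to each existing disk (repeat for every disk)
    zpool attach tank c1t0d0 c9t0d0
    # once resilvering is done, split the mirror halves into a new pool
    zpool split tank newtank
    zpool import newtank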
We would like to delete and recreate our existing zfs pool without losing any
data. The way we thought we could do this was to attach a few HDDs and create a
new temporary pool, migrate our existing zfs volume to the new pool, delete and
recreate the old pool, and migrate the zfs volumes back. The bi