Am I right in thinking though that for every raidz1/2 vdev, you're
effectively losing the storage of one/two disks in that vdev?
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> Am I right in thinking though that for every raidz1/2 vdev, you're
> effectively losing the storage of one/two disks in that vdev?
Well yeah - you've got to have some allowance for redundancy.
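To put a hypothetical number on it (the device names below are made up):
six 1 TB disks in a single raidz1 vdev leave you roughly 5 TB of usable
space, since about one disk's worth of blocks goes to parity, and raidz2
on the same six disks leaves roughly 4 TB. Something like:

  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zpool list tank     # SIZE reports raw capacity, parity included
  zfs list tank       # AVAIL shows the smaller usable figure after parity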
--
-Peter Tribble
http://www.peter
2008/9/17 Peter Tribble:
> On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
>> Am I right in thinking though that for every raidz1/2 vdev, you're
>> effectively losing the storage of one/two disks in that vdev?
>
> Well yeah - you've got to have some allowance for redundancy.
This is true, but are you not in fact losing performance by reducing the
amount of spindles used for a given pool?
If 2 disks of a mirror fail, will the pool be faulted?

  NAME        STATE     READ WRITE CKSUM
  homez       ONLINE       0     0     0
    mirror    ONLINE       0     0     0
      c0t2d0  ONLINE       0     0     0
      c0t3d0  ONLINE       0     0     0
Francois wrote:
> If 2 disks of a mirror fail, will the pool be faulted?
>
>   NAME        STATE     READ WRITE CKSUM
>   homez       ONLINE       0     0     0
>     mirror    ONLINE       0     0     0
>       c0t2d0  ONLINE       0     0     0
>       c0t3d0  ONLINE       0     0     0
On Wed, Sep 17, 2008 at 10:11 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> 2008/9/17 Peter Tribble:
>> On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
>>> Am I right in thinking though that for every raidz1/2 vdev, you're
>>> effectively losing the storage of one/two disks in that vdev?
gm_sjo wrote:
> Are you not in fact losing performance by reducing the
> amount of spindles used for a given pool?

This depends. Usually, RAIDZ1/2 isn't a good performer when it comes
to random-access read I/O, for instance. If I wanted to scale
performance by adding spindles, I would use mirrors.
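A rough sketch of what that looks like, again with made-up device names:
a pool built from three two-way mirrors keeps all six spindles in one
pool, and random reads can be satisfied from either side of each mirror:

  zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0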
Darren J Moffat wrote:
> If c0t6d0 and c0t7d0 both fail (ie both sides of the same mirror vdev)
> then the pool will be unable to retrieve all the data stored in it. If
> c0t6d0 and c0t3d0 both fail then there are sufficient replicas of data
> available in that case because it was disks from different vdevs.
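To make the layout explicit (this is a sketch, not the poster's actual
pool), the redundancy lives inside each mirror vdev rather than across
vdevs:

  NAME          STATE
  tank          ONLINE
    mirror      ONLINE
      c0t2d0    ONLINE
      c0t3d0    ONLINE
    mirror      ONLINE
      c0t6d0    ONLINE
      c0t7d0    ONLINE

Losing c0t6d0 and c0t7d0 takes out both halves of the second mirror, so
some data has no remaining copy; losing c0t6d0 and c0t3d0 leaves one
working half in each mirror, so every block is still readable.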
> I believe the problem you're seeing might be related to deadlock
> condition (CR 6745310), if you run pstack on the
> iscsi target daemon you might find a bunch of zombie
> threads. The fix
> was putback into snv-99; give snv-99 a try.
Yes, a pstack of the core I've generated from iscsitgtd does show a bunch
of zombie threads.
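For anyone who wants to check for the same signature, something along
these lines should do it (the core path is just a placeholder):

  pstack `pgrep -x iscsitgtd`        # inspect the live daemon
  pstack /path/to/core.iscsitgtd     # or a saved core file

and then look for a pile of threads all parked on the same lock.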
Moore, Joe wrote:
>> I believe the problem you're seeing might be related to deadlock
>> condition (CR 6745310), if you run pstack on the
>> iscsi target daemon you might find a bunch of zombie
>> threads. The fix
>> was putback into snv-99; give snv-99 a try.
>>
>
> Yes, a pstack of the core I've generated from iscsitgtd does show a
> bunch of zombie threads.
> "djm" == Darren J Moffat <[EMAIL PROTECTED]> writes:
djm> If c0t6d0 and c0t7d0 both fail (ie both sides of the same
djm> mirror vdev) then the pool will be unable to retrieve all the
djm> data stored in it.
won't be able to retrieve ANY of the data stored on it. It's correct
as yo
Running Nevada build 95 on an Ultra 40.
Had to replace a drive.
Resilver in progress, but it looks like each
time I do a zpool status, the resilver starts over.
Is this a known issue?
On 17 September, 2008 - Neal Pollack sent me these 0,3K bytes:
> Running Nevada build 95 on an ultra 40.
> Had to replace a drive.
> Resilver in progress, but it looks like each
> time I do a zpool status, the resilver starts over.
> Is this a known issue?
I recall some issue with 'zpool status' as root restarting
resilvering.. Doing it as a regular user will not..
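If that's the issue, a possible workaround until it's fixed is to run the
check from an unprivileged account ('someuser' and 'poolname' are
placeholders):

  su - someuser -c 'zpool status poolname'

and keep the root shell only for the operations that actually need it.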
> "t" == Tomas Ögren <[EMAIL PROTECTED]> writes:
t> I recall some issue with 'zpool status' as root restarting
t> resilvering.. Doing it as a regular user will not..
is there an mdb command similar to zpool status? maybe it's safer.
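one read-only possibility, if I have the dcmd right (needs kernel
debugger access, but it doesn't go through the zpool ioctl path at all):

  echo "::spa -v" | mdb -k

which prints the pools and their vdev trees straight from the in-kernel
state.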
Cyril Plisko wrote:
> On Wed, Sep 17, 2008 at 6:06 AM, Erik Trimble <[EMAIL PROTECTED]> wrote:
>
>> Just one more things on this:
>>
>> Run with a 64-bit processor. Don't even think of using a 32-bit one -
>> there are known issues with ZFS not quite properly using 32-bit only
>> structures. Th
Are you doing snaps? If so, unless you have the new bits to handle the
issue, each snap restarts a scrub or resilver.
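A quick way to check whether snapshots are landing underneath the
resilver ('poolname' is a placeholder for whatever pool is resilvering):

  zfs list -t snapshot -r poolname -o name,creation | tail

If new snapshots keep showing up while the resilver runs, that would
match the restart behaviour on older bits.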
Thanks!
Wade Stuart
On 09/17/08 02:29 PM, [EMAIL PROTECTED] wrote:
> Are you doing snaps?

No, no snapshots ever.

Logged in as root to do:

  zpool replace poolname deaddisk

and then did a few 'zpool status' runs as root. It restarted each time.

> If so, unless you have the new bits to handle the issue, each snap
> restarts a scrub or resilver.
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote:
>> "jd" == Jim Dunham <[EMAIL PROTECTED]> writes:
>
>jd> If at the time the SNDR replica is deleted the set was
>jd> actively replicating, along with ZFS actively writing to the
>jd> ZFS storage pool, I/O consistency will be lost, l