Just to clarify this post: this isn't data I care about recovering.
I'm just interested in understanding how ZFS determined there was data
corruption when I have checksums disabled and there were no
non-retryable read errors reported in the messages file.
On Wed, Apr 23, 2008 at 9:52
Thanks! That would explain things. I don't believe it was a real disk
read error because of the absence of evidence in /var/adm/messages.
I'll review the man page and documentation to confirm that metadata is
checksummed.
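For anyone following along, a quick illustration (pool and dataset
names here are made up): the checksum property only governs user data,
so pool metadata stays checksummed even when the property is off.

  # only file data written afterwards skips checksums
  zfs set checksum=off tank/scratch
  zfs get checksum tank/scratch
  # metadata is still checksummed, which is how zpool status can
  # report corruption even on a checksum=off dataset
  zpool status -v tank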
Regards,
Vic
On Wed, Apr 23, 2008 at 6:30 PM, Nathan Kroenert
<[EMAIL PROTECTED]> wrote:
In /kernel/drv/scsi_vhci.conf you could do this:
load-balance="none";
That way MPxIO would use only a single path. I imagine you also need a
vid/pid entry in scsi_vhci.conf for your target.
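Roughly what that could look like on Solaris 10 (the vendor and
product strings below are placeholders; they must match what the array
reports, with the VID padded to 8 characters):

  # /kernel/drv/scsi_vhci.conf
  load-balance="none";

  # hypothetical vid/pid entry declaring the target a symmetric array
  device-type-scsi-options-list =
  "ACME    DISKBOX", "symmetric-option";
  symmetric-option = 0x1000000;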
Regards,
Vic
On Fri, Apr 4, 2008 at 3:36 PM, Chris Siebenmann <[EMAIL PROTECTED]> wrote:
> We're currently
I don't think this is so much a ZFS problem as an iSCSI initiator
problem. Are you using static configs or SendTargets discovery? There
are many reports of SendTargets discovery misbehaving in the
storage-discuss forum.
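If SendTargets turns out to be the problem, one thing worth trying is
pinning the target with a static config. A rough sketch (the IQN and
address below are placeholders):

  iscsiadm modify discovery --sendtargets disable
  iscsiadm modify discovery --static enable
  iscsiadm add static-config iqn.1986-03.com.example:target0,192.168.10.5:3260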
To recover:
1. Boot into single user from CD
2. Mount the root slice on /a
3.
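On SPARC the first two steps look roughly like this (the root slice
name is a guess; adjust for your layout):

  ok boot cdrom -s
  # once in single user, mount the root slice on /a
  mount /dev/dsk/c0t0d0s0 /a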
> Plan is to replace disks with new and larger disks.
>
> So will the pool get bigger just by replacing all 4 disks one-by-one?
> And if it will get larger, how should this be done: fail disks one-by-one,
> or ???
>
> Or is data backup and pool recreation the only way to get a bigger pool?
>
There is ano
> I tried taking it offline and online again, but then zpool says the disk
> is unavailable. Trying a zpool replace didn't work because it complains
> that the "new" disk is part of a zfs pool...
So it would look like a new disk to ZFS and not like a disk belonging
to a zpool.
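For illustration only (pool and device names here are made up), two
common ways around the "part of a zfs pool" complaint:

  # force the replace despite the stale label
  zpool replace -f tank c2t3d0

  # or zero out the start of the disk, where two of the four ZFS
  # labels live, so the disk looks new (zeroing the whole disk is
  # the thorough option)
  dd if=/dev/zero of=/dev/rdsk/c2t3d0s0 bs=1024k count=2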
Vic
> I tried taking it offline and online again, but then zpool says the disk
> is unavailable. Trying a zpool replace didn't work because it complains
> that the "new" disk is part of a zfs pool...
So you offlined the disk and moved it to the new controller and then
tried to add it back to the pool?
>
> I will soon be making a presentation comparing ZFS against Veritas Storage
> Foundation. Do we have any document comparing features?
>
Hi Mertol,
I think simple administration is at least one significant difference.
For example, if you have new LUNs and want to use them to add a new
filesystem
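With ZFS that kind of task tends to be a couple of commands. A sketch
with made-up pool, device, and dataset names:

  # grow the existing pool with the new LUNs, then carve out a filesystem
  zpool add tank c5t0d0 c5t1d0
  zfs create tank/newfs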
Hi Al,
That depends on whether you want to go back to a VTOC/SMI label or
keep the EFI label created by ZFS. To keep the EFI label, just
repartition and use the partitions as desired. If you want to go back
to a VTOC/SMI label, run format -e, relabel the disk, and select SMI.
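The relabel path looks roughly like this (c1t2d0 is a placeholder
disk; format -e is interactive):

  # format -e c1t2d0
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]: 0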
Be
Are all 24 disks in one big raidz set with no spares assigned to
the pool? If so, then maybe the host is having trouble computing
parity across that many drives when the "experienced an unrecoverable
error" errors occur. From what I've read it might be better to create
the pool with 3 raidz sets
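Something along these lines, with made-up device names, splitting the
24 disks into three 8-disk raidz vdevs:

  zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
      raidz c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0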
This class looks pretty good...
http://www.sun.com/training/catalog/courses/SA-229-S10.xml
On 10/31/07, Lisa Richards <[EMAIL PROTECTED]> wrote:
>
> Is there a class on ZFS installation and administration?
>
> Lisa Richards
> Zykis Corporation
> [EMAIL PROTECTED]
> Perhaps it's the same cause, I don't know...
>
> But I'm certainly not convinced that I'd be happy with a 25K, for
> example, panicking just because I tried to import a dud pool...
>
> I'm ok(ish) with the panic on a failed write to non-redundant storage.
> I expect it by now...
>
I agree, forc
Wouldn't this be the known feature where a write error to ZFS forces a panic?
Vic
On 10/4/07, Ben Rockwood <[EMAIL PROTECTED]> wrote:
> Dick Davies wrote:
> > On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
> >
> >
> >> Client A
> >> - import pool make couple-o-changes
> >>
> >> Cli
I like option #1 because it is simple and quick. It seems unlikely
that this will lead to an excessive number of LUNs in the pool in most
cases unless you start with a large number of very small LUNs. If you
begin with 5 100GB LUNs and over time add 5 more, it still seems like a
reasonable and manageable
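Growing the pool that way is a one-liner per batch. A sketch with
placeholder device names for the 5 new LUNs:

  zpool add tank c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0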
On 8/25/07, Matt B <[EMAIL PROTECTED]> wrote:
> the 4 database servers are part of an Oracle RAC configuration. 3 databases
> are hosted on these servers, BIGDB1 on all 4, littledb1 on the first 2, and
> littledb2 on the last two. The oracle backup system spawns db backup jobs
> that could occur
> storage management
> interface, showing mountpoints on the hosts for each LUN
> The EMC CLARiiON storage best practices recommend using one LUN per volume
> The write coalescing feature may be unusable if using more than one LUN
> per volume if striped in ZFS
>
> Yannick
>
> On 8/7/
I can understand LUN expansion capability being an issue with a more
traditional volume manager, or even with a single LUN, but with pooled
storage and the ability to expand the pool, benefiting all filesystems
in the pool, it seems a shame to consider LUN expansion a show stopper.
Even so, having all the
On 6/20/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Also, how does replication at the ZFS level use more storage - I'm
assuming raw block - than at the array level?
Just to add to the previous comments. In the case where you have a SAN
array pro
> The best practices guide on opensolaris does recommend replicated
> pools even if your backend storage is redundant. There are at least 2
> good reasons for that. ZFS needs a replica for the self-healing
> feature to work. Also, there is no fsck-like tool for ZFS, so it is a
> good idea to make s
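As a minimal illustration of the kind of replicated pool the guide is
talking about (device names are placeholders):

  # a ZFS-level mirror across two array LUNs gives ZFS a second copy
  # to repair from when a checksum error is found
  zpool create tank mirror c4t0d0 c4t1d0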
yer). In our case it was a little old Sun
> StorEdge 3511 FC SATA Array, but the principle applies to any RAID
> array that is not configured as a JBOD.
>
>
>
> Victor Engle wrote:
> > Roshan,
> >
> > Could you provide more detail please. The host and zfs should be
Roshan,
Could you provide more detail please? The host and ZFS should be
unaware of any EMC array-side replication, so this sounds more like an
EMC misconfiguration than a ZFS problem. Did you look in the messages
file to see if anything happened to the devices that were in your
zpools? If so then
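A couple of quick checks that usually narrow this down (the pool name
and device string are placeholders):

  # only pools with errors are listed; -v shows the affected devices
  zpool status -xv
  # look for driver or transport errors against the LUNs in the pool
  grep -i c4t5d0 /var/adm/messages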
Well I suppose complexity is relative. Still, to use Sun Cluster at
all I have to install the cluster framework on each node, correct? And
even before that I have to install an interconnect with two switches,
unless I direct-connect a simple two-node cluster.
My thinking was that ZFS seems to try and