Re: [zfs-discuss] zfs data corruption

2008-04-24 Thread Victor Engle
Just to clarify this post. This isn't data I care about recovering. I'm just interested in understanding how zfs determined there was data corruption when I have checksums disabled and there were no non-retryable read errors reported in the messages file. On Wed, Apr 23, 2008 at 9:52

Re: [zfs-discuss] zfs data corruption

2008-04-23 Thread Victor Engle
Thanks! That would explain things. I don't believe it was a real disk read error because of the absence of evidence in /var/adm/messages. I'll review the man page and documentation to confirm that metadata is checksummed. Regards, Vic On Wed, Apr 23, 2008 at 6:30 PM, Nathan Kroenert <[EMAIL PRO

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-04 Thread Victor Engle
In /kernel/drv/scsi_vhci.conf you could set load-balance="none"; that way MPxIO would send I/O down only one path per device. You will probably also need a vid/pid entry in scsi_vhci.conf for your target. Regards, Vic On Fri, Apr 4, 2008 at 3:36 PM, Chris Siebenmann <[EMAIL PROTECTED]> wrote: > We're currently
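
For reference, a minimal sketch of what that scsi_vhci.conf change could look like on Solaris 10 of that era (the vendor/product string below is a placeholder; the vendor ID is padded to 8 characters and the product ID to 16, exactly as the target reports them, and a reboot is normally needed for the change to take effect):

    # /kernel/drv/scsi_vhci.conf
    # Keep all I/O on a single path instead of round-robin across paths
    load-balance="none";

    # Hypothetical vid/pid entry claiming the array as a symmetric device
    # so scsi_vhci will manage it
    device-type-scsi-options-list =
        "VENDOR  PRODUCT         ", "symmetric-option";
    symmetric-option = 0x1000000;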

Re: [zfs-discuss] ZFS hang and boot hang when iSCSI device removed

2008-02-05 Thread Victor Engle
I don't think this is so much a ZFS problem as an iSCSI initiator problem. Are you using static configs or SendTargets discovery? There are many reports of SendTargets discovery misbehaving on the storage-discuss forum. To recover: 1. Boot into single user from CD. 2. Mount the root slice on /a. 3.
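
For anyone hitting the SendTargets problems mentioned above, switching the initiator to static configuration is roughly the following (the target IQN and portal address are made-up examples):

    # Turn off SendTargets discovery and add the target statically
    iscsiadm modify discovery --sendtargets disable
    iscsiadm modify discovery --static enable
    iscsiadm add static-config iqn.1986-03.com.example:target0,192.168.10.5:3260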

Re: [zfs-discuss] Q : change disks to get bigger pool

2008-01-20 Thread Victor Engle
> Plan is to replace disks with new and larger disks. > > So will the pool get bigger just by replacing all 4 disks one-by-one ? > And if it will get larger, how should this be done , fail disks one-by-one .. > or ??? > > Or is data backup and pool recreation the only way to get a bigger pool > There is ano
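
The disk-by-disk replacement being suggested would look roughly like this (pool and device names are placeholders; let each resilver finish before starting the next replace):

    # Replace old disks with larger ones, one at a time
    zpool replace tank c1t1d0 c2t1d0
    zpool status tank            # wait for the resilver to complete
    zpool replace tank c1t2d0 c2t2d0
    # ...repeat for the remaining disks; once every disk in the vdev is
    # larger, the added capacity becomes usable (on builds of that era an
    # export/import of the pool may be needed before it shows up)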

Re: [zfs-discuss] how to relocate a disk

2008-01-18 Thread Victor Engle
> I tried taking it offline and online again, but then zpool says the disk > is unavailable. Trying a zpool replace didn't work because it complains > that the "new" disk is part of a zfs pool... So it would look like a new disk to ZFS and not like a disk belonging to a zpool. Vic ___

Re: [zfs-discuss] how to relocate a disk

2008-01-18 Thread Victor Engle
> I tried taking it offline and online again, but then zpool says the disk > is unavailable. Trying a zpool replace didn't work because it complains > that the "new" disk is part of a zfs pool... So you offlined the disk and moved it to the new controller and then tried to add it back to the pool?
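
If the intent is to make ZFS accept the relocated disk under its new controller name, one possible sequence is sketched below (device names are placeholders; zpool replace -f overrides the "part of a zfs pool" complaint, so only use it if you are sure the label on that disk is stale):

    zpool offline tank c1t3d0        # take the disk out of service
    # physically move the disk to the new controller, then:
    zpool replace -f tank c1t3d0 c2t3d0
    zpool status tank                # watch the resilver finish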

Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Victor Engle
> > I will soon be making a presentation comparing ZFS against Veritas Storage > Foundation , do we have any document comparing features ? > Hi Mertol, I think simple administration is at least one significant difference. For example, if you have new LUNs and want to use them to add a new filesyst
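
To illustrate the administration point, the entire new-LUNs-to-mounted-filesystem flow in ZFS is only a few commands (pool, dataset, and device names below are invented for the example):

    # Grow an existing pool with the new LUNs and carve out a filesystem
    zpool add orapool c3t5d0 c3t6d0
    zfs create orapool/data
    zfs set mountpoint=/export/oradata orapool/data

versus the initialize-disk, add-to-diskgroup, make-volume, mkfs, and mount steps the VxVM/VxFS stack would need for the same result.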

Re: [zfs-discuss] Clearing partition/label info

2007-12-17 Thread Victor Engle
Hi Al, That depends on whether you want to go back to a VTOC/SMI label or keep the EFI label created by ZFS. To keep the EFI label just repartition and use the partitions as desired. If you want to go back to a VTOC/SMI label you have to run format -e and then relabel the disk and select SMI. Be
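
For the SMI case, the interaction looks roughly like this (expert mode is what exposes the label-type menu; relabeling destroys the existing partition table, so only do it on a disk whose contents you no longer need):

    # format -e
    format> label
      [0] SMI Label
      [1] EFI Label
    Specify Label type[1]: 0     # pick SMI, then repartition with a VTOC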

Re: [zfs-discuss] Major problem with a new ZFS setup

2007-11-09 Thread Victor Engle
Are all 24 disks in one big raidz set with no spares assigned to the pool? If so, then maybe the host is having trouble computing parity across that many drives when the "experienced an unrecoverable error" messages occur. From what I've read it might be better to create the pool with 3 raidz se
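
One way to lay out 24 drives as three raidz sets plus hot spares, as suggested above, is sketched below (device names are placeholders; 3 x 7-disk raidz with 3 spares is just one possible split):

    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
        raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 \
        spare c4t0d0 c4t1d0 c4t2d0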

Re: [zfs-discuss] ZFS Training

2007-10-31 Thread Victor Engle
This class looks pretty good... http://www.sun.com/training/catalog/courses/SA-229-S10.xml On 10/31/07, Lisa Richards <[EMAIL PROTECTED]> wrote: > > > > > Is there a class on ZFS installation and administration ? > > > > Lisa Richards > > Zykis Corporation > > [EMAIL PROTECTED] > __

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Victor Engle
> Perhaps it's the same cause, I don't know... > > But I'm certainly not convinced that I'd be happy with a 25K, for > example, panicking just because I tried to import a dud pool... > > I'm ok(ish) with the panic on a failed write to non-redundant storage. > I expect it by now... > I agree, forc

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Victor Engle
Wouldn't this be the known feature where a write error to zfs forces a panic? Vic On 10/4/07, Ben Rockwood <[EMAIL PROTECTED]> wrote: > Dick Davies wrote: > > On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote: > > > > > >> Client A > >> - import pool make couple-o-changes > >> > >> Cli

Re: [zfs-discuss] Again ZFS with expanding LUNs!

2007-09-12 Thread Victor Engle
I like option #1 because it is simple and quick. It seems unlikely that this will lead to an excessive number of luns in the pool in most cases unless you start with a large number of very small luns. If you begin with 5 100GB luns and over time add 5 more it still seems like a reasonable and manag
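
Option #1 amounts to a single command per expansion (pool and device names are placeholders for whatever the new LUN shows up as):

    # Each new LUN added to the pool grows it immediately, with no
    # relabeling and no downtime for the filesystems in the pool
    zpool add sanpool c5t10d0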

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Victor Engle
On 8/25/07, Matt B <[EMAIL PROTECTED]> wrote: > the 4 database servers are part of an Oracle RAC configuration. 3 databases > are hosted on these servers, BIGDB1 on all 4, littledb1 on the first 2, and > littledb2 on the last two. The oracle backup system spawns db backup jobs > that could occur

Re: [zfs-discuss] Again ZFS with expanding LUNs!

2007-08-07 Thread Victor Engle
storage management > interface, showing mountpoints on the hosts for each LUN > The EMC CLARiiON storage best practices recommend using one LUN per volume > The write coalescing feature may be unusable if using more than one LUN > per volume if striped in ZFS > > Yannick > > On 8/7/

Re: [zfs-discuss] Again ZFS with expanding LUNs!

2007-08-07 Thread Victor Engle
I can understand LUN expansion capability being an issue with a more traditional volume manager, or even with a single LUN, but with pooled storage and the ability to expand the pool, benefiting all filesystems in the pool, it seems a shame to consider LUN expansion a show stopper. Even so, having all the

Re: [zfs-discuss] Re: ZFS - SAN and Raid

2007-06-20 Thread Victor Engle
On 6/20/07, Torrey McMahon <[EMAIL PROTECTED]> wrote: Also, how does replication at the ZFS level use more storage - I'm assuming raw block - then at the array level? ___ Just to add to the previous comments. In the case where you have a SAN array pro

Re: [zfs-discuss] Re: ZFS - SAN and Raid

2007-06-19 Thread Victor Engle
> The best practices guide on opensolaris does recommend replicated > pools even if your backend storage is redundant. There are at least 2 > good reasons for that. ZFS needs a replica for the self healing > feature to work. Also there is no fsck like tool for ZFS so it is a > good idea to make s
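
A sketch of what a replicated pool on top of array-provided LUNs can look like (names are placeholders; mirroring two LUNs gives ZFS a second copy to repair from when a checksum error is found):

    # Mirror a pair of SAN LUNs so self-healing has a good copy to read
    zpool create sanpool mirror c4t0d0 c5t0d0

    # Alternatively, on releases that support the copies property, keep
    # two copies of each data block even on a single LUN
    zfs set copies=2 sanpool/data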

[zfs-discuss] Re: ZFS - SAN and Raid

2007-06-19 Thread Victor Engle
yer). In our case it was a little old Sun > StorEdge 3511 FC SATA Array, but the principle applies to any RAID > array that is not configured as a JBOD. > > > > Victor Engle wrote: > > Roshan, > > > > Could you provide more detail please. The host and zfs should be

Re: [zfs-discuss] ZFS - SAN and Raid

2007-06-19 Thread Victor Engle
Roshan, Could you provide more detail please. The host and zfs should be unaware of any EMC array side replication so this sounds more like an EMC misconfiguration than a ZFS problem. Did you look in the messages file to see if anything happened to the devices that were in your zpools? If so then

Re: [zfs-discuss] Virtual IP Integration

2007-06-15 Thread Victor Engle
Well I suppose complexity is relative. Still, to use Sun Cluster at all I have to install the cluster framework on each node, correct? And even before that I have to install an interconnect with 2 switches unless I direct connect a simple 2 node cluster. My thinking was that ZFS seems to try and