Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-13 Thread Lida Horn
The "reset: no matching NCQ I/O found" issue appears to be related to the error recovery for bad blocks on the disk. In general it should be harmless, but I have looked into this. If there is someone out there who; 1) Is hitting this issue, and; 2) Is running recent Solaris Nevada bits (not Sola

Re: [zfs-discuss] zpool status can not detect the vdev removed?

2007-11-13 Thread hex.cookie
And when the system is rebooted, I run zpool status, which tells me that one vdev is corrupt, so I recreate the file that I had removed. After all those operations, I run zpool destroy on the pool, and the system reboots again. Should Solaris do this?

[zfs-discuss] in a zpool consist of regular files, when I remove the file vdev, zpool status can not detect?

2007-11-13 Thread Chookiex
I made a file-backed zpool like this:

bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE   READ WRITE CKSUM
        filepool        ONLINE     0     0     0
        /export/f1.dat  ONLINE     0     0     0
        /export/f2.d
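For anyone reproducing this, a minimal sketch of how such a file-backed pool could be built; the file sizes are placeholders:

  # create two backing files and build a pool from them
  mkfile 64m /export/f1.dat /export/f2.dat
  zpool create filepool /export/f1.dat /export/f2.dat
  zpool status filepool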

Re: [zfs-discuss] Yager on ZFS

2007-11-13 Thread Jason J. W. Williams
Hi Darren,

> Ah, your "CPU end" was referring to the NFS client CPU, not the storage
> device CPU. That wasn't clear to me. The same limitations would apply
> to ZFS (or any other filesystem) when running in support of an NFS
> server.
>
> I thought you were trying to describe a qualitative diff

Re: [zfs-discuss] Suggestion/Request: ZFS-aware rm command

2007-11-13 Thread Paul Jochum
I agree, being able to delete the snapshot that a clone is attached to would be a nice feature. Until we get that, this is what I have done (in case this helps anyone else):
1) snapshot the filesystem
2) clone the snapshot into a separate pool
3) only NFS mount the separate pool with clones
Th
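A rough sketch of that workaround with made-up dataset names; note that zfs clone can only create a clone inside the same pool as its origin snapshot, so getting the data into a separate pool would need something like zfs send/receive:

  # 1) snapshot the filesystem
  zfs snapshot tank/fs@snap1
  # 2) copy the snapshot into a separate pool (clones cannot cross pools)
  zfs send tank/fs@snap1 | zfs recv backup/fs
  # 3) NFS-share only the copy, where files can be deleted freely
  zfs set sharenfs=on backup/fs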

Re: [zfs-discuss] zfs on a raid box

2007-11-13 Thread Richard Elling
Paul Boven wrote:
> Hi everyone,
>
> We've been building a storage system that should have about 2TB of
> storage and good sequential write speed. The server side is a Sun X4200
> running Solaris 10u4 (plus yesterday's recommended patch cluster), the
> array we bought is a Transtec Provigo 510 12-disk

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-13 Thread Richard Elling
Nathan Kroenert wrote:
> This question triggered some silly questions in my mind:
>
> Lots of folks are determined that the whole COW-to-different-locations
> approach is a Bad Thing(tm), and in some cases, I guess it might
> actually be...

There is a lot of speculation about this, but no real data. I've

Re: [zfs-discuss] Yager on ZFS

2007-11-13 Thread A Darren Dunham
On Tue, Nov 13, 2007 at 07:33:20PM -0200, Toby Thain wrote:
> >>> Yup - that's exactly the kind of error that ZFS and WAFL do a
> >>> perhaps uniquely good job of catching.
> >>
> >> WAFL can't catch all: It's distantly isolated from the CPU end.
> >
> > WAFL will catch everything that ZF

Re: [zfs-discuss] Yager on ZFS

2007-11-13 Thread Toby Thain
On 11-Nov-07, at 10:19 AM, can you guess? wrote:
>> On 9-Nov-07, at 2:45 AM, can you guess? wrote:
>
> ...
>
>>> This suggests that in a ZFS-style installation without a hardware
>>> RAID controller they would have experienced at worst a bit error
>>> about every 10^14 bits or 12 TB

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-13 Thread Nathan Kroenert
This question triggered some silly questions in my mind: Lots of folks are determined that the whole COW-to-different-locations approach is a Bad Thing(tm), and in some cases, I guess it might actually be... What if ZFS had a pool / filesystem property that caused zfs to do a journaled, but non-COW upd

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-13 Thread Peter Tribble
On 11/13/07, Dan Poltawski <[EMAIL PROTECTED]> wrote:
> I've just discovered patch 125205-07, which wasn't installed on our
> system because we don't have SUNWhea.
>
> Has anyone with problems tried this patch, and has it helped at all?

We were having a pretty rough time running S10U4. While I w

Re: [zfs-discuss] Suggestion/Request: ZFS-aware rm command

2007-11-13 Thread Ross
> But to create a clone you'll need a snapshot so I think the problem
> will still be there...

This might be a way around this problem though. Deleting files from snapshots sounds like a messy approach in terms of the architecture, but deleting files from clones would be fine. So what's need

Re: [zfs-discuss] Yager on ZFS

2007-11-13 Thread Jonathan Stewart
can you guess? wrote:
> Vitesse VSC410
> Yes, it will help detect hardware faults as well if they happen to
> occur between RAM and the disk (and aren't otherwise detected - I'd
> still like to know whether the 'bad cable' experiences reported here
> occurred before ATA started CRCing its transf

Re: [zfs-discuss] Nice chassis for ZFS server

2007-11-13 Thread Richard Elling
Mick Russom wrote:
> Sun's "own" v60 and Sun v65 were pure Intel reference servers that
> worked GREAT!

I'm glad they worked for you. But I'll note that the critical deficiencies in those platforms are solved by the newer Sun AMD/Intel/SPARC small form factor rackmount servers. The new chassis a

Re: [zfs-discuss] zpool status can not detect the vdev removed?

2007-11-13 Thread Eric Schrock
As with any application, if you hold the vnode (or file descriptor) open and remove the underlying file, you can still write to the file even if it is removed. Removing the file only removes it from the namespace; until the last reference is closed it will continue to exist. You can use 'zpool on
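A generic illustration of the open-file semantics described here (ordinary POSIX behaviour, not ZFS-specific):

  echo hello > /tmp/demo
  exec 3</tmp/demo      # hold a file descriptor open
  rm /tmp/demo          # removes only the name from the namespace
  cat <&3               # still prints "hello"
  exec 3<&-             # last reference closed; the storage is reclaimed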

Re: [zfs-discuss] Suggestion/Request: ZFS-aware rm command

2007-11-13 Thread Darren J Moffat
Paul Jochum wrote:
> Hi Richard:
>
> I just tried your suggestion, unfortunately it doesn't work. Basically:
> make a clone of the snapshot - works fine
> in the clone, remove the directories - works fine
> make a snapshot of the clone - works fine
> destroy the clone - fails, because ZFS report
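A guess at the failing sequence, with invented dataset names; destroying a clone fails while it still has snapshots of its own, and step 3 above creates exactly such a snapshot:

  zfs clone tank/fs@snap tank/fsclone     # works
  rm -rf /tank/fsclone/unwanted           # works
  zfs snapshot tank/fsclone@cleaned       # works
  zfs destroy tank/fsclone                # fails: the clone now has a child snapshot
  zfs destroy -r tank/fsclone             # would remove the clone and its snapshots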

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-13 Thread Dan Poltawski
I've just discovered patch 125205-07, which wasn't installed on our system because we don't have SUNWhea. Has anyone with problems tried this patch, and has it helped at all?
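Two standard Solaris checks for the patch and package in question:

  showrev -p | grep 125205   # is patch 125205-07 (or a later rev) installed?
  pkginfo SUNWhea            # is the SUNWhea (header files) package present?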

[zfs-discuss] zfs on a raid box

2007-11-13 Thread Paul Boven
Hi everyone, We've been building a storage system that should have about 2TB of storage and good sequential write speed. The server side is a Sun X4200 running Solaris 10u4 (plus yesterday's recommended patch cluster), the array we bought is a Transtec Provigo 510 12-disk array. The disks are SATA, and
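Not from the original post, but one common layout for sequential write speed if ZFS is given the twelve drives directly rather than a single hardware-RAID LUN (device names are placeholders):

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

Striped mirrors would trade capacity for better random I/O; two 6-disk raidz2 vdevs favour capacity while still streaming well.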

Re: [zfs-discuss] Nice chassis for ZFS server

2007-11-13 Thread Mick Russom
> Internal drives suck. If you go through the trouble of putting in a
> drive, at least make it hot pluggable.

They are all hot-swappable/pluggable on the SSR212MC2. There are two additional internal 2.5" SAS bonus drives that aren't, but the front 12 are. I for one think external enclosures ar

Re: [zfs-discuss] Nice chassis for ZFS server

2007-11-13 Thread Mick Russom
Sun did something like this with the v60 and v65 servers, and they should do it again with the SSR212MC2. The heart of the SAS subsystem of the SSR212MC2 is the SRCSAS144E. This card interfaces with a Vitesse VSC410 SAS expander and is plugged into an S5000PSL motherboard. This card is cl

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-13 Thread Roch - PAE
Louwtjie Burger writes:
> Hi
>
> After a clean database load a database would (should?) look like this,
> if a random stab at the data is taken...
>
> [8KB-m][8KB-n][8KB-o][8KB-p]...
>
> The data should be fairly (100%) sequential in layout ... after some
> days though that same spot (
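One commonly cited tuning for the 8KB-block scenario above (my assumption, not something stated in the truncated message): match the ZFS recordsize to the database block size before loading any data, since recordsize only affects newly written files:

  zfs create tank/db
  zfs set recordsize=8k tank/db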

[zfs-discuss] Securing a risky situation with zfs

2007-11-13 Thread Gabriele Bulfon
Hi, we're having a bad situation with a SAN iSCSI solution in a production environment at a customer: the storage hardware may panic its kernel because of a software fault, with the risk of losing data. We want to give the SAN manufacturer a last chance to correct their solution: we're goi
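One hedged reading of how ZFS could secure such a setup (the device names are invented for illustration): mirror the iSCSI LUN against storage outside the suspect SAN, so ZFS checksums can detect and repair whatever the SAN corrupts:

  zpool create safe mirror c2t0d0 c3t0d0   # c2 = SAN LUN, c3 = independent disk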

Re: [zfs-discuss] Suggestion/Request: ZFS-aware rm command

2007-11-13 Thread Sylvain Dusart
2007/11/13, Paul Jochum <[EMAIL PROTECTED]>:
> (the only option I can think of, is to use clones instead of snapshots
> in the future, just so that I can delete files in the clones in case I
> ever need to)

But to create a clone you'll need a snapshot so I think the problem will still be there...