[zfs-discuss] space_map.c 'ss == NULL' panic strikes back.

2007-11-14 Thread Pawel Jakub Dawidek
Hi. Someone recently reported an 'ss == NULL' panic in space_map.c/space_map_add() on FreeBSD's version of ZFS. I found that this problem was previously reported on Solaris and is already fixed. I verified it, and FreeBSD's version has this fix in place... http://src.opensolaris.org/sou

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-14 Thread can you guess?
> This question triggered some silly questions in my > mind: Actually, they're not silly at all. > > Lots of folks are determined that the whole COW to > different locations > are a Bad Thing(tm), and in some cases, I guess it > might actually be... > > What if ZFS had a pool / filesystem prop

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
> some business do not accept any kind of risk Businesses *always* accept risk: they just try to minimize it within the constraints of being cost-effective. Which is a good thing for ZFS, because it can't eliminate risk either, just help to minimize it cost-effectively. However, the subject h

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
... > >> And how about FAULTS? > >> hw/firmware/cable/controller/ram/... > > > > If you had read either the CERN study or what I > already said about > > it, you would have realized that it included the > effects of such > > faults. > > > ...and ZFS is the only prophylactic available. You d

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
> Darrell My apologies, Darren. - bill This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
... I > was running the card in RAID0 and getting random > corrupted bytes on > reads that went away when I switched to JBOD. Then it kind of sounds like a card problem rather than a cable problem. Perhaps there's a very basic definition issue here: when I use the term 'consumer', I'm referrin

[zfs-discuss] iSCSI on ZFS with Linux initiator

2007-11-14 Thread Mertol Ozyoney
Hi; does anyone have experience with iSCSI target volumes on ZFS accessed by Linux clients (Red Hat, SUSE)? Regards, Mertol Ozyoney Storage Practice - Sales Manager Sun Microsystems, TR Istanbul TR Phone +902123352200
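Not a Linux-client data point, but for anyone trying this setup, the Solaris target side can be sketched roughly as follows. Pool, volume, and IP names are illustrative, and the shareiscsi property assumes a recent Nevada/Solaris Express build:

```shell
# On the Solaris target: create a 10 GB zvol and export it over iSCSI
# (pool/volume names are illustrative).
zfs create -V 10g tank/iscsi-vol
zfs set shareiscsi=on tank/iscsi-vol
iscsitadm list target        # confirm the target was created

# On a Linux initiator running open-iscsi (target IP is illustrative):
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node --login
```

After login, the LUN should appear as an ordinary SCSI disk (e.g. /dev/sdb) on the Linux side.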

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread David Dyer-Bennet
can you guess? wrote: >> at the moment only ZFS can give this assurance, plus >> the ability to >> self correct detected >> errors. >> > > You clearly aren't very familiar with WAFL (which can do the same). > > That's quite possibly a factor. I'm pretty thoroughly unfamiliar with WAFL my

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread David Dyer-Bennet
can you guess? wrote: > ... > I > >> was running the card in RAID0 and getting random >> corrupted bytes on >> reads that went away when I switched to JBOD. >> > > Then it kind of sounds like a card problem rather than a cable problem. > > Perhaps there's a very basic definition issue here

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
> can you guess? wrote: > > >> at the moment only ZFS can give this assurance, > plus > >> the ability to > >> self correct detected > >> errors. > >> > > > > You clearly aren't very familiar with WAFL (which > can do the same). > > > > ... so far as I can tell it's quite > irrelevant t

Re: [zfs-discuss] Filesystem Benchmark

2007-11-14 Thread Gary Wright
Hi Cesare, Hope you don't mind me asking, but we are planning to use a CX3-20 Dell/EMC SAN connected to a T5220 server (Solaris 10). Can you tell me if you were forced to use PowerPath, or have you used MPxIO/Traffic Manager? Did you use LPe11000-E (Single Channel) or LPe11002-E (dual channel) HBA

[zfs-discuss] Is ZFS stable in OpenSolaris?

2007-11-14 Thread hex.cookie
In a production environment, which platform should we use: Solaris 10 U4 or OpenSolaris build 70+? How should we judge whether a build is stable enough for production? Or is OpenSolaris stable as of some particular build?

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread Wade . Stuart
> > On 9-Nov-07, at 2:45 AM, can you guess? wrote: > > >>> Au contraire: I estimate its worth quite > >> accurately from the undetected error rates reported > >> in the CERN "Data Integrity" paper published last > >> April (first hit if you Google 'cern "data > >> integrity"'). > >>> > While

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-14 Thread Richard Elling
can you guess? wrote: >> For very read intensive and position sensitive >> applications, I guess >> this sort of capability might make a difference? > > No question about it. And sequential table scans in databases > are among the most significant examples, because (unlike things > like stream

[zfs-discuss] How to create ZFS pool ?

2007-11-14 Thread Boris Derzhavets
I was able to create a second Solaris partition by running #fdisk /dev/rdsk/c1t0d0p0 First was NTFS (40GB), second was the SNV76 installation (40 GB), third has been created by me. Rebooted the system. Double-checked with fdisk that the partition exists. My intent is to run: # zpool create pool c1t0d0 Cannot

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-14 Thread Tim Spriggs
Hi Boris, When you create a Solaris2 Partition under x86, Solaris "sees" the partition as a disk that you can cut into slices. You can find a list of disks available via the "format" command. A slice is much like a partition but there is a difference; that's most or all you really need to know

Re: [zfs-discuss] Filesystem Benchmark

2007-11-14 Thread Peter Tribble
On 11/14/07, Gary Wright <[EMAIL PROTECTED]> wrote: > > Hope you don't mind me asking but we are planning to use a CX3-20 Dell/EMC > SAN connected to a T5220 server (Solaris 10). Can you tell me > if you were forced to use PowerPath or have you used MPXIO/Traffic Manager. > Did you use LPe11000-E
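For what it's worth, on Solaris 10 the bundled multipathing (MPxIO/Traffic Manager) can be enabled without PowerPath. A hedged sketch of the usual steps; whether the array pair is supported depends on the HBA and the EMC configuration:

```shell
# Enable MPxIO on supported fibre-channel ports and update
# /etc/vfstab device paths; this requires a reboot.
stmsboot -e

# After the reboot, list the mapping from old device names to the
# new multipathed scsi_vhci names to confirm it took effect.
stmsboot -L
```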

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-14 Thread A Darren Dunham
On Wed, Nov 14, 2007 at 09:40:59AM -0800, Boris Derzhavets wrote: > I was able to create second Solaris partition by running > > #fdisk /dev/rdsk/c1t0d0p0 I'm afraid that won't do you much good. Solaris only works with one "Solaris" partition at a time (on any one disk). If you have free space

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-14 Thread Mike Dotson
On Wed, 2007-11-14 at 21:23 +, A Darren Dunham wrote: > On Wed, Nov 14, 2007 at 09:40:59AM -0800, Boris Derzhavets wrote: > > I was able to create second Solaris partition by running > > > > #fdisk /dev/rdsk/c1t0d0p0 > > I'm afraid that won't do you much good. > > Solaris only works with on
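Putting the replies in this thread together: on x86, an additional fdisk partition shows up as a p-device, and since Solaris only uses slices inside one Solaris partition per disk, zpool has to be pointed at that p-device rather than at the whole disk. A minimal sketch, assuming the new partition is the third one on the disk (all device names are illustrative):

```shell
# List the disks Solaris can see; the disk should appear as c1t0d0.
format < /dev/null

# The whole disk (c1t0d0) is occupied by the NTFS and existing
# Solaris partitions, so create the pool on the p-device that
# corresponds to the new (third) fdisk partition.
zpool create tank c1t0d0p3

# Verify the pool came up healthy.
zpool status tank
```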

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread Toby Thain
On 13-Nov-07, at 9:18 PM, A Darren Dunham wrote: > On Tue, Nov 13, 2007 at 07:33:20PM -0200, Toby Thain wrote: > Yup - that's exactly the kind of error that ZFS and WAFL do a > perhaps uniquely good job of catching. WAFL can't catch all: It's distantly isolated from th

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread Toby Thain
On 14-Nov-07, at 12:43 AM, Jason J. W. Williams wrote: > Hi Darren, > >> Ah, your "CPU end" was referring to the NFS client cpu, not the >> storage >> device CPU. That wasn't clear to me. The same limitations would >> apply >> to ZFS (or any other filesystem) when running in support of an N

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread Toby Thain
On 14-Nov-07, at 7:06 AM, can you guess? wrote: > ... > And how about FAULTS? hw/firmware/cable/controller/ram/... >>> >>> If you had read either the CERN study or what I >> already said about >>> it, you would have realized that it included the >> effects of such >>> faults. >> >> >> .

Re: [zfs-discuss] ZFS + DB + default blocksize

2007-11-14 Thread Jesus Cea
Louwtjie Burger wrote: > On 11/8/07, Richard Elling <[EMAIL PROTECTED]> wrote: >> Potentially, depending on the write part of the workload, the system may >> read >> 128 kBytes to get a 16 kByte block. This is not efficient and may be >> noticeable >>
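The mismatch described above (reading a 128 kB record to service a 16 kB database block) is usually addressed by setting the dataset's recordsize to the database block size before loading the data. A short example; the dataset name is illustrative:

```shell
# Match the ZFS recordsize to a 16 kB database block size.
# Note: this only affects files written after the change, so set it
# before loading the database files.
zfs set recordsize=16k tank/db
zfs get recordsize tank/db
```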

Re: [zfs-discuss] Missing zpool devices, what are the options

2007-11-14 Thread David Bustos
Quoth Mark Ashley on Mon, Nov 12, 2007 at 11:35:57AM +1100: > Is it possible to tell ZFS to forget those SE6140 LUNs ever belonged to the > zpool? I know that ZFS will have probably put some user data on them, but if > there is a possibility of recovering any of those zvols on the zpool > it'd rea

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-14 Thread Richard Elling
can you guess? wrote: >> can you guess? wrote: >> For very read intensive and position sensitive applications, I guess this sort of capability might make a difference? >>> No question about it. And sequential table scans >>> >> in databases >> >>> ar

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-14 Thread can you guess?
> Nathan Kroenert wrote: ... What if it did a double update: One to a > staged area, and another > > immediately after that to the 'old' data blocks. > Still always have > > on-disk consistency etc, at a cost of double the > I/O's... > > This is a non-starter. Two I/Os is worse than one. We

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
... > The problem it seems to me with criticizing ZFS as > not much different > than WAFL, is that WAFL is really a networked storage > backend, not a > server operating system FS. If all you're using ZFS > for is backending > networked storage, the "not much different" criticism > holds a fair >

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
> > On 14-Nov-07, at 7:06 AM, can you guess? wrote: > > > ... > > > And how about FAULTS? > hw/firmware/cable/controller/ram/... > >>> > >>> If you had read either the CERN study or what I > >> already said about > >>> it, you would have realized that it included the > >> effects of suc

Re: [zfs-discuss] Yager on ZFS

2007-11-14 Thread can you guess?
... > > >> Well single bit error rates may be rare in > normal > > >> operation hard > > >> drives, but from a systems perspective, data can > be > > >> corrupted anywhere > > >> between disk and CPU. > > > > > > The CERN study found that such errors (if they > found any at all, > > > which they c

[zfs-discuss] internal error: Bad file number

2007-11-14 Thread Manoj Nayak
Hi, I am getting the following error message when I run any zfs command. I have attached the script I use to create the ramdisk image for Thumper. # zfs volinit internal error: Bad file number Abort - core dumped # zpool status internal error: Bad file number Abort - core dumped # # zfs list internal e

Re: [zfs-discuss] internal error: Bad file number

2007-11-14 Thread Manoj Nayak
Hi, I am using s10u3 on an x64 AMD Opteron Thumper. Thanks, Manoj Nayak Manoj Nayak wrote: > Hi , > > I am getting following error message when I run any zfs command.I have > attach the script I use to create ramdisk image for Thumper. > > # zfs volinit > internal error: Bad file number > Abort -
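When every zfs/zpool command aborts this way inside a hand-built ramdisk image, one common cause (an assumption here, not confirmed in the thread) is that the image is missing the /dev/zfs control device or the zfs kernel module, so libzfs fails to open the device and the tools dump core. A hedged check from inside the booted image:

```shell
# Confirm the ZFS control device node made it into the ramdisk image.
ls -lL /dev/zfs

# Confirm the zfs kernel module is actually loaded.
modinfo | grep -w zfs

# If /dev/zfs is missing, rebuilding the device links for the
# zfs driver may help.
devfsadm -i zfs
```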

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-14 Thread can you guess?
... But every block so rearranged > (and every tree ancestor of each such block) would > then leave an equal-sized residue in the most recent > snapshot if one existed, which gets expensive fast in > terms of snapshot space overhead (which then is > proportional to the amount of reorganization >