Re: [zfs-discuss] This is the scrub that never ends...

2009-09-10 Thread Jonathan Edwards
On Sep 9, 2009, at 9:29 PM, Bill Sommerfeld wrote: On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote: Some hours later, here I am again: scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go Any suggestions? Let it run for another day. A pool on a build server I manage takes ab

Re: [zfs-discuss] Books on File Systems and File System Programming

2009-08-15 Thread Jonathan Edwards
On Aug 14, 2009, at 11:14 AM, Peter Schow wrote: On Thu, Aug 13, 2009 at 05:02:46PM -0600, Louis-Frédéric Feuillette wrote: I saw this question on another mailing list, and I too would like to know. And I have a couple questions of my own. == Paraphrased from other list == Does anyone have a

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Jonathan Edwards
On Jul 4, 2009, at 11:57 AM, Bob Friesenhahn wrote: This brings me to the absurd conclusion that the system must be rebooted immediately prior to each use. see Phil's later email .. an export/import of the pool or a remount of the filesystem should clear the page cache - with mmap'd files
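
A minimal sketch of the workaround described above, with placeholder pool and filesystem names:

  # cycling the pool drops cached pages, including mmap'd ones
  zpool export tank
  zpool import tank
  # or remount just the affected filesystem
  zfs umount tank/fs
  zfs mount tank/fs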

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Jonathan Edwards
On Jul 4, 2009, at 12:03 AM, Bob Friesenhahn wrote: % ./diskqual.sh c1t0d0 130 MB/sec c1t1d0 130 MB/sec c2t202400A0B83A8A0Bd31 13422 MB/sec c3t202500A0B83A8A0Bd31 13422 MB/sec c4t600A0B80003A8A0B096A47B4559Ed0 191 MB/sec c4t600A0B80003A8A0B096E47B456DAd0 192 MB/sec c4t600A0B80003A8A0B00

Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Jonathan Edwards
i've seen a problem where periodically a 'zfs mount -a' and sometimes a 'zpool import ' can create what appears to be a race condition on nested mounts .. that is .. let's say that i have: FS mountpoint pool/export pool/fs1
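
When this bites, the usual manual recovery looks something like the sketch below (paths are hypothetical; rmdir refuses to remove anything holding real data):

  zfs umount -a              # unmount the pool's filesystems
  ls /tank/home              # inspect what is blocking the mount
  rmdir /tank/home/leftover  # remove the stray empty mountpoint dir
  zfs mount -a               # remount top-down in the right order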

Re: [zfs-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jonathan Edwards
On Mar 6, 2009, at 8:58 AM, Andrew Gabriel wrote: Jim Dunham wrote: ZFS the filesystem is always on disk consistent, and ZFS does maintain filesystem consistency through coordination between the ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for SNDR, ZFS caches a lot o

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-22 Thread Jonathan Edwards
not quite .. it's 16KB at the front and 8MB at the back of the disk (16384 sectors) for the Solaris EFI label - so you need to zero out both of these. Of course, since these drives are <1TB, I find it's easier to format to SMI (vtoc) .. with format -e (choose SMI, label, save, validate - then choose EFI
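
The zeroing step, as a hedged sketch -- the device node (p0 = whole disk on x86; adjust for your platform) and the sector count are placeholders, and the seek offset must be computed from your disk's total size, so triple-check the target before running dd:

  # front of disk: the first 16KB = 32 x 512B sectors
  dd if=/dev/zero of=/dev/rdsk/c1t1d0p0 bs=512 count=32
  # back of disk: the last 16384 sectors (8MB); SECTORS is the disk's
  # total sector count (hypothetical value -- check format -> verify)
  SECTORS=1953525168
  dd if=/dev/zero of=/dev/rdsk/c1t1d0p0 bs=512 count=16384 \
      seek=$((SECTORS - 16384))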

Re: [zfs-discuss] Largest (in number of files) ZFS instance tested

2008-07-11 Thread Jonathan Edwards
On Jul 11, 2008, at 4:59 PM, Bob Friesenhahn wrote: >> >> Has anyone tested a ZFS file system with at least 100 million + >> files? >> What were the performance characteristics? > > I think that there are more issues with file fragmentation over a long > period of time than the sheer number of

Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Jonathan Edwards
On Apr 9, 2008, at 11:46 AM, Bob Friesenhahn wrote: > On Wed, 9 Apr 2008, Ross wrote: >> >> Well the first problem is that USB cables are directional, and you >> don't have the port you need on any standard motherboard. That > > Thanks for that info. I did not know that. > >> Adding iSCSI suppor

Re: [zfs-discuss] ZFS I/O algorithms

2008-03-20 Thread Jonathan Edwards
On Mar 20, 2008, at 2:00 PM, Bob Friesenhahn wrote: > On Thu, 20 Mar 2008, Jonathan Edwards wrote: >> >> in that case .. try fixing the ARC size .. the dynamic resizing on >> the ARC >> can be less than optimal IMHO > > Is a 16GB ARC size not considered
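
For reference, the era-appropriate way to pin the ARC was a pair of /etc/system tunables; the 16GB value below is only an example:

  * /etc/system: cap the ARC and keep it from shrinking (reboot required)
  set zfs:zfs_arc_max = 0x400000000
  set zfs:zfs_arc_min = 0x400000000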

Re: [zfs-discuss] ZFS I/O algorithms

2008-03-20 Thread Jonathan Edwards
On Mar 20, 2008, at 11:07 AM, Bob Friesenhahn wrote: > On Thu, 20 Mar 2008, Mario Goebbels wrote: > >>> Similarly, read block size does not make a >>> significant difference to the sequential read speed. >> >> Last time I did a simple bench using dd, supplying the record size as >> blocksize to it

Re: [zfs-discuss] zfs backups to tape

2008-03-16 Thread Jonathan Edwards
On Mar 14, 2008, at 3:28 PM, Bill Shannon wrote: > What's the best way to backup a zfs filesystem to tape, where the size > of the filesystem is larger than what can fit on a single tape? > ufsdump handles this quite nicely. Is there a similar backup program > for zfs? Or a general tape manageme
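
A minimal sketch of the zfs send route discussed in this thread (snapshot and tape device names are placeholders); note that, unlike ufsdump, the stream will not span multiple tapes by itself:

  zfs snapshot tank/home@bk1
  zfs send tank/home@bk1 | dd of=/dev/rmt/0n bs=1024k    # to tape
  dd if=/dev/rmt/0n bs=1024k | zfs receive tank/restore  # back from tape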

Re: [zfs-discuss] [dtrace-discuss] periodic ZFS disk accesses

2008-03-01 Thread Jonathan Edwards
On Mar 1, 2008, at 7:22 PM, Roch Bourbonnais wrote: > That's not entirely accurate. I believe ZFS does lead to > bdev_strategy being called and io:::start > will fire for ZFS I/Os. The problem is that a ZFS I/O can be > servicing a number of ZFS operations on a > number of different files (whi
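
A small DTrace sketch of Roch's point, assuming the standard io provider arguments -- io:::start does fire for ZFS, but the fileinfo argument usually can't name a file, because one physical I/O may serve many operations:

  dtrace -n 'io:::start {
      /* for ZFS, fi_pathname typically reads "<none>" */
      @[args[1]->dev_statname, args[2]->fi_pathname] = count();
  }'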

Re: [zfs-discuss] periodic ZFS disk accesses

2008-03-01 Thread Jonathan Edwards
On Mar 1, 2008, at 4:14 PM, Bill Shannon wrote: > Ok, that's much better! At least I'm getting output when I touch > files > on zfs. However, even though zpool iostat is reporting activity, the > above program isn't showing any file accesses when the system is idle. > > Any ideas? assuming th

Re: [zfs-discuss] periodic ZFS disk accesses

2008-03-01 Thread Jonathan Edwards
On Mar 1, 2008, at 3:41 AM, Bill Shannon wrote: > Running just plain "iosnoop" shows accesses to lots of files, but none > on my zfs disk. Using "iosnoop -d c1t1d0" or "iosnoop -m /export/ > home/shannon" > shows nothing at all. I tried /usr/demo/dtrace/iosnoop.d too, still > nothing. hi Bil

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-27 Thread Jonathan Edwards
On Feb 27, 2008, at 8:36 AM, Uwe Dippel wrote: > As much as ZFS is revolutionary, it is far away from being the > 'ultimate file system', if it doesn't know how to handle event- > driven snapshots (I don't like the word), backups, versioning. As > long as a high-level system utility needs to

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-29 Thread Jonathan Edwards
On Dec 29, 2007, at 2:33 AM, Jonathan Loran wrote: > Hey, here's an idea: We snapshot the file as it exists at the time of > the mv in the old file system until all referring file handles are > closed, then destroy the single file snap. I know, not easy to > implement, but that is the correct b

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Jonathan Edwards
On Dec 6, 2007, at 00:03, Anton B. Rang wrote: >> what are you terming as "ZFS' incremental risk reduction"? > > I'm not Bill, but I'll try to explain. > > Compare a system using ZFS to one using another file system -- say, > UFS, XFS, or ext3. > > Consider which situations may lead to data los

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Jonathan Edwards
apologies in advance for prolonging this thread .. i had considered taking this completely offline, but thought of a few people at least who might find this discussion somewhat interesting .. at the least i haven't seen any mention of Merkle trees yet as the nerd in me yearns for On Dec 5,

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Jonathan Edwards
On Dec 5, 2007, at 17:50, can you guess? wrote: >> my personal-professional data are important (this is >> my valuation, and it's an assumption you can't >> dispute). > > Nor was I attempting to: I was trying to get you to evaluate ZFS's > incremental risk reduction *quantitatively* (and if yo

Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-12 Thread Jonathan Edwards
On Nov 10, 2007, at 23:16, Carson Gaspar wrote: > Mattias Pantzare wrote: > >> As the fsid is created when the file system is created it will be the >> same when you mount it on a different NFS server. Why change it? >> >> Or are you trying to match two different file systems? Then you also >> ha

Re: [zfs-discuss] Count objects/inodes

2007-11-10 Thread Jonathan Edwards
Hey Bill: what's an object here? or do we have a mapping between "objects" and block pointers? for example a zdb -bb might show: th37 # zdb -bb rz-7 Traversing all blocks to verify nothing leaked ... No leaks (block sum matches space maps exactly) bp count: 47

Re: [zfs-discuss] Distribued ZFS

2007-10-21 Thread Jonathan Edwards
On Oct 20, 2007, at 20:23, Vincent Fox wrote: > To my mind ZFS has a serious deficiency for JBOD usage in a high- > availability clustered environment. > > Namely, inability to tie spare drives to a particular storage group. > > Example in clustering HA setups you would want 2 SAS JBOD >

Re: [zfs-discuss] df command in ZFS?

2007-10-18 Thread Jonathan Edwards
On Oct 18, 2007, at 13:26, Richard Elling wrote: > > Yes. It is true that ZFS redefines the meaning of available space. > But > most people like compression, snapshots, clones, and the pooling > concept. > It may just be that you want zfs list instead, df is old-school :-) exactly - i'm not
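
The two views side by side, with a placeholder pool name:

  df -h /tank    # block-device accounting; blind to pooling and snapshots
  zfs list -r -o name,used,avail,refer,mountpoint tank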

Re: [zfs-discuss] df command in ZFS?

2007-10-18 Thread Jonathan Edwards
On Oct 18, 2007, at 11:57, Richard Elling wrote: > David Runyon wrote: >> I was presenting to a customer at the EBC yesterday, and one of the >> people at the meeting said using df in ZFS really drives him crazy >> (no, >> that's all the detail I have). Any ideas/suggestions? > > Filter it. T

Re: [zfs-discuss] Sun 6120 array again

2007-10-01 Thread Jonathan Edwards
SCSI based, but solid and cheap enclosures if you don't care about support: http://search.ebay.com/search/search.dll?satitle=Sun+D1000 On Oct 1, 2007, at 12:15, Andy Lubel wrote: > I gave up. > > The 6120 I just ended up not doing zfs. And for our 6130 since we > don't > have santricity or t

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards
On Sep 26, 2007, at 14:10, Torrey McMahon wrote: > You probably don't have to create a LUN the size of the NVRAM > either. As > long as its dedicated to one LUN then it should be pretty quick. The > 3510 cache, last I checked, doesn't do any per LUN segmentation or > sizing. Its a simple front

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards
On Sep 25, 2007, at 19:57, Bryan Cantrill wrote: > > On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote: >> It seems like ZIL is a separate issue. > > It is very much the issue: the separate log device work was done > exactly > to make better use of this kind of non-volatile memory.

Re: [zfs-discuss] The ZFS-Man.

2007-09-21 Thread Jonathan Edwards
On Sep 21, 2007, at 14:57, eric kustarz wrote: >> Hi. >> >> I gave a talk about ZFS during EuroBSDCon 2007, and because it won >> the >> the best talk award and some find it funny, here it is: >> >> http://youtube.com/watch?v=o3TGM0T1CvE >> >> a bit better version is here: >> >> http:

Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-06 Thread Jonathan Edwards
On Sep 6, 2007, at 14:48, Nicolas Williams wrote: >> Exactly the article's point -- rulings have consequences outside of >> the >> original case. The intent may have been to store logs for web server >> access (a logical and prudent request) but the ruling states that >> RAM albeit >> working m

Re: [zfs-discuss] Samba with ZFS ACL

2007-09-04 Thread Jonathan Edwards
On Sep 4, 2007, at 12:09, MC wrote: > For everyone else: > > http://blogs.sun.com/timthomas/entry/ > samba_and_swat_in_solaris#comments > > "It looks like nevada 70b will be the next Solaris Express > Developer Edition (SXDE) which should also drop shortly and should > also have the ZFS ACL

Re: [zfs-discuss] ZFS raid is very slow???

2007-07-07 Thread Jonathan Edwards
On Jul 7, 2007, at 06:14, Orvar Korvar wrote: When I copy that file from ZFS to /dev/null I get this output: real 0m0.025s user 0m0.002s sys 0m0.007s which can't be correct. Is it wrong of me to use "time cp fil fil2" when measuring disk performance? well you're reading and writin
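
A hedged way to take the cache out of the picture when timing reads (pool and file names are placeholders):

  # cycle the pool to empty the ARC, then time a full read of the file
  zpool export tank && zpool import tank
  ptime dd if=/tank/fil of=/dev/null bs=128k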

Re: [zfs-discuss] Re: shareiscsi is cool, but what about sharefc or sharescsi?

2007-06-01 Thread Jonathan Edwards
On Jun 1, 2007, at 18:37, Richard L. Hamilton wrote: Can one use a spare SCSI or FC controller as if it were a target? we'd need an FC or SCSI target mode driver in Solaris .. let's just say we used to have one, and leave it mysteriously there. smart idea though! --- .je

Re: [zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Jonathan Edwards
On May 15, 2007, at 13:13, Jürgen Keil wrote: Would you mind also doing: ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1 to see the raw performance of underlying hardware. This dd command is reading from the block device, which might cache data and probably splits requests into
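
Jürgen's distinction, as a sketch -- the character (raw) device bypasses the block-device cache and honors the requested transfer size (device name and count are placeholders):

  ptime dd if=/dev/dsk/c2t1d0s0 of=/dev/null bs=128k count=10000   # buffered
  ptime dd if=/dev/rdsk/c2t1d0s0 of=/dev/null bs=128k count=10000  # raw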

Re: [zfs-discuss] Issue with adding existing EFI disks to a zpool

2007-05-05 Thread Jonathan Edwards
On May 5, 2007, at 09:34, Mario Goebbels wrote: I spent all day yesterday moving my data off one of the Windows disks, so that I can add it to the pool. Using mount-ntfs, it's a pain due to its slowness. But once I finished, I thought "Cool, let's do it". So I added the disk using the zero

Re: [zfs-discuss] 6410 expansion shelf

2007-03-27 Thread Jonathan Edwards
right on for optimizing throughput on solaris .. a couple of notes though (also mentioned in the QFS manuals): - on x86/x64 you're just going to have an sd.conf so just increase the max_xfer_size for all with a line at the bottom like: sd_max_xfer_size=0x80; (note: if you look at
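
The sd.conf line is cut off in the archive; the general shape is below, with an illustrative 8MB value that is not recovered from the original post:

  # /kernel/drv/sd.conf -- applies to all sd instances; reboot required
  sd_max_xfer_size=0x800000;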

Re: [zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Jonathan Edwards
On Feb 20, 2007, at 15:05, Krister Johansen wrote: what's the minimum allocation size for a file in zfs? I get 1024B by my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/ znode allocation) since we never pack file data in the inode/znode. Is this a problem? Only if you're t

Re: [zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Jonathan Edwards
Roch: what's the minimum allocation size for a file in zfs? I get 1024B by my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/ znode allocation) since we never pack file data in the inode/znode. Is this a problem? Only if you're trying to pack a lot of small byte fil

Re: Re[2]: [zfs-discuss] se3510 and ZFS

2007-02-06 Thread Jonathan Edwards
On Feb 6, 2007, at 11:46, Robert Milkowski wrote: Does anybody know how to tell se3510 not to honor write cache flush commands? JE> I don't think you can .. DKIOCFLUSHWRITECACHE *should* tell the array JE> to flush the cache. Gauging from the number of calls that zfs makes to JE>
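
The host-side alternative, on builds that have the zfs_nocacheflush tunable: stop ZFS from issuing the flush at all. Only sane when the array cache is genuinely non-volatile (battery-backed):

  * /etc/system
  set zfs:zfs_nocacheflush = 1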

Re: [zfs-discuss] se3510 and ZFS

2007-02-06 Thread Jonathan Edwards
On Feb 6, 2007, at 06:55, Robert Milkowski wrote: Hello zfs-discuss, It looks like when zfs issues write cache flush commands, the se3510 actually honors them. I do not have a spare se3510 right now to be 100% sure but comparing nfs/zfs server with se3510 to another nfs/ufs server with se3510 w

Re: [zfs-discuss] Which label a ZFS/ZPOOL device has ? VTOC or EFI ?

2007-02-04 Thread Jonathan Edwards
On Feb 3, 2007, at 02:31, dudekula mastan wrote: After creating the ZFS file system on a VTOC labeled disk, I am seeing the following warning messages. Feb 3 07:47:00 scoobyb Corrupt label; wrong magic number Feb 3 07:47:00 scoobyb scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/[

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Jonathan Edwards
On Feb 2, 2007, at 15:35, Nicolas Williams wrote: Unlike traditional journalling replication, a continuous ZFS send/recv scheme could deal with resource constraints by taking a snapshot and throttling replication until resources become available again. Replication throttling would mean losing s
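
A minimal sketch of one such send/recv replication cycle (dataset, host, and snapshot names are placeholders):

  # ship the delta since the last cycle, then retire the old snapshot
  zfs snapshot tank/data@cycle2
  zfs send -i tank/data@cycle1 tank/data@cycle2 | \
      ssh standby zfs receive -F backup/data
  zfs destroy tank/data@cycle1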

Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-29 Thread Jonathan Edwards
On Jan 29, 2007, at 14:17, Jeffery Malloch wrote: Hi Guys, SO... From what I can tell from this thread ZFS is VERY fussy about managing writes, reads, and failures. It wants to be bit perfect. So if you use the hardware that comes with a given solution (in my case an Engenio 6994) to ma

Re: [zfs-discuss] ZFS or UFS - what to do?

2007-01-29 Thread Jonathan Edwards
On Jan 26, 2007, at 09:16, Jeffery Malloch wrote: Hi Folks, I am currently in the midst of setting up a completely new file server using a pretty well loaded Sun T2000 (8x1GHz, 16GB RAM) connected to an Engenio 6994 product (I work for LSI Logic so Engenio is a no brainer). I have config

Re: [zfs-discuss] multihosted ZFS

2007-01-26 Thread Jonathan Edwards
On Jan 26, 2007, at 13:52, Marion Hakanson wrote: [EMAIL PROTECTED] said: . . . realize that the pool is now in use by the other host. That leads to two systems using the same zpool which is not nice. Is there any solution to this problem, or do I have to get Sun Cluster 3.2 if I want to

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Jonathan Edwards
On Jan 25, 2007, at 17:30, Albert Chin wrote: On Thu, Jan 25, 2007 at 02:24:47PM -0600, Al Hopper wrote: On Thu, 25 Jan 2007, Bill Sommerfeld wrote: On Thu, 2007-01-25 at 10:16 -0500, Torrey McMahon wrote: So there's no way to treat a 6140 as JBOD? If you wanted to use a 6140 with ZFS, an

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Jonathan Edwards
On Jan 25, 2007, at 14:34, Bill Sommerfeld wrote: On Thu, 2007-01-25 at 10:16 -0500, Torrey McMahon wrote: So there's no way to treat a 6140 as JBOD? If you wanted to use a 6140 with ZFS, and really wanted JBOD, your only choice would be a RAID 0 config on the 6140? Why would you want to

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Jonathan Edwards
On Jan 25, 2007, at 10:16, Torrey McMahon wrote: Albert Chin wrote: On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote: On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill <[EMAIL PROTECTED]> wrote: On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote: Well, he did sa

Re: [zfs-discuss] Thumper Origins Q

2007-01-24 Thread Jonathan Edwards
On Jan 24, 2007, at 12:41, Bryan Cantrill wrote: well, "Thumper" is actually a reference to Bambi You'd have to ask Fowler, but certainly when he coined it, "Bambi" was the last thing on anyone's mind. I believe Fowler's intention was "one that thumps" (or, in the unique parlance of a

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread Jonathan Edwards
On Jan 24, 2007, at 06:54, Roch - PAE wrote: [EMAIL PROTECTED] writes: Note also that for most applications, the size of their IO operations would often not match the current page size of the buffer, causing additional performance and scalability issues. Thanks for mentioning this, I forgo

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Jonathan Edwards
On Jan 24, 2007, at 09:25, Peter Eriksson wrote: too much of our future roadmap, suffice it to say that one should expect much, much more from Sun in this vein: innovative software and innovative hardware working together to deliver world-beating systems with undeniable economics. Yes p

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread Jonathan Edwards
Roch I've been chewing on this for a little while and had some thoughts On Jan 15, 2007, at 12:02, Roch - PAE wrote: Jonathan Edwards writes: On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given files

Re: [zfs-discuss] Multiple Read one Writer Filesystem

2007-01-14 Thread Jonathan Edwards
On Jan 14, 2007, at 21:37, Wee Yeh Tan wrote: On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote: Mike Papper wrote: > > The alternative I am considering is to have a single filesystem > available to many clients using a SAN (iSCSI in this case). However > only one client would mount the ZFS

Re: [zfs-discuss] Solid State Drives?

2007-01-11 Thread Jonathan Edwards
On Jan 11, 2007, at 15:42, Erik Trimble wrote: On Thu, 2007-01-11 at 10:35 -0800, Richard Elling wrote: The product was called Sun Prestoserve. It was successful for benchmarking and such, but unsuccessful in the market because: + when there is a failure, your data is spread across

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-05 Thread Jonathan Edwards
On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem. Direct I/O as generally understood (i.e. not UFS-specific) is an optimization which allows data to be transferred directly between user data bu

Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-20 Thread Jonathan Edwards
On Dec 20, 2006, at 04:41, Darren J Moffat wrote: Bill Sommerfeld wrote: There also may be a reason to do this when confidentiality isn't required: as a sparse provisioning hack.. If you were to build a zfs pool out of compressed zvols backed by another pool, then it would be very convenient i
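
A sketch of the compressed-zvol provisioning hack (names and sizes are placeholders; nesting pools like this is a curiosity, not a recommendation):

  zfs create -V 100g outer/vol1
  zfs set compression=on outer/vol1
  zpool create inner /dev/zvol/dsk/outer/vol1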

Re: [zfs-discuss] Re: Re[2]: ZFS in a SAN environment

2006-12-20 Thread Jonathan Edwards
On Dec 20, 2006, at 00:37, Anton B. Rang wrote: "INFORMATION: If a member of this striped zpool becomes unavailable or develops corruption, Solaris will kernel panic and reboot to protect your data." OK, I'm puzzled. Am I the only one on this list who believes that a kernel panic, inste

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards
On Dec 19, 2006, at 10:15, Torrey McMahon wrote: Darren J Moffat wrote: Jonathan Edwards wrote: On Dec 19, 2006, at 07:17, Roch - PAE wrote: Shouldn't there be a big warning when configuring a pool with no redundancy and/or should that not require a -f flag ? why? what i

Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Jonathan Edwards
On Dec 18, 2006, at 11:54, Darren J Moffat wrote: [EMAIL PROTECTED] wrote: Rather than bleaching which doesn't always remove all stains, why can't we use a word like "erasing" (which is hitherto unused for filesystem use in Solaris, AFAIK) and this method doesn't remove all stains from t

Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Jonathan Edwards
On Dec 19, 2006, at 08:59, Darren J Moffat wrote: Darren Reed wrote: If/when ZFS supports this then it would be nice to also be able to have Solaris bleach swap on ZFS when it shuts down or reboots. Although it may be that this option needs to be put into how we manage swap space and not speci

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards
On Dec 19, 2006, at 07:17, Roch - PAE wrote: Shouldn't there be a big warning when configuring a pool with no redundancy and/or should that not require a -f flag ? why? what if the redundancy is below the pool .. should we warn that ZFS isn't directly involved in redundancy decisions? --- .

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards
On Dec 18, 2006, at 17:52, Richard Elling wrote: In general, the closer to the user you can make policy decisions, the better decisions you can make. The fact that we've had 10 years of RAID arrays acting like dumb block devices doesn't mean that will continue for the next 10 years :-) I

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-18 Thread Jonathan Edwards
On Dec 18, 2006, at 16:13, Torrey McMahon wrote: Al Hopper wrote: On Sun, 17 Dec 2006, Ricardo Correia wrote: On Friday 15 December 2006 20:02, Dave Burleson wrote: Does anyone have a document that describes ZFS in a pure SAN environment? What will and will not work? From some of the i

Re: [zfs-discuss] Vanity ZVOL paths?

2006-12-09 Thread Jonathan Edwards
On Dec 8, 2006, at 05:20, Jignesh K. Shah wrote: Hello ZFS Experts I have two ZFS pools zpool1 and zpool2 I am trying to create a bunch of zvols such that their paths are similar except for a consistent numbering scheme, without reference to the zpools to which they actually belong. (This will allow me to
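
Since zvol paths always embed the pool name, the usual workaround is a layer of symlinks (paths here are hypothetical):

  mkdir -p /dev/myvols
  ln -s /dev/zvol/dsk/zpool1/vol1 /dev/myvols/vol01
  ln -s /dev/zvol/dsk/zpool2/vol1 /dev/myvols/vol02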

Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-30 Thread Jonathan Edwards
rly on such devices, less-than-optimal performance might be the result.

Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-30 Thread Jonathan Edwards
On Nov 29, 2006, at 13:24, [EMAIL PROTECTED] wrote: I suspect a lack of an MBR could cause some BIOS implementations to barf .. Why? Zeroed disks don't have that issue either. you're right - I was thinking that a lack of an MBR with a GPT could be causing problems, but actually it loo

Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread Jonathan Edwards
On Nov 29, 2006, at 10:41, [EMAIL PROTECTED] wrote: This is a problem since how can anyone use ZFS on a PC??? My motherboard is a newly minted AM2 w/ all the latest firmware. I disabled boot detection on the sata channels and it still refuses to boot. I had to purchase an external SATA e

Re: [zfs-discuss] Re: ZFS ACLs and Samba

2006-10-25 Thread Jonathan Edwards
On Oct 25, 2006, at 15:38, Roger Ripley wrote: IBM has contributed code for NFSv4 ACLs under AIX's JFS; hopefully Sun will not tarry in following their lead for ZFS. http://lists.samba.org/archive/samba-cvs/2006-September/070855.html I thought this was still in draft: http://ietf.org/inter

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Jonathan Edwards
there are two approaches: 1) RAID 1+Z, where you mirror the individual drives across trays and then RAID-Z the whole thing 2) RAID Z+1, where you RAID-Z each tray and then mirror them I would argue that you can lose the most drives in configuration 1 and stay alive: With a simple mirrored stripe

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Jonathan Edwards
On Oct 24, 2006, at 12:26, Dale Ghent wrote: On Oct 24, 2006, at 12:33 PM, Frank Cusack wrote: On October 24, 2006 9:19:07 AM -0700 "Anton B. Rang" <[EMAIL PROTECTED]> wrote: Our thinking is that if you want more redundancy than RAID-Z, you should use RAID-Z with double parity, which provi

Re: [zfs-discuss] Mirrored Raidz

2006-10-24 Thread Jonathan Edwards
On Oct 24, 2006, at 04:19, Roch wrote: Michel Kintz writes: Matthew Ahrens wrote: Richard Elling - PAE wrote: Anthony Miller wrote: Hi, I've searched the forums and not found any answer to the following. I have 2 JBOD arrays, each with 4 disks. I want to create a raidz on one

Re: [zfs-discuss] Re: [osol-discuss] Cloning a disk w/ ZFS in it

2006-10-22 Thread Jonathan Edwards
you don't really need to do the prtvtoc and fmthard with the old Sun labels if you start at cylinder 0 since you're doing a bit -> bit copy with dd .. but, keep in mind: - The Sun VTOC is the first 512B and s2 *typically* should start at cylinder 0 (unless it's been redefined .. check!) - T
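
Both routes side by side, with placeholder device names -- verify that s2 really spans the whole disk on both ends before copying:

  # bit-for-bit clone, label included, when s2 starts at cylinder 0
  dd if=/dev/rdsk/c0t0d0s2 of=/dev/rdsk/c0t1d0s2 bs=1024k
  # the explicit route: replicate the VTOC first, then copy per slice
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2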

Re: [zfs-discuss] ZFS Usability issue : improve means of finding ZFS<->physdevice(s) mapping

2006-10-16 Thread Jonathan Edwards
On Oct 16, 2006, at 07:39, Darren J Moffat wrote: Noel Dellofano wrote: I don't understand why you can't use 'zpool status'? That will show the pools and the physical devices in each and is also a pretty basic command. Examples are given in the sysadmin docs and manpages for ZFS on the

Re: [zfs-discuss] A versioning FS

2006-10-09 Thread Jonathan Edwards
On Oct 8, 2006, at 23:54, Nicolas Williams wrote: On Sun, Oct 08, 2006 at 11:16:21PM -0400, Jonathan Edwards wrote: On Oct 8, 2006, at 22:46, Nicolas Williams wrote: You're arguing for treating FV as extended/named attributes :) kind of - but one of the problems with EAs is the inc

Re: [zfs-discuss] A versioning FS

2006-10-08 Thread Jonathan Edwards
On Oct 8, 2006, at 22:46, Nicolas Williams wrote: On Sun, Oct 08, 2006 at 10:28:06PM -0400, Jonathan Edwards wrote: On Oct 8, 2006, at 21:40, Wee Yeh Tan wrote: On 10/7/06, Ben Gollmer <[EMAIL PROTECTED]> wrote: Hmm, what about file.txt -> ._file.txt.1, ._file.txt.2, etc? If you d

Re: [zfs-discuss] A versioning FS

2006-10-08 Thread Jonathan Edwards
On Oct 8, 2006, at 21:40, Wee Yeh Tan wrote: On 10/7/06, Ben Gollmer <[EMAIL PROTECTED]> wrote: On Oct 6, 2006, at 6:15 PM, Nicolas Williams wrote: > What I'm saying is that I'd like to be able to keep multiple > versions of > my files without "echo *" or "ls" showing them to me by default. H

Re: [zfs-discuss] Re: A versioning FS

2006-10-06 Thread Jonathan Edwards
On Oct 6, 2006, at 23:42, Anton B. Rang wrote: I don't agree that version control systems solve the same problem as file versioning. I don't want to check *every change* that I make into version control -- it makes the history unwieldy. At the same time, if I make a change that turns out to work rea

Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Jonathan Edwards
On Oct 6, 2006, at 21:17, Joseph Mocker wrote: Nicolas Williams wrote: On Fri, Oct 06, 2006 at 03:30:20PM -0600, Chad Leigh -- Shire.Net LLC wrote: On Oct 6, 2006, at 3:08 PM, Erik Trimble wrote: OK. So, now we're on to FV. As Nico pointed out, FV is going to need a new API. Using t

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Jonathan Edwards
On Sep 18, 2006, at 23:16, Eric Schrock wrote: Here's an example: I've three LUNs in a ZFS pool offered from my HW raid array. I take a snapshot onto three other LUNs. A day later I turn the host off. I go to the array and offer all six LUNs, the pool that was in use as well as the snapsh

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Jonathan Edwards
On Sep 18, 2006, at 14:41, Eric Schrock wrote: 2 - If you import LUNs with the same label or ID as a currently mounted pool then ZFS will ... no one seems to know. For example: I have a pool on two LUNs X and Y called mypool. I take a snapshot of LUN X & Y, ignoring issue #1 above for no
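
When two candidate pools merely share a name, the numeric pool ID printed by a bare zpool import disambiguates them (the ID below is invented); whether that is safe when a hardware snapshot leaves two pools with identical GUIDs is exactly the open question in this thread:

  zpool import                                  # lists candidates with ids
  zpool import 6789012345678901234 mypool_snap  # import one by id, renamed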

Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320 - offtopic

2006-09-08 Thread Jonathan Edwards
On Sep 8, 2006, at 14:22, Ed Gould wrote: On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote: I was looking for a new AM2 socket motherboard a few weeks ago. All of the ones I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All were less than $150. In other words, the days of ha

Re: Re[2]: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-05 Thread Jonathan Edwards
On Sep 5, 2006, at 06:45, Robert Milkowski wrote: Hello Wee, Tuesday, September 5, 2006, 10:58:32 AM, you wrote: WYT> On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote: This is simply not true. ZFS would protect against the same type of errors seen on an individual drive as it would on a pool made of

Re: [zfs-discuss] Re: Best Practices for StorEdge 3510 Array and ZFS

2006-08-02 Thread Jonathan Edwards
On Aug 2, 2006, at 17:03, prasad wrote: Torrey McMahon <[EMAIL PROTECTED]> wrote: Are any other hosts using the array? Do you plan on carving LUNs out of the RAID5 LD and assigning them to other hosts? There are no other hosts using the array. We need all the available space (2.45TB) on

Re: [zfs-discuss] 3510 JBOD ZFS vs 3510 HW RAID

2006-08-02 Thread Jonathan Edwards
On Aug 1, 2006, at 22:23, Luke Lonergan wrote: Torrey, On 8/1/06 10:30 AM, "Torrey McMahon" <[EMAIL PROTECTED]> wrote: http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml Look at the specs page. I did. This is 8 trays, each with 14 disks and two active Fibre channel attac

Re: [zfs-discuss] 3510 JBOD ZFS vs 3510 HW RAID

2006-08-01 Thread Jonathan Edwards
On Aug 1, 2006, at 14:18, Torrey McMahon wrote: (I hate when I hit the Send button when trying to change windows) Eric Schrock wrote: On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote: The correct comparison is done when all the factors are taken into account. Making blank

Re: [zfs-discuss] ZFS vs. Apple XRaid

2006-08-01 Thread Jonathan Edwards
On Aug 1, 2006, at 03:43, [EMAIL PROTECTED] wrote: So what does this exercise leave me thinking? Is Linux 2.4.x really screwed up in NFS-land? This Solaris NFS replaces a Linux-based NFS server that the clients (linux and IRIX) liked just fine. Yes; the Linux NFS server and client work tog

Re: [zfs-discuss] zfs vs. vxfs

2006-07-31 Thread Jonathan Edwards
On Jul 30, 2006, at 23:44, Malahat Qureshi wrote: Does anyone have a comparison between zfs and vxfs? I'm working on a presentation for my management on this --- That can be a tough question to answer depending on what you're looking for .. you could take the feature comparison approach like

Re: [zfs-discuss] Re: ZFS questions (hybrid HDs)

2006-07-28 Thread Jonathan Edwards
On Jun 21, 2006, at 11:05, Anton B. Rang wrote: My guess from reading between the lines of the Samsung/Microsoft press release is that there is a mechanism for the operating system to "pin" particular blocks into the cache (e.g. to speed boot) and the rest of the cache is used for write

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 18:25, Erik Trimble wrote: On Wed, 2006-06-28 at 14:55 -0700, Jeff Bonwick wrote: Which is better - zfs raidz on hardware mirrors, or zfs mirror on hardware raid-5? The latter. With a mirror of RAID-5 arrays, you get: (1) Self-healing data. (2) Tolerance of whole-array failure. (3)

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 17:25, Erik Trimble wrote: On Wed, 2006-06-28 at 13:24 -0400, Jonathan Edwards wrote: On Jun 28, 2006, at 12:32, Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror/ RAID5: capacity = (N /

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Jonathan Edwards
On Jun 28, 2006, at 12:32, Erik Trimble wrote: The main reason I don't see ZFS mirror / HW RAID5 as useful is this: ZFS mirror / RAID5: capacity = (N/2) - 1; speed << (N/2) - 1; minimum # disks to lose before loss of data:

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Jonathan Edwards
-Does ZFS in the current version support LUN extension? With UFS, we have to zero the VTOC, and then adjust the new disk geometry. What does it look like with ZFS? The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label may need to be zero'd and reapplied if you set up the initial

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Jonathan Edwards
On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering wrote: Naively I'd think a write_cache should not help a throughput test, since the cache should fill up, after which you should still be throttled by the physical drain rate. You clearly show that it helps; does anyone know why