[zfs-discuss] oddity of slow zfs destroy

2012-06-25 Thread Philip Brown
I ran into something odd today: zfs destroy -r random/filesystem is mind-bogglingly slow. But it seems to me it shouldn't be. It's slow because the filesystem has two snapshots on it. Presumably it's busy "rolling back" the snapshots, but I've already declared on my command line that I DON'T CARE...
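
For reference, a minimal sketch of the scenario being described (the pool, filesystem, and snapshot names here are illustrative, not from the original post):

    # a filesystem carrying two snapshots
    zfs create random/filesystem
    zfs snapshot random/filesystem@snap1
    zfs snapshot random/filesystem@snap2
    # -r destroys the snapshots along with the filesystem,
    # yet this is the command the poster found surprisingly slow
    zfs destroy -r random/filesystem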

Re: [zfs-discuss] checking/fixing busy locks for zfs send/receive

2012-03-16 Thread Philip Brown
On Fri, Mar 16, 2012 at 3:06 PM, Brandon High wrote: > On Fri, Mar 16, 2012 at 2:35 PM, Philip Brown wrote: >> If there isn't a process visible doing this via ps, I'm wondering how one might check if a zfs filesystem or snapshot is rendered "busy" in this...

[zfs-discuss] checking/fixing busy locks for zfs send/receive

2012-03-16 Thread Philip Brown
It was suggested to me by Ian Collins that doing zfs sends and receives can render a filesystem "busy". If there isn't a process visible doing this via ps, I'm wondering how one might check whether a zfs filesystem or snapshot is rendered "busy" in this way, interfering with an unmount or destroy? I...
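
Two things worth checking in this situation are snapshot holds, which send/receive tooling can place and which block a destroy, and processes keeping the mountpoint busy (a sketch; the dataset, tag, and mountpoint names are hypothetical):

    # list any user holds pinning the snapshot
    zfs holds tank/fs@snap
    # release a hold by its tag so the destroy can proceed
    zfs release keep tank/fs@snap
    # check for processes holding the mountpoint busy
    fuser -c /tank/fs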

[zfs-discuss] zrep initial release (replication with failover)

2012-03-12 Thread Philip Brown
I'm happy to announce the first release of zrep (v0.1) http://www.bolthole.com/solaris/zrep/ This is a self-contained "single executable" tool to implement synchronization *and* failover of an active/passive zfs filesystem pair. No configuration files needed: configuration is stored in the zfs filesystem...
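
A sketch of the workflow the announcement describes, based on the zrep documentation at the URL above (the exact subcommand syntax may have changed since v0.1, and the dataset/host names are illustrative):

    # one-time setup: pair a local filesystem with a remote copy
    zrep init tank/data remotehost tank/data
    # routine incremental synchronization, active -> passive
    zrep sync tank/data
    # swap roles: the passive side becomes the writable master
    zrep failover tank/data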

[zfs-discuss] RFC for new zfs replication tool

2012-02-22 Thread Philip Brown
Please note: this is a "cross posting" of sorts, from a post I made: http://groups.google.com/group/comp.unix.solaris/browse_thread/thread/a8bd4aab3918b7a0/528dacb05c970748 It was suggested that I mention it here, so I am doing so. For convenience, here is mostly a duplicate of what I posted, with...

[zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-25 Thread Philip Brown
So, another hardware question :) ZFS has been touted as taking maximal advantage of disk hardware, to the point where it can be used efficiently and cost-effectively on JBODs, rather than having to throw more expensive RAID arrays at it. The only trouble is... JBODs seem to have disappeared :( Sun/Oracle...

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Philip Brown
> On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon wrote: > ZFS's ability to handle "short-term" interruptions depends heavily on the underlying device driver. If the device driver reports the device as "dead/missing/etc" at any point, then ZFS is going to require a "zpool replace"...

[zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Philip Brown
Sorry if this is well known... I tried a bunch of googles but didn't get anywhere useful. The closest I came was http://mail.opensolaris.org/pipermail/zfs-discuss/2009-April/028090.html but that doesn't answer my question, below, regarding zfs mirror recovery. Details of our needs follow. We norm...
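
For reference, the usual commands involved when one half of a mirror comes back after a temporary outage (a sketch; the pool and device names are illustrative):

    # check which side of the mirror is degraded
    zpool status tank
    # tell ZFS the device is available again, triggering a resilver
    zpool online tank c1t1d0
    # clear the error counters once the resilver completes
    zpool clear tank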

Re: [zfs-discuss] ZFS on emcpower0a and labels

2011-01-10 Thread Philip
and your disk will be a "SUN" disk again. Grtz, Philip.

Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-20 Thread Philip Brown
Quote: cindys "3. Boot failure from a previous BE if either #1 or #2 failure occurs." #1 or #2 were not relevant in my case. I just found I could not boot into the old u7 BE. I am happy with the workaround shinsui points out, so this is purely for your information. Quote: renil82 "U7 did not encount...

Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-17 Thread Philip Brown
Same problem here on a Sun x2100 amd64. I started with a core installation of u7 with only the patches applied as outlined in the live upgrade doco 206844 ( http://sunsolve.sun.com/search/document.do?assetkey=1-61-206844-1 ). Also, as stated in the doco: pkgrm SUNWlucfg SUNWluu SUNWlur and then from 10/9...

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Philip Brown
> Ok, I think I understand. You're going to be told that ZFS send isn't a backup (and for these purposes I definitely agree), ... Hmph. Well, even for "replication"-type purposes, what I'm talking about is quite useful. Picture two remote systems which happen to have "mostly identical" data...

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Philip Brown
> If I'm interpreting correctly, you're talking about a couple of features, neither of which is in ZFS yet, ... > 1. The ability to restore individual files from a snapshot, in the same way an entire snapshot is restored - simply using the blocks that are already stored. > 2. The a...

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-10-31 Thread Philip Brown
> Ah, there is a cognitive disconnect... more below. > The cognitive disconnect is that snapshots are blocks, not files. Therefore, the snapshot may contain only changed portions of files, and blocks from a single file may be spread across many different snapshots. I was referring...

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-10-31 Thread Philip Brown
> So, when I do a zfs receive, it would be "really nice" if there were some way for zfs to figure out, let's say, "receive to a snapshot of the filesystem; then take advantage of the fact that it is a snapshot, to NOT write on disk the 9 unaltered files that are in the snapshot; just all...

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-10-31 Thread Philip Brown
> relling wrote: > This question makes no sense to me. Perhaps you can rephrase? To take a really obnoxious case: let's say I have a 1 gigabyte filesystem. It has 1.5 gigabytes of physical disk allocated to it (so it is 66% full). It has 10x100meg files in it. "Something bad happens", and I...

[zfs-discuss] questions on zfs send,receive,backups

2008-10-30 Thread Philip Brown
I've recently started down the road of production use for zfs, and am hitting my head on some paradigm shifts. I'd like to clarify whether my understanding is correct, and/or whether there are better ways of doing things. I have one question for replication, and one question for backups. These questions...
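
For context, the basic primitives the rest of the thread discusses (a sketch; the pool, dataset, snapshot, and host names are illustrative):

    # full initial replication of a snapshot to another system
    zfs send tank/fs@monday | ssh remote zfs receive backup/fs
    # later: send only the blocks changed between two snapshots
    # (zfs receive -F may be needed if the target has been modified)
    zfs send -i tank/fs@monday tank/fs@tuesday | ssh remote zfs receive backup/fs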

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Philip Beevers
> ...checkpoints. Is this a concern? That's good news. No, the loss of initial performance isn't a big problem - I'd be happy for it to go at spindle speed. Regards, -- Philip Beevers, Fidessa Infrastructure Development

[zfs-discuss] ZFS write throttling

2008-02-15 Thread Philip Beevers
...of thing off. Of course, the ideal would be to have some way of telling ZFS not to bother keeping pages in the ARC. The latter appears to be bug 6429855. But the underlying behaviour doesn't really seem desirable; are there plans afoot to do any work on ZFS write throttling to address this kind...

Re: [zfs-discuss] Question - does a snapshot of root include child

2007-12-20 Thread Philip
The recursive option creates a separate snapshot for every child filesystem, making backup management more difficult if there are many child filesystems. The ability to natively create a single snapshot/backup of an entire ZFS hierarchy of filesystems would be a very nice thing indeed. This...
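
For reference, the recursive option being discussed (a sketch; the dataset and snapshot names are illustrative):

    # one command, but it creates one snapshot per descendant filesystem
    zfs snapshot -r tank/home@backup-20071220
    # each child then carries its own tank/home/<child>@backup-20071220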

[zfs-discuss] ZFS children stepping on parent

2007-11-16 Thread Philip
I was doing some disaster recovery testing with ZFS, where I did a mass backup of a family of ZFS filesystems using snapshots, destroyed them, and then did a mass restore from the backups. The ZFS filesystems I was testing with had only one parent in the ZFS namespace, and the backup and restore...

[zfs-discuss] Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
> Looks like it's got a half-way decent multipath design: > http://docs.info.apple.com/article.html?path=Xsan/1.1/en/c3xs12.html Great, but that is with Xsan. If I don't exchange our Hitachi with an Xsan, I don't have this 'cvadmin'.

[zfs-discuss] Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
> Robert Milkowski wrote: >> 2. I believe it's definitely possible to just correct your config under Mac OS without any need to use another fs or volume manager; however, going to zfs could be a good idea anyway > That implies that MacOS has some sort of native SCSI multipathing...

[zfs-discuss] Re: Re: Eliminating double path with ZFS's volumemanager

2007-01-15 Thread Philip Mötteli
> Go poke around in the multipath Xsan storage pool properties. Specifies how Xsan uses multiple Fibre Channel paths between clients and storage. This is the equiv of Veritas DMP or [whatever we now call] Solaris MPxIO. You mean, I should find some configuration file? Well, I can't find o...

[zfs-discuss] Re: Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
Hi, > Monday, January 15, 2007, 10:44:49 AM, you wrote: PM> Since they have installed a second path to our Hitachi SAN, my PM> Mac OS X Server 4.8 mounts every SAN disk twice. PM> I asked everywhere if there's a way to correct that. And the PM> only answer so far was that I need a volume manager...

[zfs-discuss] Eliminating double path with ZFS's volume manager

2007-01-15 Thread Philip Mötteli
Hi, since they have installed a second path to our Hitachi SAN, my Mac OS X Server 4.8 mounts every SAN disk twice. I asked everywhere if there's a way to correct that. And the only answer so far was that I need a volume manager that can be configured to consider two volumes as being identical...

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-21 Thread Philip Brown
Eric Schrock wrote: If you're using EFI labels, yes (VTOC labels are not endian-neutral). ZFS will automatically convert endianness from the on-disk format, and new data will be written using the native endianness, so data will gradually be rewritten to avoid the byteswap overhead. Now, whe...

[zfs-discuss] zfs and Oracle ASM

2006-09-13 Thread Philip Cannata
2 questions: 1) How does zfs compare to Oracle's ASM, in particular ASM's ability to dynamically move hot disk blocks around? 2) Is Oracle evaluating zfs to possibly find ways to optimally take advantage of its capabilities? thanks phil

Re: [zfs-discuss] Re: Clones and "rm -rf"

2006-08-03 Thread Philip Brown
Anton B. Rang wrote: I'd filed 6452505 (zfs create should set permissions on underlying mountpoint) so that this shouldn't cause problems in the future Err... the way you have described that seems backward to me, and violates existing, expected, known Solaris behaviour, not to mention lo...

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Philip Brown
Erik Trimble wrote: Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm assuming that the proper way to get benefits from both ZFS and HW RAID is the following: (1) ZFS mirror of HW stripes, i.e. "zpool create tank mirror hwStripe1 hwStripe2" (2) ZFS RAIDZ of HW mirrors...
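
Spelled out, the two layouts being compared (a sketch; hwStripe/hwMirror stand for LUNs exported by the hardware RAID controller, and the second command is an assumed completion following the pattern of the first):

    # (1) ZFS mirror of two hardware stripes
    zpool create tank mirror hwStripe1 hwStripe2
    # (2) ZFS RAID-Z across several hardware mirrors
    zpool create tank raidz hwMirror1 hwMirror2 hwMirror3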

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Philip Brown
Roch wrote: Philip Brown writes: > but there may not be filesystem space for double the data. > Sounds like there is a need for a zfs-defragment-file utility, perhaps? > Or if you want to be politically cagey about the naming choice, perhaps zfs-seq-...

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Philip Brown
Roch wrote: And, if the load can accommodate a reorder, to get top per-spindle read-streaming performance, a cp(1) of the file should do wonders on the layout. But there may not be filesystem space for double the data. Sounds like there is a need for a zfs-defragment-file utility, perhaps...
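
The cp(1) trick mentioned above, for reference (a sketch with an illustrative filename; it rewrites the file's blocks sequentially, at the cost of temporarily needing space for a second copy):

    # the new copy gets a fresh, largely sequential layout;
    # replacing the original keeps only the defragmented version
    cp bigfile bigfile.tmp && mv bigfile.tmp bigfile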

Re: [Fwd: Re: [zfs-discuss] Re: disk write cache, redux]

2006-06-16 Thread Philip Brown
Dana H. Myers wrote: > Phil Brown wrote: >> hmm. Well, I hope sun will fix this bug, and add in the long-missing write_cache control for "regular" ata drives too. > Actually, I believe such ata drives enable the write cache by default. Some do, some don't. Regardless, the toggle functionality bel...

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Philip Brown
Roch wrote: Check here: http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157 Distilled version:

    vdev_disk_open(vdev_t *vd, uint64_t *psize, uint64_t *ashift)
    /* ... */
        /*
         * If we own the whole disk, try to enable disk write caching.
         * We ignore error...

[zfs-discuss] Re: disk write cache, redux

2006-06-14 Thread Philip Brown
I previously wrote about my scepticism regarding the claims that zfs selectively enables and disables the write cache, to improve throughput over the usual Solaris defaults prior to this point. I posted my observations that this did not seem to be happening in any meaningful way, for my zfs, on build nv3...

Re: [zfs-discuss] New Feature Idea: ZFS Views ?

2006-06-07 Thread Philip Brown
Nicolas Williams wrote: On Wed, Jun 07, 2006 at 11:15:43AM -0700, Philip Brown wrote: Also, why shouldn't lofs grow similar support? aha! This to me sounds much, much better. Put all the funky, potentially disastrous code in lofs, not in zfs, please :-) Plus that way any filesystem...

Re: [zfs-discuss] New Feature Idea: ZFS Views ?

2006-06-07 Thread Philip Brown
Nicolas Williams wrote: ... Also, why shouldn't lofs grow similar support? aha! This to me sounds much, much better. Put all the funky, potentially disastrous code in lofs, not in zfs, please :-) Plus that way any filesystem will potentially get the "benefit" of views.

[zfs-discuss] disk write cache, redux

2006-06-02 Thread Philip Brown
Hi folks... I've just been exposed to zfs directly, since I'm trying it out on "a certain 48-drive box with 4 cpus" :-) I read in the archives the recent "hard drive write cache" thread, in which someone at Sun made the claim that zfs takes advantage of the disk write cache, selectively enabling...
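
For anyone wanting to check this themselves: on Solaris, the per-disk write cache setting can be inspected interactively on disks whose drivers support it (a sketch of the usual procedure; the menu entries below are from the format(1M) expert mode on SCSI disks and may not appear for all drive types):

    # expert mode exposes the cache submenu
    format -e
    # then, after selecting the disk:
    #   cache -> write_cache -> display   (show the current state)
    #   cache -> write_cache -> enable    (turn caching on)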

Re: [zfs-discuss] ZFS RAM requirements?

2006-05-03 Thread Philip Beevers
Roch Bourbonnais - Performance Engineering wrote: Reported freemem will be lower when running with ZFS than, say, UFS. The UFS page cache is considered freemem. ZFS will return its 'cache' only when memory is needed. So you will operate with lower freemem but won't actually suffer from...