Re: [zfs-discuss] any update on zfs root/boot ?

2006-09-13 Thread Dick Davies
On 14/09/06, James C. McPherson <[EMAIL PROTECTED]> wrote: Hi folks, I'm in the annoying position of having to replace my rootdisk (since it's a [EMAIL PROTECTED]@$! Maxtor and dying). I'm currently running with zfsroot after following Tabriz' and TimF's procedure to enable that. However, I'd li

[zfs-discuss] any update on zfs root/boot ?

2006-09-13 Thread James C. McPherson
Hi folks, I'm in the annoying position of having to replace my rootdisk (since it's a [EMAIL PROTECTED]@$! Maxtor and dying). I'm currently running with zfsroot after following Tabriz' and TimF's procedure to enable that. However, I'd like to know whether there's a better way to get zfs root/boot

Re: [zfs-discuss] Re: Re: Marvell cards.. as recommended

2006-09-13 Thread Joe Little
Yeah. I got the message from a few others, and we are hoping to return/buy the newer one. I'm sort of surprised by the limited set of SATA RAID or JBOD cards that one can actually use. Even the ones linked to on this list sometimes aren't supported :). I need to get up and running like yesterday

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread David Magda
On Sep 13, 2006, at 10:52, Scott Howard wrote: It's not at all bizarre once you understand how ZFS works. I'd suggest reading through some of the documentation available at http://www.opensolaris.org/os/community/zfs/docs/ , in particular the "Slides" available there. The presentation that 'goe

Re: [zfs-discuss] Re: Re: Re: Proposal: multiple copies of user data

2006-09-13 Thread Wee Yeh Tan
On 9/13/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: Sure, if you want *everything* in your pool to be mirrored, there is no real need for this feature (you could argue that setting up the pool would be easier if you didn't have to slice up the disk though). Not necessarily. Implementing this

Re: [zfs-discuss] Re: zfs and Oracle ASM

2006-09-13 Thread Richard Elling
Anantha N. Srirama wrote: I did a non-scientific benchmark against ASM and ZFS. Just look for my posts and you'll see it. To summarize, it was a statistical tie for simple loads of around 2GB of data and we've chosen to stick with ASM for a variety of reasons, not the least of which is its abili

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 7:07:40 PM -0700 Richard Elling <[EMAIL PROTECTED]> wrote: Dale Ghent wrote: James C. McPherson wrote: As I understand things, SunCluster 3.2 is expected to have support for HA-ZFS and until that version is released you will not be running in a supported configuration

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Richard Elling
Dale Ghent wrote: James C. McPherson wrote: As I understand things, SunCluster 3.2 is expected to have support for HA-ZFS and until that version is released you will not be running in a supported configuration and so any errors you encounter are *your fault alone*. Still, after reading

[zfs-discuss] Re: zfs and Oracle ASM

2006-09-13 Thread Anantha N. Srirama
I did a non-scientific benchmark against ASM and ZFS. Just look for my posts and you'll see it. To summarize, it was a statistical tie for simple loads of around 2GB of data and we've chosen to stick with ASM for a variety of reasons, not the least of which is its ability to rebalance when disks a

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 4:33:31 PM -0700 Frank Cusack <[EMAIL PROTECTED]> wrote: You'd typically have a dedicated link for heartbeat, what if that cable gets yanked or that NIC port dies. The backup system could avoid mounting the pool if zfs had its own heartbeat. What if the cluster software ha

Re: [zfs-discuss] Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Frank Cusack
On September 14, 2006 1:25:01 AM +0200 Daniel Rock <[EMAIL PROTECTED]> wrote: Just to clear some things up. The OP who started the whole discussion would have had the same problems with VxVM as he has now with ZFS. If you force an import of a disk group on one host while it is still active on a

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 6:44:44 PM +0100 Darren J Moffat <[EMAIL PROTECTED]> wrote: Frank Cusack wrote: Sounds cool! Better than depending on an out-of-band heartbeat. I disagree it sounds really really bad. If you want a high availability cluster you really need a faster interconnect than s

Re: [zfs-discuss] Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Daniel Rock
Anton B. Rang wrote: The hostid solution that VxVM uses would catch this second problem, because when A came up after its reboot, it would find that -- even though it had created the pool -- it was not the last machine to access it, and could refuse to automatically mount it. If the admi

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread James C. McPherson
Frank Cusack wrote: ...[snip James McPherson's objections to PMC] I understand the objection to mickey mouse configurations, but I don't understand the objection to (what I consider) simply improving safety. ... And why should failover be limited to SC? Why shouldn't VCS be able to play? Why

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread James Dickens
On 9/13/06, Erik Trimble <[EMAIL PROTECTED]> wrote: OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to impor

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread Torrey McMahon
Erik Trimble wrote: OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to import/export pools between the two s

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread James C. McPherson
Erik Trimble wrote: OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to import/export pools between the two s

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread Eric Schrock
If you're using EFI labels, yes (VTOC labels are not endian neutral). ZFS will automatically convert endianness from the on-disk format, and new data will be written using the native endianness, so data will gradually be rewritten to avoid the byteswap overhead. - Eric On Wed, Sep 13, 2006 at
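
A minimal sketch of the move Eric describes, assuming whole-disk vdevs (which get EFI labels by default) and a hypothetical pool name 'tank':

    # on the SPARC host: cleanly export the pool
    zpool export tank
    # on the Opteron host: discover and import it; ZFS handles the byteswap
    zpool import tank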

[zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread Erik Trimble
OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to import/export pools between the two systems? That is, can

Re: [zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Torrey McMahon
Matthew Ahrens wrote: Nicolas Dorfsman wrote: We need to think ZFS as ZFS, and not as a new filesystem ! I mean, the whole concept is different. Agreed. So. What could be the best architecture ? What is the problem? With UFS, I used to have separate metadevices/LUNs for each application.

Re: [zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Darren Dunham
> Including performance considerations ? > For instance, if I have two Oracle Databases with two I/O profiles (TP versus > Batch)...what would be the best : > > 1) Two pools, each one on two LUNs. Each LUN distributed on n trays. > 2) One pool on one LUN. This LUN distributed on 2 x n trays. > 3)
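
For concreteness, option 1 from the quoted list might look like this (hypothetical LUN device names; one striped pool per I/O profile):

    zpool create tp c4t0d0 c4t1d0      # pool for the TP database
    zpool create batch c4t2d0 c4t3d0   # pool for the batch database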

[zfs-discuss] zfs and Oracle ASM

2006-09-13 Thread Philip Cannata
2 questions: 1) How does zfs compare to Oracle's ASM, in particular, ASM's ability to dynamically move hot disk blocks around? 2) Is Oracle evaluating zfs to possibly find ways to optimally take advantage of its capabilities? thanks phil ___ zfs-disc

[zfs-discuss] Re: Bizarre problem with ZFS filesystem

2006-09-13 Thread Anantha N. Srirama
One more piece of information. I was able to ascertain the slowdown happens only when ZFS is used heavily; meaning lots of in-flight I/O. This morning when the system was quiet, writes to the /u099 filesystem were excellent; it has since gone south as I reported earlier. I am currently awaiting

[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Hi Matt, > > So. What could be the best architecture ? > > What is the problem? I/O profile isolation versus snap backing-store 'reservation' optimisation. > > With UFS, I used to have separate metadevices/LUNs for each > > application. With ZFS, I thought it would be nice to use a separate >

[zfs-discuss] how to list clones for a snapshot

2006-09-13 Thread Vladimír Kotal
Hello, Is there a way to list all clones for a given snapshot of a filesystem? e.g. I have the following snapshots: local/[EMAIL PROTECTED] local/[EMAIL PROTECTED] local/[EMAIL PROTECTED] and clone local/tuesday of local/[EMAIL PROTECTED] Now I'd like to get local/tuesday using loca
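
There was no single command for this at the time; one workaround is to walk the filesystems and match each one's 'origin' property against the snapshot. A sketch, assuming your build's zfs get accepts -H and -o, and a hypothetical snapshot local/fs@monday:

    for fs in $(zfs list -H -o name -t filesystem); do
        origin=$(zfs get -H -o value origin "$fs")
        # print every filesystem cloned from the snapshot in question
        [ "$origin" = "local/fs@monday" ] && echo "$fs"
    done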

Re: [zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Matthew Ahrens
Nicolas Dorfsman wrote: We need to think ZFS as ZFS, and not as a new filesystem ! I mean, the whole concept is different. Agreed. So. What could be the best architecture ? What is the problem? With UFS, I used to have separate metadevices/LUNs for each application. With ZFS, I thought it

[zfs-discuss] Re: when zfs enabled java

2006-09-13 Thread Mark Maybee
Jill Manfield wrote: My customer is running java on a ZFS file system. His platform is Solaris 10 x86 SF X4200. When he enabled ZFS his memory of 18 gigs drops to 2 gigs rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it came back: The culprit application you see is

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread James Dickens
On 9/13/06, Eric Schrock <[EMAIL PROTECTED]> wrote: On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote: > > this would not be the first time that Solaris overrode an administrative command, because it's just not safe or sane to do so. For example. > > rm -rf / As I've repeated before,

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Eric Schrock
On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote: > > this would not be the first time that Solaris overrode an administrative command, because it's just not safe or sane to do so. For example. > > rm -rf / As I've repeated before, and will continue to repeat, it's not actually possi

[zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-13 Thread Anton B. Rang
Is this true for single-sector, vs. single-ZFS-block, errors? (Yes, it's pathological and probably nobody really cares.) I didn't see anything in the code which falls back on single-sector reads. (It's slightly annoying that the interface to the block device drivers loses the SCSI error status,

[zfs-discuss] Re: Re: Marvell cards.. as recommended

2006-09-13 Thread Anton B. Rang
A quick peek at the Linux source shows a small workaround in place for the 07 revision...maybe if you file a bug against Solaris to support this revision it might be possible to get it added, at least if that's the only issue. This message posted from opensolaris.org _

[zfs-discuss] Re: Re: Marvell cards.. as recommended

2006-09-13 Thread Anton B. Rang
If I'm reading the source correctly, for the $60xx boards, the only supported revision is $09. Yours is $07, which presumably has some errata with no workaround, and which the Solaris driver refuses to support. Hope you can return it ... ? This message posted from opensolaris.org __

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Darren Dunham
> Still, after reading Mathias's description, it seems that the former node is doing an implicit forced import when it boots back up. This seems wrong to me. > zpools should be imported only if the zpool itself says it's not already taken, which of course would be overridden by a manual

[zfs-discuss] Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Anton B. Rang
I think there are at least two separate issues here. The first is that ZFS doesn't support multiple hosts accessing the same pool. That's simply a matter of telling people. UFS doesn't support multiple hosts, but it doesn't have any special features to prevent administrators from *trying* it. T

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread James Dickens
On 9/13/06, Eric Schrock <[EMAIL PROTECTED]> wrote: There are several problems I can see: - This is what the original '-f' flag is for. I think a better approach is to expand the default message of 'zpool import' with more information, such as which was the last host to access the pool and

Re: [zfs-discuss] zpool always thinks it's mounted on another system

2006-09-13 Thread Rich
I do the 'zpool import -f moonside', and all is well until I reboot, at which point I must zpool import -f again. Below is zdb -l /dev/dsk/c2t0d0s0's output: LABEL 0 version=3 name='moonside' state=0

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Ceri Davies
On Wed, Sep 13, 2006 at 06:37:25PM +0100, Darren J Moffat wrote: > Dale Ghent wrote: > >On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote: > >>Storing the hostid as a last-ditch check for administrative error is a reasonable RFE - just one that we haven't yet gotten around to. > >>Claiming t

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Mark Maybee
Bill Sommerfeld wrote: One question for Matt: when ditto blocks are used with raidz1, how well does this handle the case where you encounter one or more single-sector read errors on other drive(s) while reconstructing a failed drive? for a concrete example A0 B0 C0 D0 P0 A1 B1 C

Re: [zfs-discuss] Loss of compression with send/receive

2006-09-13 Thread Jeff Victor
ZFS properties (like compression) do not get sent with "zfs send". I believe there is an RFE about this. Darren Reed wrote: Using Solaris 10, Update 2 (b9a) I've just used "zfs send | zfs receive" to move some filesystems from one disk to another (I'm sure this is the quickest move I've ever

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Dale Ghent
On Sep 13, 2006, at 1:37 PM, Darren J Moffat wrote: That might be acceptable in some environments but that is going to cause disks to spin up. That will be very unacceptable in a laptop and maybe even in some energy conscious data centres. Introduce an option to 'zpool create'? Come to th

Re: [zfs-discuss] Loss of compression with send/receive

2006-09-13 Thread Eric Schrock
You want: 6421959 want zfs send to preserve properties ('zfs send -p') Which Matt is currently working on. - Eric On Thu, Sep 14, 2006 at 02:04:32AM +0800, Darren Reed wrote: > Using Solaris 10, Update 2 (b9a) > > I've just used "zfs send | zfs receive" to move some filesystems > from one disk

[zfs-discuss] Loss of compression with send/receive

2006-09-13 Thread Darren Reed
Using Solaris 10, Update 2 (b9a) I've just used "zfs send | zfs receive" to move some filesystems from one disk to another (I'm sure this is the quickest move I've ever done!) but in doing so, I lost "zfs set compression=on" on those filesystems. If I create the filesystems first and enable comp
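
Until the 'zfs send -p' RFE mentioned in Eric's reply above is available, a common workaround is to re-set the property on the receiving side; note that only blocks written after that point get compressed (hypothetical dataset names):

    # move the snapshot, then re-enable compression on the new filesystem
    zfs send tank/data@move | zfs receive newpool/data
    zfs set compression=on newpool/data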

Re: Re: [zfs-discuss] Marvell cards.. as recommended

2006-09-13 Thread Joe Little
On 9/12/06, James C. McPherson <[EMAIL PROTECTED]> wrote: Joe Little wrote: > So, people here recommended the Marvell cards, and one even provided a > link to acquire them for SATA jbod support. Well, this is what the > latest bits (B47) say: > > Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.

Re: [zfs-discuss] zpool always thinks it's mounted on another system

2006-09-13 Thread Eric Schrock
Can you send the output of 'zdb -l /dev/dsk/c2t0d0s0' ? So you do the 'zpool import -f' and all is well, but then when you reboot, it doesn't show up, and you must import it again? Can you send the output of 'zdb -C' both before and after you do the import? Thanks, - Eric On Wed, Sep 13, 2006

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Darren J Moffat
Frank Cusack wrote: Sounds cool! Better than depending on an out-of-band heartbeat. I disagree it sounds really really bad. If you want a high availability cluster you really need a faster interconnect than spinning rust which is probably the slowest interface we have now! -- Darren J Mof

[zfs-discuss] zpool always thinks it's mounted on another system

2006-09-13 Thread Rich
Hi zfs-discuss, I was running Solaris 11, b42 on x86, and I tried upgrading to b44. I didn't have space on the root for live_upgrade, so I booted from disc to upgrade, but it failed on every attempt, so I ended up blowing away / and doing a clean b44 install. Now the zpool that was attached to that

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 1:28:47 PM -0400 Dale Ghent <[EMAIL PROTECTED]> wrote: On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote: Storing the hostid as a last-ditch check for administrative error is a reasonable RFE - just one that we haven't yet gotten around to. Claiming that it will solve the c

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Darren J Moffat
Dale Ghent wrote: On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote: Storing the hostid as a last-ditch check for administrative error is a reasonable RFE - just one that we haven't yet gotten around to. Claiming that it will solve the clustering problem oversimplifies the problem and will lead

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Dale Ghent
On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote: Storing the hostid as a last-ditch check for administrative error is a reasonable RFE - just one that we haven't yet gotten around to. Claiming that it will solve the clustering problem oversimplifies the problem and will lead to people who think

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Eric Schrock
There are several problems I can see: - This is what the original '-f' flag is for. I think a better approach is to expand the default message of 'zpool import' with more information, such as which was the last host to access the pool and when. The point of '-f' is that you have recognized

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Torrey McMahon
Bart Smaalders wrote: Torrey McMahon wrote: eric kustarz wrote: I want per pool, per dataset, and per file - where all are done by the filesystem (ZFS), not the application. I was talking about a further enhancement to "copies" than what Matt is currently proposing - per file "copies", but

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread Torrey McMahon
Matthew Ahrens wrote: Nicolas Dorfsman wrote: Hi, There's something really bizarre in ZFS snapshot specs: "Uses no separate backing store." Hum... if I want to share one physical volume somewhere in my SAN as THE snapshot backing-store... it becomes impossible to do! Really bad. Is there

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Bart Smaalders
Torrey McMahon wrote: eric kustarz wrote: I want per pool, per dataset, and per file - where all are done by the filesystem (ZFS), not the application. I was talking about a further enhancement to "copies" than what Matt is currently proposing - per file "copies", but its more work (one thi

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 9:32:50 AM -0700 Eric Schrock <[EMAIL PROTECTED]> wrote: On Wed, Sep 13, 2006 at 09:14:36AM -0700, Frank Cusack wrote: Why again shouldn't zfs have a hostid written into the pool, to prevent import if the hostid doesn't match? See: 6282725 hostname/hostid should be stor

[zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread James Dickens
I filed this RFE earlier; since there is no way for non-Sun personnel to see this RFE for a while, I am posting it here and asking for feedback from the community. [Fwd: CR 6470231 Created P5 opensolaris/triage-queue Add an inuse check that is enforced even if import -f is used.] Inbox Assign a

Re: [zfs-discuss] Re: Re: Proposal: multiple copies of user data

2006-09-13 Thread Gregory Shaw
On Sep 12, 2006, at 2:55 PM, Celso wrote: On 12/09/06, Celso <[EMAIL PROTECTED]> wrote: One of the great things about zfs is that it protects not just against mechanical failure, but against silent data corruption. Having this available to laptop owners seems to me to be important to making zfs even m

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Torrey McMahon
eric kustarz wrote: I want per pool, per dataset, and per file - where all are done by the filesystem (ZFS), not the application. I was talking about a further enhancement to "copies" than what Matt is currently proposing - per file "copies", but its more work (one thing being we don't have

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Eric Schrock
On Wed, Sep 13, 2006 at 09:14:36AM -0700, Frank Cusack wrote: > > Why again shouldn't zfs have a hostid written into the pool, to prevent > import if the hostid doesn't match? See: 6282725 hostname/hostid should be stored in the label Keep in mind that this is not a complete clustering solution

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Anton B. Rang
> With ZFS however the in-between cache is obsolete, as individual disk caches can be used directly. I also openly question whether even the dedicated RAID HW is faster than the newest CPUs in modern servers. Individual disk caches are typically in the 8-16 MB range; for 15 disks, that gives

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Anton B. Rang
> just measured quickly that a 1.2Ghz sparc can do [400-500]MB/sec > of encoding (time spent in misnamed function > vdev_raidz_reconstruct) for a 3 disk raid-z group. Strange, that seems very low. Ah, I see. The current code loops through each buffer, either copying or XORing it into the parity.

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 6:09:50 AM -0700 Mathias F <[EMAIL PROTECTED]> wrote: [...] a product which is *not* currently multi-host-aware to behave in the same safe manner as one which is. That's the point we figured out while testing it ;) I just wanted to have our thoughts reviewed by other ZFS

[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
> If you want to copy your filesystems (or snapshots) to other disks, you can use 'zfs send' to send them to a different pool (which may even be on a different machine!). Oh no! It means copying the whole filesystem. The target here is definitely to snapshot the filesystem and then backu
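
To be fair to the 'zfs send' suggestion: after one initial full copy, subsequent sends can be incremental, carrying only the blocks that changed between two snapshots. A sketch with hypothetical names:

    # one-time full copy to the backup pool
    zfs send pool/fs@mon | ssh backuphost zfs receive backup/fs
    # thereafter, ship only the delta between two snapshots
    zfs send -i pool/fs@mon pool/fs@tue | ssh backuphost zfs receive backup/fs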

[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Well. > ZFS isn't copy-on-write in the same way that things > like ufssnap are. > ufssnap is copy-on-write in that when you write > something, it copies out > the old data and writes it somewhere else (the > backing store). ZFS doesn't > need to do this - it simply writes the new data to a > new

[zfs-discuss] Re: when zfs enabled java

2006-09-13 Thread Roch - PAE
Jill Manfield writes: > My customer is running java on a ZFS file system. His platform is Solaris 10 x86 SF X4200. When he enabled ZFS his memory of 18 gigs drops to 2 gigs rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it came back: > The culprit app

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Anton B. Rang
> It would be interesting to have a zfs enabled HBA to offload the checksum > and parity calculations. How much of zfs would such an HBA have to > understand? That's an interesting question. For parity, it's actually pretty easy. One can envision an HBA which took a group of related write comman

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Bill Sommerfeld
On Wed, 2006-09-13 at 02:30, Richard Elling wrote: > The field data I have says that complete disk failures are the exception. > I hate to leave this as a teaser, I'll expand my comments later. That matches my anecdotal experience with laptop drives; maybe I'm just lucky, or maybe I'm just paying

[zfs-discuss] Re: Re: Bizarre problem with ZFS filesystem

2006-09-13 Thread Anantha N. Srirama
I ran the DTrace script and the resulting output is rather large (1 million lines and 65MB), so I won't burden this forum with that much data. Here are the top 100 lines from the DTrace output. Let me know if you need the full output and I'll figure out a way for the group to get it. dtrace: de

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread Matthew Ahrens
Nicolas Dorfsman wrote: Hi, There's something really bizarre in ZFS snapshot specs: "Uses no separate backing store." Hum... if I want to share one physical volume somewhere in my SAN as THE snapshot backing-store... it becomes impossible to do! Really bad. Is there any chance to have a "

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread eric kustarz
Darren J Moffat wrote: eric kustarz wrote: So it seems to me that having this feature per-file is really useful. Per-file with a POSIX filesystem is often not that useful. That is because many applications (since you mentioned a presentation StarOffice I know does this) don't update the

Re: [zfs-discuss] Re: Re: ZFS forces system to paging to the point it is

2006-09-13 Thread Mark Maybee
Robert Milkowski wrote: Hello Philippe, It was recommended to lower ncsize and I did (to default ~128K). So far it works ok for last days and staying at about 1GB free ram (fluctuating between 900MB-1.4GB). Do you think it's a long term solution or with more load and more data the problem can s

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread Scott Howard
On Wed, Sep 13, 2006 at 07:38:22AM -0700, Nicolas Dorfsman wrote: > There's something really bizarre in ZFS snapshot specs: "Uses no separate > backing store." It's not at all bizarre once you understand how ZFS works. I'd suggest reading through some of the documentation available at http:/

Re: [zfs-discuss] ZFS and free space

2006-09-13 Thread Mark Maybee
Robert Milkowski wrote: Hello Mark, Monday, September 11, 2006, 4:25:40 PM, you wrote: MM> Jeremy Teo wrote: Hello, how are writes distributed as the free space within a pool reaches a very small percentage? I understand that when free space is available, ZFS will batch writes and then issu

[zfs-discuss] Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Hi, There's something really bizarre in ZFS snapshot specs: "Uses no separate backing store." Hum... if I want to share one physical volume somewhere in my SAN as THE snapshot backing-store... it becomes impossible to do! Really bad. Is there any chance to have a "backing-store-fi
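
The "no separate backing store" point is easy to see in practice: a fresh snapshot consumes almost no space and only grows as the live filesystem diverges from it (hypothetical dataset name):

    zfs snapshot tank/home@now
    zfs list tank/home@now    # USED starts near zero and grows as tank/home changes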

[zfs-discuss] Re: ZFS API (again!), need quotactl(7I)

2006-09-13 Thread Richard L. Hamilton
> On 13/09/2006, at 2:29 AM, Eric Schrock wrote: > > On Tue, Sep 12, 2006 at 07:23:00AM -0400, Jeff A. Earickson wrote: > >> Modify the dovecot IMAP server so that it can get zfs quota information > >> to be able to implement the QUOTA feature of the IMAP protocol (RFC 2087

[zfs-discuss] when zfs enabled java

2006-09-13 Thread Jill Manfield
My customer is running java on a ZFS file system. His platform is Solaris 10 x86 SF X4200. When he enabled ZFS his memory of 18 gigs drops to 2 gigs rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it came back: The culprit application you see is java: 507 89464 /usr/b
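
The ZFS ARC lives in the kernel, so it never shows up in any process's vsz, and ps will not find the "missing" memory. One way to see where it went is mdb's ::memstat dcmd, where ARC growth appears under the Kernel bucket:

    # kernel-memory breakdown, run as root
    echo ::memstat | mdb -k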

[zfs-discuss] Re: Re[2]: System hang caused by a "bad" snapshot

2006-09-13 Thread Ben Miller
> Hello Matthew, > Tuesday, September 12, 2006, 7:57:45 PM, you wrote: > MA> Ben Miller wrote: > >> I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 250

Re: [zfs-discuss] Memory Usage

2006-09-13 Thread Robert Milkowski
Hello Thomas, Tuesday, September 12, 2006, 7:40:25 PM, you wrote: TB> Hi, TB> We have been using zfs for a couple of months now, and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application. Ultimately, we

Re[2]: [zfs-discuss] System hang caused by a "bad" snapshot

2006-09-13 Thread Robert Milkowski
Hello Matthew, Tuesday, September 12, 2006, 7:57:45 PM, you wrote: MA> Ben Miller wrote: >> I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 2500 ZFS filesystem

Re[2]: [zfs-discuss] Re: Re: ZFS forces system to paging to the point it is

2006-09-13 Thread Robert Milkowski
Hello Philippe, It was recommended to lower ncsize and I did (to default ~128K). So far it works ok for last days and staying at about 1GB free ram (fluctuating between 900MB-1.4GB). Do you think it's a long term solution or with more load and more data the problem can surface again even with cur

Re[2]: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Robert Milkowski
Hello Frank, Tuesday, September 12, 2006, 9:41:05 PM, you wrote: FC> It would be interesting to have a zfs enabled HBA to offload the checksum and parity calculations. How much of zfs would such an HBA have to understand? That won't be end-to-end checksumming anymore, right? That way you

[zfs-discuss] Re: Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
[...] > a product which is *not* currently multi-host-aware to behave in the same safe manner as one which is. That's the point we figured out while testing it ;) I just wanted to have our thoughts reviewed by other ZFS users. Our next steps IF the failover would have succeeded would be to cr

[zfs-discuss] zfs receive kernel panics the machine

2006-09-13 Thread Niclas Sodergard
Hi, I'm running some experiments with zfs send and receive on Solaris 10u2 between two different machines. On server 1 I have the following: data/zones/app1 838M 26.5G 836M /zones/app1 data/zones/[EMAIL PROTECTED] 2.35M - 832M - I have a script that creates a new snapshot and
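
A sketch of the kind of snapshot-and-send script Niclas describes, with hypothetical snapshot names and target host, and no error handling:

    PREV=20060912; NOW=20060913
    # take today's snapshot, then send only the changes since yesterday's
    zfs snapshot data/zones/app1@$NOW
    zfs send -i data/zones/app1@$PREV data/zones/app1@$NOW | \
        ssh server2 zfs receive data/zones/app1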

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Dale Ghent
James C. McPherson wrote: As I understand things, SunCluster 3.2 is expected to have support for HA-ZFS and until that version is released you will not be running in a supported configuration and so any errors you encounter are *your fault alone*. Still, after reading Mathias's descriptio

Re: [zfs-discuss] Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread James C. McPherson
Mathias F wrote: ... Yes it is, you got it ;) VxVM just notices that its previously imported DiskGroup(s) (for ZFS this is the Pool) were failed over and doesn't try to re-acquire them. It waits for an admin action. The topic of "clustering" ZFS is not the problem atm, we just test the failover

Re: [zfs-discuss] Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Michael Schuster
Mathias F wrote: I think I get the whole picture, let me summarise: - you create a pool P and an FS on host A - Host A crashes - you import P on host B; this only works with -f, as "zpool import" otherwise refuses to do so. - now P is imported on B - host A comes back up and re-accesses P, the

[zfs-discuss] Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
> I think I get the whole picture, let me summarise: > > - you create a pool P and an FS on host A > - Host A crashes > - you import P on host B; this only works with -f, as > "zpool import" otherwise > refuses to do so. > - now P is imported on B > - host A comes back up and re-accesses P, there

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Al Hopper
On Tue, 12 Sep 2006, Matthew Ahrens wrote: > Torrey McMahon wrote: > > Matthew Ahrens wrote: > >> The problem that this feature attempts to address is when you have some data that is more important (and thus needs a higher level of redundancy) than other data. Of course in some situatio

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Zoram Thanga
Hi Mathias, Mathias F wrote: Without -f option, the ZFS can't be imported while "reserved" for the other host, even if that host is down. As I said, we are testing ZFS as a *replacement for VxVM*, which we are using atm. So as a result our tests have failed and we have to keep on using

Re: [zfs-discuss] ZFS API (again!), need quotactl(7I)

2006-09-13 Thread Boyd Adamson
On 13/09/2006, at 2:29 AM, Eric Schrock wrote: On Tue, Sep 12, 2006 at 07:23:00AM -0400, Jeff A. Earickson wrote: Modify the dovecot IMAP server so that it can get zfs quota information to be able to implement the QUOTA feature of the IMAP protocol (RFC 2087). In this case pull the zfs quo
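
For the dovecot case, the numbers RFC 2087 needs can be pulled from the CLI without any new API, assuming your build's zfs get accepts -H and -o (pool/home/jeff is a hypothetical dataset):

    # scriptable, one value per call
    quota=$(zfs get -H -o value quota pool/home/jeff)
    used=$(zfs get -H -o value used pool/home/jeff)
    echo "used=$used quota=$quota"   # feed these into the IMAP QUOTA response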

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread James C. McPherson
Mathias F wrote: Without -f option, the ZFS can't be imported while "reserved" for the other host, even if that host is down. This is the correct behaviour. What do you want to cause? data corruption? As I said, we are testing ZFS as a *replacement for VxVM*, which we are using atm. So a

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Michael Schuster
Mathias F wrote: Without -f option, the ZFS can't be imported while "reserved" for the other host, even if that host is down. As I said, we are testing ZFS as a *replacement for VxVM*, which we are using atm. So as a result our tests have failed and we have to keep on using Veritas. Tha

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Tobias Schacht
On 9/13/06, Mike Gerdts <[EMAIL PROTECTED]> wrote: The only part of the proposal I don't like is space accounting. Double or triple charging for data will only confuse those apps and users that check for free space or block usage. Why exactly isn't reporting the free space divided by the "copie

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Mike Gerdts
On 9/13/06, Richard Elling <[EMAIL PROTECTED]> wrote: >> * Mirroring offers slightly better redundancy, because one disk from each mirror can fail without data loss. > Is this use of slightly based upon disk failure modes? That is, when disks fail do they tend to get isolated areas of
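
For readers skimming the thread: the proposal under discussion is a per-dataset property, so usage would look roughly like this (proposed syntax, not present in then-current bits):

    zfs set copies=2 tank/laptop   # ask for two ditto copies of user data
    zfs get copies tank/laptop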

[zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
Without -f option, the ZFS can't be imported while "reserved" for the other host, even if that host is down. As I said, we are testing ZFS as a *replacement for VxVM*, which we are using atm. So as a result our tests have failed and we have to keep on using Veritas. Thanks for all your an

[zfs-discuss] 'zfs mirror as backup' status?

2006-09-13 Thread Dick Davies
Since we were just talking about resilience on laptops, I wondered if there had been any progress in sorting some of the glitches that were involved in: http://www.opensolaris.org/jive/thread.jspa?messageID=25144 ? -- Rasputin :: Jack of All Trades - Master of Nuns http://number9.hellooperat

Re: [zfs-discuss] ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Gregory Shaw
A question: You're forcing the import of the pool on the other host. That disregards any checks, similar to a forced import of a Veritas disk group. Does the same thing happen if you try to import the pool without the force option? On Sep 13, 2006, at 1:44 AM, Mathias F wrote: Hi, we are testing ZFS

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Thomas Wagner
On Wed, Sep 13, 2006 at 12:28:23PM +0200, Michael Schuster wrote: > Mathias F wrote: > >Well, we are using the -f parameter to test failover functionality. > >If one system with mounted ZFS is down, we have to use the force to mount > >it on the failover system. > >But when the failed system com

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Michael Schuster
Mathias F wrote: Well, we are using the -f parameter to test failover functionality. If one system with mounted ZFS is down, we have to use the force to mount it on the failover system. But when the failed system comes online again, it remounts the ZFS without errors, so it is mounted simultano

[zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
Well, we are using the -f parameter to test failover functionality. If one system with mounted ZFS is down, we have to use the force to mount it on the failover system. But when the failed system comes online again, it remounts the ZFS without errors, so it is mounted simultaneously on both nodes.
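
The failover sequence being tested, reduced to commands (hypothetical pool name). This is exactly the pattern the thread warns about: nothing stops the failed node from re-importing the pool from its own cache when it boots.

    # on node B, after node A goes down
    zpool import -f tank
    # node A, on reboot, silently re-mounts tank as well -- and now both
    # hosts are writing to the same pool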

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Darren J Moffat
eric kustarz wrote: So it seems to me that having this feature per-file is really useful. Per-file with a POSIX filesystem is often not that useful. That is because many applications (since you mentioned a presentation StarOffice I know does this) don't update the file in place. Instead th
