Re: [zfs-discuss] metadata inconsistency?

2006-07-17 Thread Matthew Ahrens
On Thu, Jul 06, 2006 at 12:46:57AM -0700, Patrick Mauritz wrote: > Hi, > after some unscheduled reboots (to put it lightly), I've got an interesting > setup on my notebook's zfs partition: > setup: simple zpool, no raid or mirror, a couple of zfs partitions, one zvol > for swap. /foo is one such

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Richard Elling
[stirring the pot a little...] Jim Mauro wrote: I agree with Greg - For ZFS, I'd recommend a larger number of raidz luns, with a smaller number of disks per LUN, up to 6 disks per raidz lun. For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z or RAID-Z2. For 3-5 disks, RAID-Z2
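For readers who want to see the two 6-disk layouts side by side, a minimal sketch (pool and device names are hypothetical):
Three 2-way mirrors, dynamically striped (the RAID-1+0 style layout):
# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
Single-parity RAID-Z across all six disks:
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0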

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Eric Schrock
On Tue, Jul 18, 2006 at 10:10:33AM +1000, Nathan Kroenert wrote: > Jeff - > > That sounds like a great idea... > > Another idea might be to have a zpool create announce the 'availability' > of any given configuration, and output the single points of failure. > > # zpool create mypool a b

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Nathan Kroenert
Jeff - That sounds like a great idea... Another idea might be to have a zpool create announce the 'availability' of any given configuration, and output the single points of failure. # zpool create mypool a b c NOTICE: This pool has no redundancy. Without hardware redund

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Jim Mauro
I agree with Greg - For ZFS, I'd recommend a larger number of raidz luns, with a smaller number of disks per LUN, up to 6 disks per raidz lun. This will more closely align with performance best practices, so it would be cool to find common ground in terms of a sweet-spot for performance and RA

[zfs-discuss] Fun with ZFS and iscsi volumes

2006-07-17 Thread Jason Hoffman
Hi Everyone, I thought I'd share some benchmarking and playing around that we had done with making zpools from "disks" that were iscsi volumes. The numbers are representative of 6 benchmarking rounds per. The interesting finding at least for us was the filebench varmail (50:50 reads-write
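For anyone wanting to try a similar setup, a rough sketch using the Solaris 10 iSCSI initiator (the target address and the resulting cXtYdZ device name below are made up; the real name appears in format after discovery):
# iscsiadm modify discovery --sendtargets enable
# iscsiadm add discovery-address 192.168.0.10:3260
# devfsadm -i iscsi
# zpool create iscsipool c2t1d0
Several such devices, or mirror/raidz groups of them, can then be combined into larger pools.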

Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-17 Thread Matthew Ahrens
On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote: > Add an option to zpool(1M) to dump the pool config as well as the > configuration of the volumes within it to an XML file. This file > could then be "sucked in" to zpool at a later date to recreate/ > replicate the pool and its volu

Re[2]: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
I take it you already have solved the problem. Yes, my problems went away once my device supported the extended SCSI instruction set. Julian -- Julian King Computer Officer, University of Cambridge, Unix Support

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Gregory Shaw
To maximize the throughput, I'd go with 8 5-disk raid-z{2} luns. Using that configuration, a full-width stripe write should be a single operation for each controller. In production, the application needs would probably dictate the resulting disk layout. If the application doesn't need tons of i/o
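To make that layout concrete, a sketch (device names are hypothetical; on a Thumper each vdev would draw one disk from each of several controllers so a full-width stripe touches every controller once):
# zpool create tank \
    raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
    raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 \
    ... six more 5-disk raidz vdevs ...
Substituting raidz2 for raidz gives the double-parity variant.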

[zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Richard Elling
ZFS fans, I'm preparing some analyses on RAS for large JBOD systems such as the Sun Fire X4500 (aka Thumper). Since there are zillions of possible permutations, I need to limit the analyses to some common or desirable scenarios. Naturally, I'd like your opinions. I've already got a few scenario

Fwd: Re[3]: [zfs-discuss] zpool status and CKSUM errors

2006-07-17 Thread Robert Milkowski
Hi. Sorry for forward but maybe this will be more visible that way. I really think something strange is going on here and it's virtually impossible that I have a problem with hardware and get CKSUM errors (many of them) only for ditto blocks. This is a forwarded message From: Robert

Re: [zfs-discuss] Re: zpool unavailable after reboot

2006-07-17 Thread eric kustarz
Mikael Kjerrman wrote: Jeff, thanks for your answer, and I almost wish I did type it wrong (the easy explanation that I messed up :-) but from what I can tell I did get it right --- zpool commands I ran --- bash-3.00# grep zpool /.bash_history zpool zpool create data raidz c1t0d0 c1t1d0 c1t

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
James Dickens wrote: On 7/17/06, Mark Shellenbaum <[EMAIL PROTECTED]> wrote: The following is the delegated admin model that Matt and I have been working on. At this point we are ready for your feedback on the proposed model. -Mark PERMISSION GRANTING zfs allow [-l] [-d] <"ever

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
Glenn Skinner wrote: The following is a nit-level comment, so I've directed it only to you, rather than to the entire list. Date: Mon, 17 Jul 2006 09:57:35 -0600 From: Mark Shellenbaum <[EMAIL PROTECTED]> Subject: [zfs-discuss] Proposal: delegated administration The following i

Re[2]: [zfs-discuss] Large device support

2006-07-17 Thread Robert Milkowski
Hello J.P., Monday, July 17, 2006, 3:57:01 PM, you wrote: >> Well, if in fact sd/ssd with EFI labels still has a 2TB limit, then create an SMI label with one slice representing the whole disk and put zfs on that slice. If need be, manually turn on the write cache afterwards. JPK> How do you suggest tha

Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread James Dickens
On 7/17/06, Jonathan Wheeler <[EMAIL PROTECTED]> wrote: Hi All, I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the swit

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread James Dickens
On 7/17/06, Mark Shellenbaum <[EMAIL PROTECTED]> wrote: The following is the delegated admin model that Matt and I have been working on. At this point we are ready for your feedback on the proposed model. -Mark PERMISSION GRANTING zfs allow [-l] [-d] <"everyone"|user|group> [,..

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Nicolas Williams
On Mon, Jul 17, 2006 at 10:11:35AM -0700, Matthew Ahrens wrote: > > I want root to create a new filesystem for a new user under > > the /export/home filesystem, but then have that user get the > > right privs via inheritance rather than requiring root to run > > a set of zfs commands. > > In that

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
Bart Smaalders wrote: Matthew Ahrens wrote: On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote: So as administrator what do I need to do to set /export/home up for users to be able to create their own snapshots, create dependent filesystems (but still mounted underneath their /expor

Re: [zfs-discuss] Large device support

2006-07-17 Thread Torrey McMahon
Or if you have the right patches ... http://blogs.sun.com/roller/page/torrey?entry=really_big_luns Cindy Swearingen wrote: Hi Julian, Can you send me the documentation pointer that says 2 TB isn't supported on the Solaris 10 6/06 release? The 2 TB limit was lifted in the Solaris 10 1/06 rele

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Bart Smaalders
Matthew Ahrens wrote: On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote: So as administrator what do I need to do to set /export/home up for users to be able to create their own snapshots, create dependent filesystems (but still mounted underneath their /export/home/usrname)? In ot

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Matthew Ahrens
On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote: > >>So as administrator what do I need to do to set > >>/export/home up for users to be able to create their own > >>snapshots, create dependent filesystems (but still mounted > >>underneath their /export/home/usrname)? > >> > >>In oth

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
Bart Smaalders wrote: Matthew Ahrens wrote: On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote: Mark Shellenbaum wrote: PERMISSION GRANTING zfs allow -c [,...] -c "Create" means that the permission will be granted (Locally) to the creator on any newly-created descendant file

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Bart Smaalders
Matthew Ahrens wrote: On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote: Mark Shellenbaum wrote: PERMISSION GRANTING zfs allow -c [,...] -c "Create" means that the permission will be granted (Locally) to the creator on any newly-created descendant filesystems. ALLOW EXA

Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread Al Hopper
On Mon, 17 Jul 2006, Roch wrote: > > Sorry to plug my own blog but have you had a look at these ? > > http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz) > http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs > > Also, my thinking is that raid-z is probabl

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Matthew Ahrens
On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote: > Mark Shellenbaum wrote: > >PERMISSION GRANTING > > > > zfs allow -c [,...] > > > >-c "Create" means that the permission will be granted (Locally) to the > >creator on any newly-created descendant filesystems. > > > >ALLOW EXAMPL
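As a hedged illustration of the -c behaviour being described (syntax follows the proposal and may change before integration; the permission list and dataset are examples only):
# zfs allow -c create,snapshot,mount tank/home
Once set, any user who creates a filesystem under tank/home would automatically be granted create, snapshot, and mount locally on the filesystem they just created.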

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Darren J Moffat
Mark Shellenbaum wrote: The following is the delegated admin model that Matt and I have been working on. At this point we are ready for your feedback on the proposed model. Overall this looks really good. I might have some detailed comments after a third reading, but I think it certainly co

Re: [zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Bart Smaalders
Mark Shellenbaum wrote: The following is the delegated admin model that Matt and I have been working on. At this point we are ready for your feedback on the proposed model. -Mark PERMISSION GRANTING zfs al

Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread Richard Elling
Dana H. Myers wrote: Jonathan Wheeler wrote: [quoted bonnie results table: Sequential Output, Sequential Input, and Random Seeks columns (per-char and block K/sec with %CPU) for the 8-disk mirror configuration, 8196 MB file size]

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Richard Elling
I too have seen this recently, due to a partially failed drive. When I physically removed the drive, ZFS figured everything out and I was back up and running. Alas, I have been unable to recreate. There is a bug lurking here, if someone has a more clever way to test, we might be able to nail it d

Re: [zfs-discuss] zvol of files for Oracle?

2006-07-17 Thread Roch
Robert Milkowski writes: > Hello zfs-discuss, > > What would you rather propose for ZFS+ORACLE - zvols or just files > from the performance standpoint? > > -- > Best regards, > Robert

Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread Roch
Sorry to plug my own blog but have you had a look at these ? http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz) http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs Also, my thinking is that raid-z is probably more friendly when the config contains

[zfs-discuss] Proposal: delegated administration

2006-07-17 Thread Mark Shellenbaum
The following is the delegated admin model that Matt and I have been working on. At this point we are ready for your feedback on the proposed model. -Mark PERMISSION GRANTING zfs allow [-l] [-d] <"everyone"|user|group> [,...] \ zfs allow [-l] [-d] -u [,..
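A minimal sketch of the grant syntax as it reads from the proposal (group name, permission list, and dataset are hypothetical, and the final syntax may differ):
# zfs allow -l -d staff create,snapshot,mount,destroy tank/home
Here -l would apply the permissions locally on tank/home itself and -d to its descendants, so members of group staff could manage datasets under tank/home without full root privileges.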

Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread Dana H. Myers
Jonathan Wheeler wrote: I'm not a ZFS expert - I'm just an enthusiastic user inside Sun. Here are some brief observations: > Bonnie > [quoted bonnie results header: Sequential Output, Sequential Input, and Random Seeks columns (per-char and block rates with %CPU)]

[zfs-discuss] Re: half duplex read/write operations to disk sometimes?

2006-07-17 Thread Roch
Hi Sean, You suffer from an extreme bout of 6429205 "each zpool needs to monitor its throughput and throttle heavy writers". When this is fixed, your responsiveness will be better. Note to Mark, Sean is more than willing to test any fix we would have for this... -r

Re: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
On Mon, 17 Jul 2006, Cindy Swearingen wrote: Hi Julian, Can you send me the documentation pointer that says 2 TB isn't supported on the Solaris 10 6/06 release? As per my original post: http://docs.sun.com/app/docs/doc/817-5093/6mkisoq1k?a=view#disksconcepts-17 This doesn't say which version

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Al Hopper
On Mon, 17 Jul 2006, Darren J Moffat wrote: > Jeff Bonwick wrote > > zpool create data unreplicated A B C > > > > The extra typing would be annoying, but would make it almost impossible > > to get the wrong behavior by accident. > > I think that is a very good idea from a usability view point.

[zfs-discuss] Re: Mirroring better with checksums?

2006-07-17 Thread Anton B. Rang
Well, it's not related to RAID-Z at all, but yes, mirroring is better with ZFS. The checksums allow bad data on either side of the mirror to be detected, so if for some reason one disk is sometimes losing or damaging a write, the other disk can provide the good data (and ZFS can tell which is co

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Darren J Moffat
Jeff Bonwick wrote zpool create data unreplicated A B C The extra typing would be annoying, but would make it almost impossible to get the wrong behavior by accident. I think that is a very good idea from a usability view point. It is better to have to type a few more chars to explic

Re: [zfs-discuss] Large device support

2006-07-17 Thread Cindy Swearingen
Hi Julian, Can you send me the documentation pointer that says 2 TB isn't supported on the Solaris 10 6/06 release? The 2 TB limit was lifted in the Solaris 10 1/06 release, as described here: http://docs.sun.com/app/docs/doc/817-5093/6mkisoq1j?a=view#ftzen Thanks, Cindy J.P. King wrote:

Re: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
Well, if in fact sd/ssd with EFI labels still has a 2TB limit, then create an SMI label with one slice representing the whole disk and put zfs on that slice. If need be, manually turn on the write cache afterwards. Well, in fact it turned out that the firmware on the device needed upgrading to support the a

Re: [zfs-discuss] Large device support

2006-07-17 Thread J.P. King
Well, if in fact sd/ssd with EFI labels still has a 2TB limit, then create an SMI label with one slice representing the whole disk and put zfs on that slice. If need be, manually turn on the write cache afterwards. How do you suggest that I create a slice representing the whole disk? format (with or without
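For anyone following this recipe, a heavily hedged sketch of one way to do it with format(1M) in expert mode (menu names from memory; the device name and slice layout are illustrative only):
# format -e
(select the disk, choose "label" and pick the SMI label, then use "partition"
 to grow slice 0 to cover the whole disk and write the label again)
format> cache
cache> write_cache
write_cache> enable
# zpool create data c3t0d0s0
Giving ZFS a slice rather than a whole disk means it will not enable the write cache itself, which is why the manual step is mentioned in this thread.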

Re: [zfs-discuss] Enabling compression/encryption on a populated filesystem

2006-07-17 Thread Luke Scharf
Darren J Moffat wrote: But the real question is how do you tell the admin "it's done, now the filesystem is safe". With compression you don't generally care if some old stuff didn't compress (and with the current implementation it has to compress a certain amount or it gets written uncompressed

Re: [zfs-discuss] Large device support

2006-07-17 Thread Robert Milkowski
Hello J.P., Monday, July 17, 2006, 2:15:56 PM, you wrote: JPK> Possibly not the right list, but the only appropriate one I knew about. JPK> I have a Solaris box (just reinstalled to Sol 10 606) with a 3.19TB device JPK> hanging off it, attached by fibre. JPK> Solaris refuses to see this device

Re: [zfs-discuss] Re: zvol Performance

2006-07-17 Thread Neil Perrin
This is change request: 6428639 large writes to zvol synchs too much, better cut down a little which I have a fix for, but it hasn't been put back. Neil. Jürgen Keil wrote On 07/17/06 04:18,: Further testing revealed that it wasn't an iSCSI performance issue but a zvol issue. Testing on a SA

[zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?

2006-07-17 Thread Jonathan Wheeler
Hi All, I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes I'm that

[zfs-discuss] Large device support

2006-07-17 Thread J.P. King
Possibly not the right list, but the only appropriate one I knew about. I have a Solaris box (just reinstalled to Sol 10 606) with a 3.19TB device hanging off it, attached by fibre. Solaris refuses to see this device except as a 1.19 TB device. Documentation that I have found (http://docs.

[zfs-discuss] Re: howto reduce ?zfs introduced? noise

2006-07-17 Thread Thomas Maier-Komor
Thanks Robert, that's exactly what I was looking for. I will try it when I come back home tomorrow. Is it possible to set this value in /etc/system, too? Cheers, Tom

[zfs-discuss] Re: zpool unavailable after reboot

2006-07-17 Thread Mikael Kjerrman
Jeff, thanks for your answer, and I almost wish I did type it wrong (the easy explanation that I messed up :-) but from what I can tell I did get it right --- zpool commands I ran --- bash-3.00# grep zpool /.bash_history zpool zpool create data raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c2t0d0 c2

[zfs-discuss] Re: zvol Performance

2006-07-17 Thread Jürgen Keil
> Further testing revealed > that it wasn't an iSCSI performance issue but a zvol > issue. Testing on a SATA disk locally, I get these > numbers (sequentual write): > > UFS: 38MB/s > ZFS: 38MB/s > Zvol UFS: 6MB/s > Zvol Raw: ~6MB/s > > ZFS is nice and fast but Zvol performance just drops > off
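For anyone trying to reproduce the comparison above, a rough sketch of the kind of sequential-write test involved (pool, volume, and sizes are made up; results will depend on the zvol sync issue discussed in this thread):
# zfs create -V 4g tank/testvol
Raw zvol write:
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=128k count=8192
UFS on the zvol:
# newfs /dev/zvol/rdsk/tank/testvol
# mount /dev/zvol/dsk/tank/testvol /mnt
# dd if=/dev/zero of=/mnt/testfile bs=128k count=8192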

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Michael Schuster - Sun Microsystems
Jeff Bonwick wrote: One option -- I confess up front that I don't really like it -- would be to make 'unreplicated' an explicit replication type (in addition to mirror and raidz), so that you couldn't get it by accident: zpool create data unreplicated A B C > The extra typing would b

Re: [zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Jeff Bonwick
> I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot > the whole pool became unavailable after apparently losing a disk drive. > [...] > NAME STATE READ WRITE CKSUM > data UNAVAIL 0 0 0 insufficient replicas > c1t0d0 ON

[zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Mikael Kjerrman
Hi, so it happened... I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot the whole pool became unavailable after apparently losing a disk drive. (The drive is seemingly ok as far as I can tell from other commands) --- bootlog --- Jul 17 09:57:38 expprd fmd: [ID 441519 daemon

[zfs-discuss] zvol Performance

2006-07-17 Thread Ben Rockwood
Hello, I'm curious if anyone would mind sharing their experiences with zvol's. I recently started using zvol as an iSCSI backend and was surprised by the performance I was getting. Further testing revealed that it wasn't an iSCSI performance issue but a zvol issue. Testing on a SATA disk l