Re: [zfs-discuss] zfs: zvols minor #'s changing and causing probs w/ volumes

2006-10-31 Thread Edward Pilatowicz
I think this fix may be backported as part of the BrandZ project backport, but I don't think anyone is backporting it outside of that. You might want to add a new call record and open a sub-CR if you need this to be backported. The workaround is just what you've already discovered. dele

Re: [zfs-discuss] zfs: zvols minor #'s changing and causing probs w/ volumes

2006-10-31 Thread David I Radden
Thanks Ed.  The ticket shows the customer running Solaris 10.  Do you know if the fix will be incorporated in an S10 update or patch?   Or possibly an S10 workaround made available? Thanks again! Dave Radden x74861 --- Edward Pilatowicz wrote on 10/31/06 18:53: if you're running Solaris

Re: [zfs-discuss] zfs: zvols minor #'s changing and causing probs w/ volumes

2006-10-31 Thread Edward Pilatowicz
If you're running Solaris 10 or an early Nevada build then it's possible you're hitting this bug (which I fixed in build 35): 4976415 devfsadmd for zones could be smarter when major numbers change. If you're running a recent Nevada build then this could be a new issue. So what version of sola

Re: [zfs-discuss] ZFS Performance Question

2006-10-31 Thread eric kustarz
Jay Grogan wrote: Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems. Command ran: mkfile -v 6gb /ufs/tmpfile Test 1 UFS mounted LUN (2m2.373s) Test 2 UFS mounted LUN with directio option (5m31.802s) Test 3 ZFS LUN (single LUN in a pool) (3m13.126s) Sunfire V120 1 Qlogic 2
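For readers wanting to reproduce this comparison, a minimal sketch of the three runs, assuming invented device and mount-point names (the original post passed "6gb" as the mkfile size argument):

    # Test 1: UFS on a LUN, default mount options
    time mkfile -v 6g /ufs/tmpfile

    # Test 2: the same UFS remounted with forcedirectio
    mount -F ufs -o remount,forcedirectio /dev/dsk/c1t0d0s0 /ufs
    time mkfile -v 6g /ufs/tmpfile

    # Test 3: a ZFS pool built on a single LUN
    zpool create tank c1t1d0
    time mkfile -v 6g /tank/tmpfile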

Re: [zfs-discuss] ZFS thinks my 7-disk pool has imaginary disks

2006-10-31 Thread Matthew Ahrens
Rince wrote: Hi all, I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the following command: # zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 It worked fine, but I was slightly confused by the size yield (99 GB vs the 116 GB I had on my o
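For what it's worth, the yield Rince saw is what raidz1 parity accounting predicts: with seven equal disks, one disk's worth of space goes to parity, so usable capacity is roughly 6/7 of raw:

    116 GB raw x 6/7 ~= 99.4 GB usable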

Re[4]: [zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Robert Milkowski
Hello Luke, Wednesday, November 1, 2006, 12:59:49 AM, you wrote: LL> Robert, LL> On 10/31/06 3:55 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: >> Right now with S10U3 beta with over 40 disks I can get only about >> 1.6GB/s peak. LL> That's decent - is that the number reported by "zpool io

Re: Re[2]: [zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Luke Lonergan
Robert, On 10/31/06 3:55 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: > Right now with S10U3 beta with over 40 disks I can get only about > 1.6GB/s peak. That's decent - is that the number reported by "zpool iostat"? In that case then I think 1GB = 1024^3, my GB measurements are roughly "b
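For anyone following along, pool bandwidth figures like the one quoted here are typically read from zpool iostat with a sampling interval (pool name assumed):

    zpool iostat tank 5        # pool-wide read/write bandwidth every 5 seconds
    zpool iostat -v tank 5     # broken down per vdev and per disk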

Re[2]: [zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Robert Milkowski
Hello Luke, Wednesday, November 1, 2006, 12:13:28 AM, you wrote: LL> Robert, LL> On 10/31/06 3:10 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: >> Even then I would try first to test with more real load on ZFS as it >> can turn out that ZFS performs better anyway. Despite problems with >> l

Re: Re[2]: [zfs-discuss] ZFS Performance Question

2006-10-31 Thread Luke Lonergan
Robert, On 10/31/06 3:12 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: > Almost certainly not true. I did some simple tests today with U3 beta > on thumper and still can observe "jumping" writes with sequential > 'dd'. We crossed posts. There are some firmware issues with the Hitachi disks

Re: [zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Luke Lonergan
Robert, On 10/31/06 3:10 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: > Even then I would try first to test with more real load on ZFS as it > can turn out that ZFS performs better anyway. Despite problems with > large sequential writes I find ZFS to perform better in many more > complex s

Re[2]: [zfs-discuss] ZFS Performance Question

2006-10-31 Thread Robert Milkowski
Hello Luke, Tuesday, October 31, 2006, 6:09:23 PM, you wrote: LL> Robert, LL> >> I believe it's not solved yet but you may want to try with >> latest nevada and see if there's a difference. LL> It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express LL> post build 47 I think. Al

Re: [zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Robert Milkowski
Hello Jay, Tuesday, October 31, 2006, 7:09:12 PM, you wrote: JG> Thanks Robert, I was hoping something like that hadn't turned up; a lot JG> of what I will need to use ZFS for will be sequential writes at this time. JG> Even then I would try first to test with more real load on ZFS as it can tur

[zfs-discuss] zfs: zvols minor #'s changing and causing probs w/ volumes

2006-10-31 Thread Jason Gallagher - Sun Microsystems
Team, **Please respond to me and my coworker listed in the Cc, since neither one of us is on this alias** QUICK PROBLEM DESCRIPTION: Customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in
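The "match" statement referred to here is the device resource in zonecfg; a hypothetical sketch with invented zone, pool, and dataset names:

    zonecfg -z myzone
    zonecfg:myzone> add device
    zonecfg:myzone:device> set match=/dev/zvol/rdsk/tank/zonevols/*
    zonecfg:myzone:device> end
    zonecfg:myzone> commit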

Re: [zfs-discuss] zfs sharenfs inheritance

2006-10-31 Thread Robert Thurlow
Robert Petkus wrote: When using sharenfs, do I really need to NFS export the parent zfs filesystem *and* all of its children? For example, if I have /zfshome /zfshome/user1 /zfshome/user1+n it seems to me like I need to mount each of these exported filesystems individually on the NFS client. T

Re: [zfs-discuss] zfs sharenfs inheritance

2006-10-31 Thread Darren . Reed
Robert Petkus wrote: Folks, When using sharenfs, do I really need to NFS export the parent zfs filesystem *and* all of its children? For example, if I have /zfshome /zfshome/user1 /zfshome/user1+n it seems to me like I need to mount each of these exported filesystems individually on the NFS cli

Re: [zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Chad Leigh -- Shire.Net LLC
On Oct 31, 2006, at 11:09 AM, Jay Grogan wrote: Thanks Robert, I was hoping something like that hadn't turned up; a lot of what I will need to use ZFS for will be sequential writes at this time. I don't know what it is worth, but I was using iozone (www.iozone.org) on my ZFS on top of Areca R

[zfs-discuss] zfs sharenfs inheritance

2006-10-31 Thread Robert Petkus
Folks, When using sharenfs, do I really need to NFS export the parent zfs filesystem *and* all of its children? For example, if I have /zfshome /zfshome/user1 /zfshome/user1+n it seems to me like I need to mount each of these exported filesystems individually on the NFS client. This scheme doesn'
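For context, sharenfs is an inheritable property, so setting it once on the parent shares every descendant automatically, but each child file system remains its own NFS export and its own client-side mount (a sketch; server and mount-point names invented):

    zfs set sharenfs=rw zfshome      # zfshome/user1 .. user1+n inherit the property
    zfs get -r sharenfs zfshome      # confirm the inherited values
    # on the client, every file system is still a separate NFS mount:
    mount -F nfs server:/zfshome/user1 /mnt/user1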

[zfs-discuss] Re: ZFS Automatic Device Error Notification?

2006-10-31 Thread Wes Williams
Thanks Richard, this seems to be exactly what I was looking for.

Re: [zfs-discuss] ZFS Automatic Device Error Notification?

2006-10-31 Thread Richard Elling - PAE
There are several ways to do this. Two of the most popular are syslog and SNMP. syslog works, just like it always did (or didn't). For more details on FMA and how it works with SNMP traps, see the conversations on the OpenSolaris fault management community, http://www.opensolaris.org/os
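As a rough illustration of the syslog path (exact selectors may differ by release): FMA diagnosis messages arrive via syslogd at the daemon facility, so the stock /etc/syslog.conf entry below already lands them in /var/adm/messages, and a loghost line forwards them off-box:

    *.err;kern.debug;daemon.notice;mail.crit    /var/adm/messages
    daemon.notice                               @loghost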

[zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Jay Grogan
Thanks Robert, I was hoping something like that hadn't turned up; a lot of what I will need to use ZFS for will be sequential writes at this time.

RE: [zfs-discuss] ZFS Performance Question

2006-10-31 Thread Luke Lonergan
Robert, > I believe it's not solved yet but you may want to try with > latest nevada and see if there's a difference. It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express post build 47 I think. - Luke

Re: [zfs-discuss] ZFS Automatic Device Error Notification?

2006-10-31 Thread Matty
On 10/31/06, Wes Williams <[EMAIL PROTECTED]> wrote: Okay, so now that I'm planning to build my NAS using ZFS, I need to devise or learn of a preexisting method to receive notification of ZFS-handled errors on a remote machine. For example, if a disk fails and I don't regularly login o

Re: [zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Richard Elling - PAE
Jay Grogan wrote: To answer your question: "Yes, I did expect the same or better performance than standard UFS" based on all the hype, and to quote Sun: "Blazing performance: ZFS is based on a transactional object model that removes most of the traditional constraints on the order of issuing I/Os, w

Re: Re[2]: [zfs-discuss] thousands of ZFS file systems

2006-10-31 Thread Cyril Plisko
On 10/31/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: Hello Cyril, Tuesday, October 31, 2006, 8:30:50 AM, you wrote: CP> On 10/30/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: >> 1. rebooting server could take several hours right now with so many file >> systems >> I believe this p

Re[2]: [zfs-discuss] thousands of ZFS file systems

2006-10-31 Thread Robert Milkowski
Hello Cyril, Tuesday, October 31, 2006, 8:30:50 AM, you wrote: CP> On 10/30/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: >> 1. rebooting server could take several hours right now with so many file >> systems >> I believe this problem is being addressed right now CP> Well, I've done
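To get a feel for the scale being discussed, a trivial Bourne-shell sketch that creates file systems in bulk (pool name and count invented):

    i=1
    while [ $i -le 5000 ]; do
        zfs create tank/fs$i
        i=`expr $i + 1`
    done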

[zfs-discuss] Re: ZFS Performance Question

2006-10-31 Thread Jay Grogan
To answer your question: "Yes, I did expect the same or better performance than standard UFS" based on all the hype, and to quote Sun: "Blazing performance: ZFS is based on a transactional object model that removes most of the traditional constraints on the order of issuing I/Os, which results in huge

[zfs-discuss] ZFS Automatic Device Error Notification?

2006-10-31 Thread Wes Williams
Okay, so now that I'm planning to build my NAS using ZFS, I need to devise or learn of a preexisting method to receive notification of ZFS-handled errors on a remote machine. For example, if a disk fails and I don't regularly login or SSH into the ZFS server, I'd like an email or some oth
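One low-tech approach, sketched here as an assumption rather than a supported tool: run a script from cron that mails the output of "zpool status -x" whenever it reports anything other than healthy pools (recipient address is a placeholder):

    #!/bin/sh
    # mail an alert if any pool is degraded or faulted
    STATUS=`zpool status -x`
    if [ "$STATUS" != "all pools are healthy" ]; then
        echo "$STATUS" | mailx -s "zpool alert on `hostname`" admin@example.com
    fi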

[zfs-discuss] Re: ZFS Automatic Device Error Notification?

2006-10-31 Thread Wes Williams
> I use the smartmontools smartd daemon to email me when disk drives are about to fail. If you are interested in configuring smartd to send email notifications prior to a disk failing, check out the following blog post: > http://prefetch.net/blog/index.php/2006/01/05/using-smartd-
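Along the lines of that post, a minimal smartd.conf sketch (device path and address are placeholders; directive details may vary by smartmontools version):

    /dev/rdsk/c0t0d0s0 -a -m admin@example.com   # monitor all attributes, mail on trouble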

Re: [zfs-discuss] thousands of ZFS file systems

2006-10-31 Thread Roch - PAE
Erblichs writes: > Hi, > My suggestion is to direct any command output that may print thousands of lines to a file. > I have not tried that number of FSs. So, my first > suggestion is to have a lot of physical memory installed. I seem to recall 64K per FS and being worked on t

Re: [zfs-discuss] ZFS Performance Question

2006-10-31 Thread Robert Milkowski
Hello Jay, Tuesday, October 31, 2006, 3:31:54 AM, you wrote: JG> Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems. JG> Command ran: mkfile -v 6gb /ufs/tmpfile JG> Test 1 UFS mounted LUN (2m2.373s) JG> Test 2 UFS mounted LUN with directio option (5m31.802s) JG> Test 3 ZFS LUN

[zfs-discuss] Re: recover zfs data from a crashed system?

2006-10-31 Thread Larry Becke
I was doing some experimentation of my own, using SCSI-attached JBOD. I built a test zpool spanning 7 drives (raidz) on S10U2. The 7 disks were split among 3 controllers. I then started replacing the 18GB drives with 36GB drives, one at a time, and watched it rebuild the zpool, growing as it
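The procedure Larry describes maps onto zpool replace, one disk at a time (pool and device names invented); note the extra capacity only becomes available once every disk in the raidz has been upgraded:

    zpool replace tank c1t0d0     # swap in the larger disk at the same target
    zpool status tank             # wait for the resilver to finish
    # repeat for each remaining disk; capacity grows after the last one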