Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Mika Borner
>The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label may need to be zeroed and reapplied if you set up the initial vdev on a slice. If you introduced the entire disk to the pool you should be fine, but I believe you'll still need to offline/online the pool.
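A rough sketch of that workflow for the whole-disk case, with a hypothetical pool and device name (the slice case would first need the label zeroed and reapplied, as described above):

   # zpool export tank          (the "offline" half of offline/online-ing the pool)
   # zpool import tank          (on import, ZFS should pick up the grown LUN)

Later ZFS releases added "zpool online -e" and an autoexpand pool property for this, but those postdate the release under discussion.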

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Richard Elling
Olaf Manczak wrote: Eric Schrock wrote: On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote: You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of it, but that's a given in the case of either hardware or software raid.

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Jonathan Edwards
> -Does ZFS in the current version support LUN extension? With UFS, we have to zero the VTOC, and then adjust the new disk geometry. How does it look with ZFS?
The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label may need to be zeroed and reapplied if you set up the initial vdev on a slice.

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Bart Smaalders
Gregory Shaw wrote: On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote: How would ZFS self heal in this case? > You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of it, but that's a given in the case of either hardware or software raid.

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Olaf Manczak
Eric Schrock wrote: On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote: You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of it, but that's a given in the case of either hardware or software raid.

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Eric Schrock
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote: > You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of it, but that's a given in the case of either hardware or software raid.
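To make the trade-off being discussed concrete: on a single RAID-5 LUN, ZFS can detect bad blocks via checksums but has nothing to repair them from, whereas a ZFS mirror of two RAID-5 LUNs can self-heal. A sketch with hypothetical device names (the two zpool create lines are alternatives, not a sequence):

   # zpool create tank c4t0d0                  (one hardware RAID-5 LUN: detection only)
   # zpool create tank mirror c4t0d0 c4t1d0    (two RAID-5 LUNs mirrored by ZFS: detection and repair)
   # zpool scrub tank                          (verify every block and repair where redundancy allows)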

[zfs-discuss] Re: ZFS and Storage

2006-06-26 Thread Nathanael Burton
> If you've got hardware raid-5, why not just run regular (non-raid) pools on top of the raid-5?
>
> I wouldn't go back to JBOD. Hardware arrays offer a number of advantages over JBOD:
> - disk microcode management
> - optimized access to storage
> - large write cache

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Philip Brown
Roch wrote: And, if the load can accommodate a reorder, to get top per-spindle read-streaming performance, a cp(1) of the file should do wonders on the layout. But there may not be filesystem space for double the data. Sounds like there is a need for a zfs-defragment-file utility, perhaps.
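A minimal sketch of the cp(1) trick being referred to, assuming the application is quiesced and using hypothetical paths:

   # cp /tank/db/datafile /tank/db/datafile.new    (COW allocates a fresh, largely sequential copy)
   # mv /tank/db/datafile.new /tank/db/datafile    (swap the re-laid-out copy into place)

As noted, this needs room for a second copy of the file while both exist, which is exactly the objection raised.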

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Gregory Shaw
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote: > On Tue, 2006-06-27 at 02:27, Gregory Shaw wrote: > > On Jun 26, 2006, at 1:15 AM, Mika Borner wrote: > > > What we need would be the feature to use JBODs. > > If you've got hardware raid-5, why not just run regular (non-raid) pools on top of the raid-5?

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Nathan Kroenert
On Tue, 2006-06-27 at 02:27, Gregory Shaw wrote: > On Jun 26, 2006, at 1:15 AM, Mika Borner wrote: > > What we need would be the feature to use JBODs. > If you've got hardware raid-5, why not just run regular (non-raid) pools on top of the raid-5? > I wouldn't go back to JBOD. Hardware arrays offer a number of advantages over JBOD: disk microcode management, optimized access to storage, large write cache.
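For concreteness, the two layouts being weighed look roughly like this (hypothetical device names; alternatives, not a sequence):

   # zpool create tank c4t0d0 c4t1d0                      (plain striped pool on hardware RAID-5 LUNs)
   # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0  (raidz directly on JBOD disks)

The first leaves redundancy and rebuilds to the array; the second lets ZFS own them.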

Re: [zfs-discuss] Bandwidth disparity between NFS and ZFS

2006-06-26 Thread Chris Csanady
On 6/26/06, Neil Perrin <[EMAIL PROTECTED]> wrote: Robert Milkowski wrote on 06/25/06 04:12: > Hello Neil, > Saturday, June 24, 2006, 3:46:34 PM, you wrote: > NP> Chris, > NP> The data will be written twice on ZFS using NFS. This is because NFS > NP> on closing the file internally uses fsync to cause the writes to be committed.

Re: [zfs-discuss] Solaris 10 6/06 now available for download

2006-06-26 Thread Gary Combs
I just downloaded sol-10-u2-ga-sparc-dvd-iso-a.zip. Try again. Gary. Larry Wake wrote: Shannon Roddy wrote: Noel Dellofano wrote: Solaris 10u2 was released today. You can now download it from here: http://www.sun.com/software/solaris/get.jsp

[zfs-discuss] Solaris 10 6/06 now available for download

2006-06-26 Thread Larry Wake
Shannon Roddy wrote: Noel Dellofano wrote: Solaris 10u2 was released today. You can now download it from here: http://www.sun.com/software/solaris/get.jsp Seems the download links are dead except for x86-64. No Sparc downloads. There were some problems getting the links set up

Re: [zfs-discuss] status question regarding sol10u2

2006-06-26 Thread Dennis Clarke
> Noel Dellofano wrote: >> Solaris 10u2 was released today. You can now download it from here: >> >> http://www.sun.com/software/solaris/get.jsp > > Seems the download links are dead except for x86-64. No Sparc downloads. > Everything works perfectly. $ ls -1 sol-10-u2-ga-sparc-lang-iso.zip so

Re: [zfs-discuss] status question regarding sol10u2

2006-06-26 Thread Nicholas Senedzuk
I had the same problem. On 6/26/06, Shannon Roddy <[EMAIL PROTECTED]> wrote: Noel Dellofano wrote: > Solaris 10u2 was released today. You can now download it from here: > http://www.sun.com/software/solaris/get.jsp Seems the download links are dead except for x86-64. No Sparc downloads.

Re: [zfs-discuss] status question regarding sol10u2

2006-06-26 Thread Shannon Roddy
Noel Dellofano wrote: > Solaris 10u2 was released today. You can now download it from here: > http://www.sun.com/software/solaris/get.jsp Seems the download links are dead except for x86-64. No Sparc downloads.

Re: [zfs-discuss] status question regarding sol10u2

2006-06-26 Thread Noel Dellofano
Solaris 10u2 was released today. You can now download it from here: http://www.sun.com/software/solaris/get.jsp Noel Joe Little wrote: So, if I recall from this list, a mid-June release to the web was expected for S10U2. I'm about to do some final production testing, and I was wondering

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Darren Dunham
> -Does ZFS in the current version support LUN extension? With UFS, we have to zero the VTOC, and then adjust the new disk geometry. How does it look with ZFS?
I don't understand what you're asking. What problem is solved by zeroing the VTOC? When the underlying storage
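For anyone following along, the label-inspection step under discussion looks something like this (device name hypothetical):

   # prtvtoc /dev/rdsk/c4t0d0s2    (print the current label's partition map and capacity)
   # format -e                     (expert mode; allows relabeling the device, including writing an EFI label)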

Re: [zfs-discuss] Bandwidth disparity between NFS and ZFS

2006-06-26 Thread Neil Perrin
Robert Milkowski wrote on 06/25/06 04:12: Hello Neil, Saturday, June 24, 2006, 3:46:34 PM, you wrote: NP> Chris, NP> The data will be written twice on ZFS using NFS. This is because NFS NP> on closing the file internally uses fsync to cause the writes to be NP> committed. This causes the ZIL
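A rough way to watch this double write from the server side, with a hypothetical mount point and pool name:

   client$ dd if=/dev/zero of=/mnt/tank/testfile bs=1024k count=100
   server# zpool iostat tank 1

While the copy runs, the pool's write bandwidth should be roughly twice the client's throughput, since the ZIL commits and the later transaction-group writes both hit the disks.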

Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-06-26 Thread Tabriz
James C. McPherson wrote: James C. McPherson wrote: Jeff Bonwick wrote: 6420204 root filesystem's delete queue is not running The workaround for this bug is to issue the following command... # zfs set readonly=off / This will cause the delete queue to start up and should flush your queue. Thanks

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Gregory Shaw
On Jun 26, 2006, at 1:15 AM, Mika Borner wrote: Hi, now that Solaris 10 06/06 is finally downloadable I have some questions about ZFS. -We have a big storage system supporting RAID5 and RAID1. At the moment, we only use RAID5 (for non-Solaris systems as well). We are thinking about using ZFS on those LUNs instead of UFS.

Re: [zfs-discuss] Re: ZFS Wiki?

2006-06-26 Thread Jeff Victor
Mike Gerdts wrote: On 6/25/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote: Now, looking forward a bit, where does the ZFS integration with zones documentation belong? Some of it will appear in the next update to the Sun BluePrint "Solaris Containers Architecture Technology Guide."

Re: [zfs-discuss] Re: Re: ZFS Wiki?

2006-06-26 Thread Jeff Victor
A lesson we learned with Solaris Zones applies here to ZFS. Accomplishing high-level goals, e.g. "prepare an appropriate environment for application XYZ installation (Zones)" or "prepare an appropriate filesystem for application XYZ data (ZFS)", is different than it was before Solaris 10.
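By way of illustration, one concrete piece of the ZFS-with-Zones story is delegating a dataset to a zone; a minimal sketch with hypothetical pool and zone names:

   # zfs create tank/xyzdata
   # zonecfg -z xyz-zone
   zonecfg:xyz-zone> add dataset
   zonecfg:xyz-zone:dataset> set name=tank/xyzdata
   zonecfg:xyz-zone:dataset> end
   zonecfg:xyz-zone> commit
   zonecfg:xyz-zone> exit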

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Roch
About: -I've read the threads about zfs and databases. Still I'm not 100% convinced about read performance. Doesn't the fragmentation of the large database files (because of the concept of COW) impact read performance? I do need to get back to this thread. The way I am currently

Re: [zfs-discuss] Bandwidth disparity between NFS and ZFS (Solved)

2006-06-26 Thread Chris Csanady
I don't know how I missed it, but there are periodic commit requests by the NFS client. These occur often enough that the data ends up being written twice, as you have suggested. In any case, this is really annoying, as dd certainly isn't requesting this behavior. Perhaps the clients are just
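One way to confirm the client-side commit behavior described here is the NFS operation counters on the client; the commit count should climb while the dd runs:

   $ nfsstat -c    (per-operation NFS client statistics, including the number of commit calls)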

[zfs-discuss] zfs list -o usage info missing 'name'

2006-06-26 Thread Gavin Maltby
Hi, probably been reported a while back, but 'zfs list -o' does not list the rather useful (and obvious) 'name' property, nor does the manpage at a quick read. snv_42.
# zfs list -o
missing argument for 'o' option
usage:
        list [-rH] [-o property[,property]...] [-t type[,type]...]
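For reference, an invocation that does include the property in question, using standard property names:

   # zfs list -o name,used,available,mountpoint
   # zfs list -o name -t filesystem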

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-26 Thread Roch
So if you have a single thread doing open/write/close of 8K files and get 1.25 MB/sec, that tells me you have something like a 6ms I/O latency. Which looks reasonable also. What does iostat -x svc_t (client side) say? 400ms seems high for the workload _and_ doesn't match my formula, so I don't
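As a quick sanity check on that estimate (8 KB per synchronous open/write/close, roughly one I/O per file at about 6 ms each):

   $ echo 'scale=4; 8 / (6 / 1000) / 1024' | bc

which works out to about 1.3 MB/s, consistent with the 1.25 MB/sec observed.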

[zfs-discuss] ZFS and Storage

2006-06-26 Thread Mika Borner
Hi, now that Solaris 10 06/06 is finally downloadable I have some questions about ZFS. -We have a big storage system supporting RAID5 and RAID1. At the moment, we only use RAID5 (for non-Solaris systems as well). We are thinking about using ZFS on those LUNs instead of UFS. As ZFS on Hardware RAID5