Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Gregory Shaw
On Fri, 2006-05-26 at 17:40, Bart Smaalders wrote:
> Gregory Shaw wrote:
> > I had a question to the group: In the different ZFS discussions in
> > zfs-discuss, I've seen a recurring theme of disabling write cache on
> > disks. I would think that the performance increase of using write ca…

Re: [zfs-discuss] ata panic

2006-05-26 Thread Marty Faltesek
It's curious that your drive is not using DMA. Please append ::msgbuf output, and if you can provide access to the core that would be even better.

On Fri, 2006-05-26 at 18:55 -0400, Rob Logan wrote:
> `mv`ing files from a zfs dir to another zfs filesystem
> in the same pool will panic an 8 sata raidz…

Re: [zfs-discuss] ZFS mirror and read policy; kstat I/O values for zfs

2006-05-26 Thread Jeff Bonwick
> You are almost certainly running into this known bug:
>
> 630 reads from mirror are not spread evenly

Right. FYI, we fixed this in build 38.

Jeff

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Neil Perrin
ZFS enables the write cache and flushes it when committing transaction groups; this ensures that a transaction group appears on disk either in its entirety or not at all. It also flushes the disk write cache before returning from every synchronous request (e.g. fsync, O_DSYNC). This is done after writing o…
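
A minimal sketch of what this guarantee means on the application side, assuming a file on a ZFS dataset (path and payload are illustrative): once fsync() returns, the data has been committed and the drive's volatile write cache flushed, so it survives a power loss.

  /* Illustrative sketch: relies on ZFS flushing the disk write cache
   * before fsync() returns, as described above. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const char buf[] = "critical record\n";
          int fd = open("/tank/data/journal",
              O_WRONLY | O_CREAT | O_APPEND, 0644);

          if (fd == -1) {
                  perror("open");
                  exit(1);
          }
          if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) {
                  perror("write");
                  exit(1);
          }
          /* Returns only after the data (and the disk's cache) is stable. */
          if (fsync(fd) == -1) {
                  perror("fsync");
                  exit(1);
          }
          (void) close(fd);
          return (0);
  }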

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Chris Csanady
On 5/26/06, Bart Smaalders <[EMAIL PROTECTED]> wrote:
> There are two failure modes associated with disk write caches:

Failure modes aside, is there any benefit to a write cache when command queueing is available? It seems that the primary advantage is in allowing old ATA hardware to issue writ…

Re: [zfs-discuss] ZFS mirror and read policy; kstat I/O values for zfs

2006-05-26 Thread Matthew Ahrens
On Fri, May 26, 2006 at 09:40:57PM +0200, Daniel Rock wrote:
> So you can see the second disk of each mirror pair (c4tXd0) gets almost no
> I/O. How does ZFS decide from which mirror device to read?

You are almost certainly running into this known bug:

630 reads from mirror are not…

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Bart Smaalders
Gregory Shaw wrote:
> I had a question to the group: In the different ZFS discussions in
> zfs-discuss, I've seen a recurring theme of disabling write cache on
> disks. I would think that the performance increase of using write cache
> would be an advantage, and that write cache should be enabled.

[zfs-discuss] ata panic

2006-05-26 Thread Rob Logan
`mv`ing files from a zfs dir to another zfs filesystem in the same pool will panic an 8 sata raidz (http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm) system with:

  ::status
  debugging crash dump vmcore.3 (64-bit) from zfs
  operating system: 5.11 opensol-20060523 (i86pc)
  panic message: a…
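
For anyone wanting to poke at the dump themselves, a sketch of the usual post-mortem session with mdb(1) (the dump number and directory are illustrative):

  # cd /var/crash/hostname
  # mdb unix.3 vmcore.3
  > ::status     (summary of the dump, as quoted above)
  > ::msgbuf     (kernel message buffer, as Marty asks for elsewhere in the thread)
  > ::stack      (stack trace of the panicking thread)
  > $q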

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread Nicolas Williams
On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote:
> is raidz double parity optional or mandatory?

Backwards compatibility dictates that it will be optional.

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Ed Nadolski
Gregory Shaw wrote:
> In recent Linux distributions, when the kernel shuts down, the kernel
> will force the scsi drives to flush their write cache. I don't know if
> Solaris does the same, but I think not, due to the ongoing focus in
> Solaris on disabling write cache.

The Solaris sd(7D) s…
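
For reference, on Solaris the write cache of a SCSI disk can usually be inspected and toggled from format(1M) in expert mode; a rough sketch, since the exact menus vary by drive:

  # format -e
  (select the disk)
  format> cache
  cache> write_cache
  write_cache> display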

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread grant beattie
On Fri, May 26, 2006 at 10:33:34AM -0700, Eric Schrock wrote:
> RAID-Z is single-fault tolerant. If you take out two disks, then you
> no longer have the required redundancy to maintain your data. Build 42
> should contain double-parity RAID-Z, which will allow you to sustain two
> simultane…

[zfs-discuss] hard drive write cache

2006-05-26 Thread Gregory Shaw
I had a question to the group: In the different ZFS discussions in zfs-discuss, I've seen a recurring theme of disabling write cache on disks. I would think that the performance increase of using write cache would be an advantage, and that write cache should be enabled. Realistically, I ca…

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread Eric Schrock
It will be backported to an S10 Update, but it won't make U2. Expect it in U3.

- Eric

On Fri, May 26, 2006 at 09:32:42AM -1000, David J. Orman wrote:
> > RAID-Z is single-fault tolerant. If you take out two disks, then you
> > no longer have the required redundancy to maintain your data.

[zfs-discuss] ZFS mirror and read policy; kstat I/O values for zfs

2006-05-26 Thread Daniel Rock
Hi,

after some testing with ZFS I noticed that read requests are not scheduled evenly across the drives; the first one gets predominantly selected. My pool is set up as follows:

  NAME          STATE   READ WRITE CKSUM
  tpc           ONLINE     0     0     0
    mirror      ONLI…
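
One way to see the imbalance described here is to watch per-device activity while a read-heavy workload runs; a sketch using standard tools (device names illustrative):

  # iostat -xn 5
  (with the bug, the r/s column shows nearly all reads landing on the
   first side of each mirror, e.g. c3tXd0, while c4tXd0 sits almost idle)

  # kstat -p -c disk | grep ':reads'
  (the parseable per-disk read counters tell the same story)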

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread David J. Orman
> RAID-Z is single-fault tolerant. If you take out two disks, then you
> no longer have the required redundancy to maintain your data. Build 42
> should contain double-parity RAID-Z, which will allow you to sustain two
> simultaneous disk failures without data loss.

I'm not sure if t…

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread Eric Schrock
RAID-Z is single-fault tolerant. If you take out two disks, then you no longer have the required redundancy to maintain your data. Build 42 should contain double-parity RAID-Z, which will allow you to sustain two simultaneous disk failures without data loss. For an overview of ZFS fault toler…
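
For comparison, the two layouts would be created along these lines once double parity ships (pool and device names illustrative; the raidz2 keyword assumes the syntax planned for build 42):

  # zpool create tank raidz  c0t0d0 c0t1d0 c0t2d0 c0t3d0
  # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0

The first survives one disk failure per group; the second survives any two.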

[zfs-discuss] Re: How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread axa
> raidz is like raid 5, so you can survive the death of one disk, not 2.
> I would recommend you configure the 12 disks into 2 raidz groups;
> then you can survive the death of one drive from each group. This is
> what I did on my system.

Hi James,

Thank you very much. ;-) I'll configure 2 raidz grou…
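
For the 12-disk case above, the two-group layout James suggests would look something like this (device names illustrative):

  # zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

Each 6-disk raidz vdev can lose one disk without data loss, so the pool as a whole tolerates one failure in each group.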

[zfs-discuss] export/import

2006-05-26 Thread Gregory Shaw
Hi. In my testing, I had a pool on external storage:

  zpool status t6140_d0
    pool: t6140_d0
   state: ONLINE
   scrub: none requested
  config:

          NAME        STATE   READ WRITE CKSUM
          t6140_d0    ONLINE     0     0     0
            c9t2d2    ONLINE     0     0     0

In my testing, I…
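
Presumably the test then moves the pool between hosts with the usual sequence (host prompts illustrative):

  hostA# zpool export t6140_d0
  hostB# zpool import            (with no argument, lists pools available for import)
  hostB# zpool import t6140_d0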

Re: [zfs-discuss] Sequentiality & direct access to a file

2006-05-26 Thread Roch Bourbonnais - Performance Engineering
Scott Dickson writes:
> How does (or does) ZFS maintain sequentiality of the blocks of a file?
> If I mkfile on a clean UFS, I likely will get contiguous blocks for my
> file, right? A customer I talked to recently has a desire to access

you would get up to maxcontig worth of sequential b…
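
For anyone experimenting with the UFS side of this, a sketch (device and values illustrative); maxcontig caps how many blocks UFS clusters contiguously:

  # mkfile 1g /ufs/testfile
  # fstyp -v /dev/rdsk/c0t0d0s6 | grep maxcontig    (show the current value)
  # tunefs -a 128 /dev/rdsk/c0t0d0s6                (raise it to 128 blocks)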

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread James Dickens
On 5/26/06, axa <[EMAIL PROTECTED]> wrote:
> Hi here, I have a storage array with 12 SCSI disks configured as raidz.
> I tried to take out 2 SCSI disks while data was being written to the
> raidz pool. A couple of minutes after I took the disks out, the raidz
> pool crashed. I can't find any docs about raidz faul…

[zfs-discuss] Re: user undo

2006-05-26 Thread Anton B. Rang
> Anything that attempts to append characters on the end of the filename
> will run into trouble when the file name is already at NAME_MAX.

One simple solution is to restrict the total length of the name to NAME_MAX, truncating the original filename as necessary to allow appending. This does int…
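
A minimal sketch of that truncation in C, assuming a fixed undo suffix (the suffix format and names are illustrative, not from the original proposal):

  #include <limits.h>
  #include <stdio.h>
  #include <string.h>

  /*
   * Build name + suffix, truncating the original name so the result
   * never exceeds NAME_MAX, per the scheme described above.
   */
  static void
  undo_name(const char *name, const char *suffix, char *out, size_t outlen)
  {
          size_t sufflen = strlen(suffix);
          size_t keep = strlen(name);

          if (keep + sufflen > NAME_MAX)
                  keep = NAME_MAX - sufflen;      /* truncate original name */

          (void) snprintf(out, outlen, "%.*s%s", (int)keep, name, suffix);
  }

  int main(void)
  {
          char buf[NAME_MAX + 1];

          undo_name("some-very-long-filename", "#undo-1", buf, sizeof (buf));
          (void) printf("%s\n", buf);
          return (0);
  }

One caveat: truncation means two distinct names can collide once the suffix is added, so a real implementation would still need a disambiguation step.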