Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-09-08 Thread Chris Csanady
2009/9/7 Ritesh Raj Sarraf : > The Discard/Trim command is also available as part of the SCSI standard now. > > Now, if you look from a SAN perspective, you will need a little of both. > Filesystems will need to be able to deallocate blocks and then the same > should be triggered as a SCSI Trim to

Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-09-07 Thread Chris Csanady
2009/9/7 Richard Elling : > On Sep 7, 2009, at 10:20 AM, Bob Friesenhahn wrote: > >> The purpose of the TRIM command is to allow the FLASH device to reclaim >> and erase storage at its leisure so that the writer does not need to wait >> for erasure once the device becomes full.  Otherwise the FLASH

Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Chris Csanady
2009/9/2 Eric Sproul : > > Adam, > Is it known approximately when this bug was introduced?  I have a system > running > snv_111 with a large raidz2 pool and I keep running into checksum errors > though > the drives are brand new.  They are 2TB drives, but the pool is only about 14% > used (~250G/
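
For readers hitting similar symptoms, the usual first step is to see which files are affected and whether the errors recur after a fresh scrub. A minimal sketch; the pool name "tank" is a placeholder:

    # List pools with errors, then show per-device counts and affected files
    zpool status -x
    zpool status -v tank

    # Clear the counters and re-scrub to see whether new errors appear
    zpool clear tank
    zpool scrub tank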

Re: [zfs-discuss] odd slog behavior on B70

2007-11-26 Thread Chris Csanady
On Nov 26, 2007 8:41 PM, Joe Little <[EMAIL PROTECTED]> wrote: > I was playing with a Gigabyte i-RAM card and found out it works great > to improve overall performance when there are a lot of writes of small > files over NFS to such a ZFS pool. > > However, I noted a frequent situation in periods o
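
For reference, a dedicated intent-log device of the kind described here is attached with "zpool add ... log ..."; a sketch, with placeholder pool and device names:

    # Attach the i-RAM (or any fast, non-volatile device) as a separate intent log
    zpool add tank log c3t0d0

    # Watch how synchronous NFS writes land on the log device
    zpool iostat -v tank 5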

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-20 Thread Chris Csanady
On Nov 19, 2007 10:08 PM, Richard Elling <[EMAIL PROTECTED]> wrote: > James Cone wrote: > > Hello All, > > > > Here's a possibly-silly proposal from a non-expert. > > > > Summarising the problem: > >- there's a conflict between small ZFS record size, for good random > > update performance, and
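
The mitigation usually discussed in this context is matching the dataset recordsize to the database block size before loading data; a hedged sketch with placeholder names and an assumed 8 KB database block:

    # Match recordsize to the database block size
    # (only affects files written after the change)
    zfs create tank/db
    zfs set recordsize=8k tank/db
    zfs get recordsize tank/db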

Re: [zfs-discuss] Can you create a degraded raidz vdev?

2007-06-14 Thread Chris Csanady
if it is a good idea. (I don't have any UFS filesystems.) Perhaps someone else can comment on this? Still, you will have no redundancy during this operation, so I hope you have backups. Chris Csanady
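
The trick being referred to is standing in a sparse file for the missing disk and then taking it offline, which leaves a degraded raidz; a sketch, with placeholder sizes and device names:

    # Create a sparse file at least as large as the real disks
    mkfile -n 500g /var/tmp/fakedisk

    # Build the raidz with the file standing in for the absent disk
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 /var/tmp/fakedisk

    # Take the file offline; the pool runs degraded until the real disk
    # arrives and is swapped in with 'zpool replace'
    zpool offline tank /var/tmp/fakedisk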

Re: [zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]

2007-06-02 Thread Chris Csanady
On 6/2/07, Richard Elling <[EMAIL PROTECTED]> wrote: Chris Csanady wrote: > On 6/1/07, Frank Cusack <[EMAIL PROTECTED]> wrote: >> On June 1, 2007 9:44:23 AM -0700 Richard Elling <[EMAIL PROTECTED]> >> wrote: >> [...] >> > Semiconductor memori

Re: [zfs-discuss] Thoughts on CF/SSDs [was: ZFS - Use h/w raid or not?Thoughts.Considerations.]

2007-06-01 Thread Chris Csanady
On 6/1/07, Frank Cusack <[EMAIL PROTECTED]> wrote: On June 1, 2007 9:44:23 AM -0700 Richard Elling <[EMAIL PROTECTED]> wrote: [...] > Semiconductor memories are accessed in parallel. Spinning disks are > accessed > serially. Let's take a look at a few examples and see what this looks > like... >

Re: [zfs-discuss] Zpool, RaidZ & how it spreads its disk load?

2007-05-07 Thread Chris Csanady
On 5/7/07, Tony Galway <[EMAIL PROTECTED]> wrote: Greetings learned ZFS geeks & gurus, Yet another question comes from my continued ZFS performance testing. This has to do with zpool iostat, and the strangeness that I do see. I've created an eight (8) disk raidz pool from a Sun 3510 fibre arra
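
When interpreting those numbers it helps to look at per-device rather than per-pool statistics, and to compare them against the kernel's own disk view; a sketch with a placeholder pool name:

    # Per-vdev breakdown, sampled every 5 seconds
    zpool iostat -v tank 5

    # Compare with the per-disk view from iostat
    iostat -xnz 5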

Re: [zfs-discuss] RAID-Z resilver broken

2007-04-11 Thread Chris Csanady
On 4/11/07, Marco van Lienen <[EMAIL PROTECTED]> wrote: A colleague at work and I have followed the same steps, included running a digest on the /test/file, on a SXCE:61 build today and can confirm the exact same, and disturbing?, result. My colleague mentioned to me he has witnessed the same '
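
The verification step mentioned here is simply checksumming the file before and after the resilver and scrub; a sketch using Solaris digest(1):

    # Record a checksum of the test file, repeat after the resilver
    # and scrub, and compare the two values
    digest -a sha1 /test/file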

[zfs-discuss] RAID-Z resilver broken

2007-04-07 Thread Chris Csanady
In a recent message, I detailed the excessive checksum errors that occurred after replacing a disk. It seems that after a resilver completes, it leaves a large number of blocks in the pool which fail to checksum properly. Afterward, it is necessary to scrub the pool in order to correct these err
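
The workaround described amounts to always following a disk replacement with an explicit scrub; a sketch with placeholder pool and device names:

    # Replace the failed disk and wait for the resilver to finish
    zpool replace tank c1t2d0 c1t5d0
    zpool status tank

    # Then scrub to repair any blocks the resilver left inconsistent
    zpool scrub tank
    zpool status -v tank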

[zfs-discuss] Re: Excessive checksum errors...

2007-04-05 Thread Chris Csanady
I have some further data now, and I don't think that it is a hardware problem. Halfway through the scrub, I rebooted and exchanged the controller and cable used with the "bad" disk. After restarting the scrub, it proceeded error-free until about the point where it left off, and then it resumed

[zfs-discuss] Excessive checksum errors...

2007-04-04 Thread Chris Csanady
After replacing a bad disk and waiting for the resilver to complete, I started a scrub of the pool. Currently, I have the pool mounted readonly, yet almost a quarter of the I/O is writes to the new disk. In fact, it looks like there are so many checksum errors, that zpool doesn't even list them p

Re: [zfs-discuss] ZFS and Firewire/USB enclosures

2007-03-20 Thread Chris Csanady
It looks like the following bug is still open: 6424510 usb ignores DKIOCFLUSHWRITECACHE Until it is fixed, I wouldn't even consider using ZFS on USB storage. Even so, not all bridge boards (Firewire included) implement this command. Unless you can verify that it functions correctly, it is sa
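
One hedged way to check whether flushes ever reach a given device is to watch for the DKIOCFLUSHWRITECACHE ioctl (0x422 in dkio.h) passing through ldi_ioctl while writing to the pool; a DTrace sketch, assuming the fbt probe is available on your build:

    # Count kernel-issued cache-flush ioctls (DKIOCFLUSHWRITECACHE = 0x422)
    dtrace -n 'fbt::ldi_ioctl:entry /arg1 == 0x422/ { @flushes = count(); }'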

Re: [zfs-discuss] Implementing fbarrier() on ZFS

2007-02-12 Thread Chris Csanady
2007/2/12, Frank Hofmann <[EMAIL PROTECTED]>: On Mon, 12 Feb 2007, Chris Csanady wrote: > This is true for NCQ with SATA, but SCSI also supports ordered tags, > so it should not be necessary. > > At least, that is my understanding. Except that ZFS doesn't talk SCSI, it t

Re: [zfs-discuss] Implementing fbarrier() on ZFS

2007-02-12 Thread Chris Csanady
2007/2/12, Frank Hofmann <[EMAIL PROTECTED]>: On Mon, 12 Feb 2007, Peter Schuller wrote: > Hello, > > Often fsync() is used not because one cares that some piece of data is on > stable storage, but because one wants to ensure the subsequent I/O operations > are performed after previous I/O opera

Re: Re: [zfs-discuss] Cheap ZFS homeserver.

2007-01-19 Thread Chris Csanady
2007/1/19, [EMAIL PROTECTED] <[EMAIL PROTECTED]>: >> "ACHI SATA ... probably look at Intel boards instead." What's ACHI ? I didn't see anything useful on Google or Wikipedia ... is it a chipset ? The issue I take with Intel is their chips are either grossly power hungry/hot (anything pre-Pentium M

Re: [zfs-discuss] Cheap ZFS homeserver.

2007-01-18 Thread Chris Csanady
2007/1/18, . <[EMAIL PROTECTED]>: 2. What consumer level SATAII chipsets work. 4-ports onboard is fine for now since I can always add a card later. I will need at least four ports to start. pci-e cards are highly preferred since pci-x is expensive and going to become rarer. (mark my words) S

Re: [zfs-discuss] snv_51 hangs

2006-11-14 Thread Chris Csanady
Thank you all for the very quick and informative responses. If it happens again, I will try to get a core out of it. Chris

[zfs-discuss] snv_51 hangs

2006-11-14 Thread Chris Csanady
I have experienced two hangs so far with snv_51. I was running snv_46 until recently, and it was rock solid, as were earlier builds. Is there a way for me to force a panic? It is an x86 machine, with only a serial console. Chris
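
For the archive: on x86 the usual approaches are booting with the kernel debugger loaded and breaking in from the serial console, or forcing a dump from a still-responsive shell; a sketch, assuming a dump device is already configured:

    # Check that a dump device and savecore directory are configured
    dumpadm

    # If a shell still responds, panic and save a crash dump
    reboot -d

    # Otherwise, boot with kmdb loaded (the '-k' boot flag); when the hang
    # occurs, send a BREAK on the serial console to enter kmdb, then run:
    #   $<systemdump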

Re: [zfs-discuss] Re: Dead drives and ZFS

2006-11-14 Thread Chris Csanady
On 11/14/06, Rainer Heilke <[EMAIL PROTECTED]> wrote: This makes sense for the most part (and yes, I think it should be done by the file system, not a manual grovelling through vdev labels). I agree, this should be done with a new command, as has been suggested. However, what I was suggesting
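
The "grovelling through vdev labels" mentioned here can at least be done read-only with zdb, which prints the labels stored on a device; a sketch with a placeholder device path:

    # Dump the vdev labels from a disk that used to belong to a pool
    zdb -l /dev/rdsk/c1t1d0s0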

Re: Re: [zfs-discuss] Re: Re: Re[2]: Re: Dead drives and ZFS

2006-11-14 Thread Chris Csanady
On 11/14/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: Hello Rainer, Tuesday, November 14, 2006, 4:43:32 AM, you wrote: RH> Sorry for the delay... RH> No, it doesn't. The format command shows the drive, but zpool RH> import does not find any pools. I've also used the detached bad RH> SATA dr

Re: Re[2]: [zfs-discuss] Re: Dead drives and ZFS

2006-11-11 Thread Chris Csanady
On 11/11/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: CC> The manual page for zpool offline indicates that no further attempts CC> are made to read or write the device, so the data should still be CC> there. While it does not elaborate on the result of a zpool detach, I CC> would expect it to

Re: [zfs-discuss] Re: Dead drives and ZFS

2006-11-11 Thread Chris Csanady
On 11/11/06, Rainer Heilke <[EMAIL PROTECTED]> wrote: Nope. I get "no pools available to import". I think that detaching the drive cleared any pool information/headers on the drive, which is why I can't figure out a way to get the data/pool back. Did you also export the original pool before y
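
The usual sequence when moving disks between systems is an explicit export on the old host followed by an import scan on the new one; a sketch with a placeholder pool name:

    # On the original system, before pulling the disks
    zpool export tank

    # On the new system: scan for importable pools, then import
    zpool import
    zpool import tank

    # If the pool was destroyed rather than exported, -D will still find it
    zpool import -D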

Re: [zfs-discuss] Re: system hangs on POST after giving zfs a drive

2006-10-12 Thread Chris Csanady
On 10/12/06, John Sonnenschein <[EMAIL PROTECTED]> wrote: well, it's an SiS 960 board, and it appears my only option to turn off probing of the drives is to enable RAID mode (which makes them inaccessible by the OS) I think the option is in the standard CMOS setup section, and allows you to set

Re: [zfs-discuss] system hangs on POST after giving zfs a drive

2006-10-11 Thread Chris Csanady
On 10/11/06, John Sonnenschein <[EMAIL PROTECTED]> wrote: As it turns out now, something about the drive is causing the machine to hang on POST. It boots fine if the drive isn't connected, and if I hot plug the drive after the machine boots, it works fine, but the computer simply will not boo

Re: [zfs-discuss] Metaslab alignment on RAID-Z

2006-09-26 Thread Chris Csanady
On 9/26/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote: Chris Csanady wrote: > What I have observed with the iosnoop dtrace script is that the > first disks aggregate the single block writes, while the last disk(s) > are forced to do numerous writes every other sector. If y

[zfs-discuss] Metaslab alignment on RAID-Z

2006-09-26 Thread Chris Csanady
I believe I have tracked down the problem discussed in the "low disk performance thread." It seems that an alignment issue will cause small file/block performance to be abysmal on a RAID-Z. metaslab_ff_alloc() seems to naturally align all allocations, and so all blocks will be aligned to asize o

Re: [zfs-discuss] Re: Re: low disk performance

2006-09-23 Thread Chris Csanady
On 9/22/06, Gino Ruopolo <[EMAIL PROTECTED]> wrote: Update ... iostat output during "zpool scrub":
                     extended device statistics
    device    r/s    w/s   Mr/s   Mw/s  wait  actv  svc_t  %w   %b
    sd34      2.0  395.2    0.1    0.6   0.0  34.8   87.7   0  100
    sd35     21.0  312.2

Re: [zfs-discuss] ZFS uses 1.1GB more space, reports conflicting information...

2006-09-04 Thread Chris Csanady
On 9/4/06, UNIX admin <[EMAIL PROTECTED]> wrote: [Solaris 10 6/06 i86pc] I recently used a set of 6 disks in a MultiPack to create a RAIDZ volume. Then I proceeded to do zfs set sharenfs=root=a.b.c.d:a.b.c.e space ("space" is how I named the ZFS pool). Is this really how you set the sharenf
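
For anyone following along, the sharenfs property takes the same option string that would be handed to share_nfs(1M), and the result can be checked afterwards; a sketch with placeholder host names:

    # Set NFS share options (share_nfs syntax) and verify them
    zfs set sharenfs='root=host1:host2' space
    zfs get sharenfs space
    share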

Re: [zfs-discuss] Bandwidth disparity between NFS and ZFS

2006-06-26 Thread Chris Csanady
On 6/26/06, Neil Perrin <[EMAIL PROTECTED]> wrote: Robert Milkowski wrote On 06/25/06 04:12,: > Hello Neil, > > Saturday, June 24, 2006, 3:46:34 PM, you wrote: > > NP> Chris, > > NP> The data will be written twice on ZFS using NFS. This is because NFS > NP> on closing the file internally uses f

Re: [zfs-discuss] Bandwidth disparity between NFS and ZFS (Solved)

2006-06-26 Thread Chris Csanady
s are just stingy with their memory, but for whatever reason, it is unfortunate for ZFS on the server. Chris On 6/24/06, Chris Csanady <[EMAIL PROTECTED]> wrote: On 6/24/06, Neil Perrin <[EMAIL PROTECTED]> wrote: > > The data will be written twice on ZFS using NFS. This is becau

Re: [zfs-discuss] Bandwidth disparity between NFS and ZFS

2006-06-24 Thread Chris Csanady
On 6/24/06, Neil Perrin <[EMAIL PROTECTED]> wrote: The data will be written twice on ZFS using NFS. This is because NFS on closing the file internally uses fsync to cause the writes to be committed. This causes the ZIL to immediately write the data to the intent log. Later the data is also writt
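
A rough way to see this double write from the server side is to compare the client's reported throughput against what the pool itself is doing while the transfer runs; a sketch with placeholder paths and pool name:

    # On the NFS client: generate a steady write stream
    dd if=/dev/zero of=/mnt/nfs/testfile bs=128k count=8192

    # On the server, in parallel: pool-level and per-vdev bandwidth
    zpool iostat tank 1
    zpool iostat -v tank 1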

[zfs-discuss] Bandwidth disparity between NFS and ZFS

2006-06-23 Thread Chris Csanady
While dd'ing to an nfs filesystem, half of the bandwidth is unaccounted for. What dd reports amounts to almost exactly half of what zpool iostat or iostat show; even after accounting for the overhead of the two mirrored vdevs. Would anyone care to guess where it may be going? (This is measured

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Chris Csanady
On 5/26/06, Bart Smaalders <[EMAIL PROTECTED]> wrote: There are two failure modes associated with disk write caches: Failure modes aside, is there any benefit to a write cache when command queueing is available? It seems that the primary advantage is in allowing old ATA hardware to issue writ
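
On Solaris the per-drive write cache can usually be inspected (and on many drives toggled) from format's expert mode, which makes it straightforward to compare queued writes with the cache on and off; a sketch, noting that the cache menu is not offered for every controller/driver combination:

    # Enter expert mode, select the disk, then use the cache menu
    format -e
    #   format> cache
    #   cache> write_cache
    #   write_cache> display   (or enable / disable)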