[zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-27 Thread Matthew Anderson
Hi All, I've run into a massive performance problem after upgrading to Solaris 11 Express from oSol 134. Previously the server was performing a batch write every 10-15 seconds and the client servers (connected via NFS and iSCSI) had very low wait times. Now I'm seeing constant writes to the ar…
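The batching described here is driven by the ZFS transaction-group (txg) flush interval. A minimal way to watch and inspect it on Solaris (a sketch; MirrorPool is the pool name that appears later in the thread, and zfs_txg_timeout is the kernel tunable that controls how often dirty data is flushed):

    zpool iostat MirrorPool 1             # watch whether writes arrive in bursts or constantly
    echo "zfs_txg_timeout/D" | mdb -k     # print the current txg flush interval, in seconds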

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-27 Thread Andrew Gabriel
Matthew Anderson wrote: Hi All, I've run into a massive performance problem after upgrading to Solaris 11 Express from oSol 134. Previously the server was performing a batch write every 10-15 seconds and the client servers (connected via NFS and iSCSI) had very low wait times. Now I'm seeing…

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-27 Thread Matthew Anderson
NAME             PROPERTY  VALUE     SOURCE
MirrorPool       sync      disabled  local
MirrorPool/CCIT  sync      disabled  local
MirrorPool/EX01  sync      disabled  inherited from MirrorPool
MirrorPool/EX02  sync      disabled  inherited from …
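This listing is standard zfs get output; presumably it was produced with something like:

    zfs get -r sync MirrorPool    # -r reports the property for every dataset under the pool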

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-27 Thread Markus Kovero
> Sync was disabled on the main pool and then left to inherit to everything else.
> The reason for disabling this in the first place was to fix bad NFS write performance
> (even with ZIL on an X25-E SSD it was under 1MB/s).
> I've also tried setting the logbias to throughput and latency but t…
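For reference, the two properties being discussed are set per pool or per dataset. A minimal sketch, reusing the dataset names from the thread:

    zfs set sync=disabled MirrorPool             # drop synchronous-write guarantees pool-wide (data-loss risk on power failure)
    zfs set logbias=throughput MirrorPool/EX01   # steer large writes past the ZIL device for this dataset
    zfs get -r sync,logbias MirrorPool           # confirm the values and where they are inherited from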

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-27 Thread Tomas Ögren
On 27 April, 2011 - Matthew Anderson sent me these 3,2K bytes: > Hi All, > > I've run into a massive performance problem after upgrading to Solaris 11 > Express from oSol 134. > > Previously the server was performing a batch write every 10-15 seconds and > the client servers (connected via NFS…

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Lamp Zy > > One of my drives failed in Raidz2 with two hot spares: > What zpool & zfs version are you using? What OS version? Are all the drives precisely the same size (Same make/model numb…
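The version questions above can be answered directly on the affected host (a sketch using standard Solaris/OpenSolaris commands):

    zpool upgrade       # lists pools whose on-disk format is older than the software supports
    zfs upgrade         # the same, for filesystem versions
    cat /etc/release    # identifies the OS build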

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Lamp Zy
On 04/26/2011 01:25 AM, Nikola M. wrote: On 04/26/11 01:56 AM, Lamp Zy wrote: Hi, One of my drives failed in Raidz2 with two hot spares: What are zpool/zfs versions? (zpool upgrade Ctrl+c, zfs upgrade Ctrl+c). Latest zpool/zfs versions available by numerical designation in all OpenSolaris base…

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Brandon High
On Wed, Apr 27, 2011 at 12:51 PM, Lamp Zy wrote: > Any ideas how to identify which drive is the one that failed so I can > replace it? Try the following:
# fmdump -eV
# fmadm faulty
-B
-- Brandon High : bh...@freaks.com
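Once FMA has named the faulted device, the usual follow-up is to confirm and replace it (a sketch; the pool name 'tank' and device c1t5d0 are hypothetical):

    zpool status -x              # show only pools with problems, including the faulted vdev
    zpool replace tank c1t5d0    # replace the failed disk; an attached hot spare then detaches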

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Paul Kraus
On Wed, Apr 27, 2011 at 3:51 PM, Lamp Zy wrote: > Great. So, now how do I identify which drive out of the 24 in the storage > unit is the one that failed? > > I looked on the Internet for help but the problem is that this drive > completely disappeared. Even "format" and "iostat -En" show only 23…
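When the dead drive no longer answers at all, identification works by elimination (a sketch; device naming varies by controller): record what the OS can still see, then find the slot or serial number that is absent from the chassis inventory.

    format < /dev/null | grep 'c[0-9]'    # list the 23 disks still visible, without entering the menu
    iostat -En | grep 'Serial No'         # match surviving serial numbers against the drive labels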

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Erik Trimble > > (BTW, is there any way to get a measurement of number of blocks consumed > per zpool?  Per vdev?  Per zfs filesystem?)  *snip*. > > > you need to use zdb to see what the curr…
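zdb is indeed the tool for this kind of accounting (a sketch; 'tank' is a hypothetical pool name, and zdb walks on-disk structures, so its output format varies by build):

    zdb -b tank     # traverse the pool and summarize allocated blocks and space
    zdb -DD tank    # print dedup (DDT) statistics, including entry counts per table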

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Tomas Ögren
On 27 April, 2011 - Edward Ned Harvey sent me these 0,6K bytes: > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > boun...@opensolaris.org] On Behalf Of Erik Trimble > > > > (BTW, is there any way to get a measurement of number of blocks consumed > > per zpool?  Per vdev?  Per…

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-27 Thread zfs user
On 4/27/11 4:00 AM, Markus Kovero wrote: > Sync was disabled on the main pool and then left to inherit to everything else. > The reason for disabling this in the first place was to fix bad NFS write performance > (even with ZIL on an X25-E SSD it was under 1MB/s). > I've also tried setting the log…

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Neil Perrin > > No, that's not true. The DDT is just like any other ZFS metadata and can be > split over the ARC, > cache device (L2ARC) and the main pool devices. An infrequently referenced…
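Whether the DDT currently fits in memory can be observed from the ARC metadata counters, since the in-core DDT is counted as ARC metadata (a sketch; the kstat names are standard on Solaris, but healthy thresholds are workload-dependent):

    kstat -n arcstats | egrep 'meta_used|meta_limit'    # how much ARC metadata is in use vs. its cap
    echo '::arc' | mdb -k                               # alternative summary view of ARC usage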

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Richard Elling
On Apr 27, 2011, at 9:26 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Neil Perrin >> >> No, that's not true. The DDT is just like any other ZFS metadata and can be >> split over the ARC, >> cache device…

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Erik Trimble
OK, I just re-looked at a couple of things, and here's what I /think/ are the correct numbers. A single entry in the DDT is defined in the struct "ddt_entry": http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/sys/ddt.h#108 I just checked, and the current size of thi…
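That struct size is what makes dedup memory planning tractable. A back-of-the-envelope sketch (assumptions, since the message above is truncated: roughly 376 bytes per in-core DDT entry and an average block size of 128K; verify sizeof(ddt_entry_t) against your own build, and note that real datasets rarely average a full 128K per block):

    # 1 TiB of unique data at 128 KiB average block size -> number of DDT entries
    echo $(( 1099511627776 / 131072 ))    # = 8388608 entries
    # at an assumed ~376 bytes per in-core entry -> ARC/L2ARC needed for the DDT
    echo $(( 8388608 * 376 ))             # = 3154116608 bytes, roughly 2.9 GiB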