Re: [zfs-discuss] Slow zfs writes

2013-02-12 Thread Ian Collins
Ram Chander wrote: Hi Roy, You are right. So it looks like a redistribution issue. Initially there were two vdevs with 24 disks (disks 0-23) for close to a year. After which we added 24 more disks and created additional vdevs. The initial vdevs are filled up, and so write speed declined.
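The imbalance described above can be confirmed from the command line. A minimal sketch, assuming a hypothetical pool named `tank`: `zpool list -v` shows per-vdev capacity, and `zpool iostat -v` shows whether the newer, emptier vdevs are absorbing most of the new writes.

```shell
# Hypothetical pool name "tank"; run with appropriate privileges.
zpool list -v tank       # per-vdev CAP column reveals fill imbalance
zpool iostat -v tank 5   # per-vdev write ops/bandwidth, sampled every 5s
```

ZFS biases allocations toward emptier vdevs, so nearly-full older vdevs can throttle aggregate write throughput until data is rewritten or rebalanced.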

Re: [zfs-discuss] ZFS monitoring

2013-02-12 Thread Pawel Jakub Dawidek
On Mon, Feb 11, 2013 at 05:39:27PM +0100, Jim Klimov wrote: > On 2013-02-11 17:14, Borja Marcos wrote: > > > > On Feb 11, 2013, at 4:56 PM, Tim Cook wrote: > > > >> The zpool iostat output has all sorts of statistics I think would be > >> useful/interesting to record over time. > > > > > > Yes, th

Re: [zfs-discuss] Slow zfs writes

2013-02-12 Thread Jim Klimov
On 2013-02-12 10:32, Ian Collins wrote: Ram Chander wrote: Hi Roy, You are right. So it looks like a redistribution issue. Initially there were two vdevs with 24 disks (disks 0-23) for close to a year. After which we added 24 more disks and created additional vdevs. The initial vdevs are filled up and so write speed declined.

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Darren J Moffat
On 02/10/13 12:01, Koopmann, Jan-Peter wrote: Why should it? Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap support (I believe currently only Nexenta but correct me if I am wrong) the blocks will not be freed, will they? Solaris 11.1 has ZFS with SCSI UNMAP support.

Re: [zfs-discuss] ZFS monitoring

2013-02-12 Thread Borja Marcos
On Feb 12, 2013, at 11:25 AM, Pawel Jakub Dawidek wrote: > I made kstat data available on FreeBSD via 'kstat' sysctl tree: Yes, I am using the data. I wasn't sure how to get something meaningful from it, but I've found the arcstats.pl script and I am using it as a model. Suggestions wil
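The kstat sysctl tree mentioned above exposes counters as `name: value` lines. A minimal Python sketch of the kind of processing arcstats.pl does: parse such lines into a dict and derive an ARC hit ratio. The sample text and counter values below are illustrative, not taken from a real system.

```python
# Illustrative `sysctl kstat.zfs.misc.arcstats`-style output (FreeBSD).
sample = """\
kstat.zfs.misc.arcstats.hits: 9000
kstat.zfs.misc.arcstats.misses: 1000
kstat.zfs.misc.arcstats.size: 536870912
"""

def parse_kstats(text):
    """Turn 'name: value' sysctl lines into a {name: int} dict."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(":")
        if value.strip().isdigit():
            stats[name.strip()] = int(value.strip())
    return stats

stats = parse_kstats(sample)
hits = stats["kstat.zfs.misc.arcstats.hits"]
misses = stats["kstat.zfs.misc.arcstats.misses"]
hit_ratio = hits / (hits + misses)
print(f"ARC hit ratio: {hit_ratio:.1%}")  # prints "ARC hit ratio: 90.0%"
```

Sampling these counters periodically and diffing successive values gives per-interval rates, which is what tools built on zpool iostat or kstat typically record over time.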

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Stefan Ring
>> Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap >> support (I believe currently only Nexenta but correct me if I am wrong) the >> blocks will not be freed, will they? > > > Solaris 11.1 has ZFS with SCSI UNMAP support. Freeing unused blocks works perfectly well with fstrim
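The fstrim approach referred to above can be sketched as follows. This assumes a Linux guest whose filesystem sits on a thin-provisioned zvol, with SCSI UNMAP passed through the whole stack; the mountpoint and dataset names are hypothetical.

```shell
# fstrim asks the filesystem to issue discards for unused blocks,
# which ZFS on the backing store can then free.
fstrim -v /mnt/data                        # hypothetical mountpoint
zfs get -H -o value referenced pool/vol    # observe space reclaimed on the zvol
```

Without a trim/discard pass, blocks the guest filesystem has freed still look allocated to the backing zvol, so thin provisioning saves nothing after the fact.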

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Thomas Nau
Darren, On 02/12/2013 11:25 AM, Darren J Moffat wrote: On 02/10/13 12:01, Koopmann, Jan-Peter wrote: Why should it? Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap support (I believe currently only Nexenta but correct me if I am wrong) the blocks will not be freed, will they?

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Darren J Moffat
On 02/12/13 15:07, Thomas Nau wrote: Darren, On 02/12/2013 11:25 AM, Darren J Moffat wrote: On 02/10/13 12:01, Koopmann, Jan-Peter wrote: Why should it? Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap support (I believe currently only Nexenta but correct me if I am wrong)

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Casper . Dik
>No tools, ZFS does it automatically when freeing blocks when the >underlying device advertises the functionality. > >ZFS ZVOLs shared over COMSTAR advertise SCSI UNMAP as well. If a system was running something older, e.g. Solaris 11, the "free" blocks will not be marked as such on the server e

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Sašo Kiselkov
On 02/10/2013 01:01 PM, Koopmann, Jan-Peter wrote: > Why should it? > > I believe currently only Nexenta but correct me if I am wrong The code was mainlined a while ago; see: https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/comstar/lu/stmf_sbd/sbd.c#L3702-L3730 http

Re: [zfs-discuss] Slow zfs writes

2013-02-12 Thread Ian Collins
Jim Klimov wrote: On 2013-02-12 10:32, Ian Collins wrote: Ram Chander wrote: Hi Roy, You are right. So it looks like a redistribution issue. Initially there were two vdevs with 24 disks (disks 0-23) for close to a year. After which we added 24 more disks and created additional vdevs. The initial vdevs