[zfs-discuss] Have receiving snapshots become slower?

2011-09-29 Thread Ian Collins
 I have an application that iterates through snapshots, sending them to 
a remote host.  With a Solaris 10 receiver, empty snapshots are received 
in under a second, but with a Solaris 11 Express receiver, empty 
snapshots take two to three seconds.  This is becoming a real 
nuisance when I have a large number of snapshots in a filesystem that's 
unchanged.
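
The application does essentially the following (a simplified sketch; the 
real code has error handling, and the host name here is illustrative):

   # walk this filesystem's snapshots in creation order
   # (assumes no child datasets under export/vbox)
   prev=
   for snap in $(zfs list -H -t snapshot -o name -s creation -r export/vbox)
   do
       [ -n "$prev" ] && zfs send -i "$prev" "$snap" | \
           ssh backuphost zfs receive backup/vbox
       prev=$snap
   done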


For example:

receiving incremental stream of export/vbox@20110927_1805 into
backup/vbox@20110927_1805
received 312B stream in 3 seconds (104B/sec)
receiving incremental stream of export/vbox@20110927_2205 into
backup/vbox@20110927_2205
received 312B stream in 2 seconds (156B/sec)

The change looks to be increased latency; bigger snapshots still appear 
to be received at the same speed as before.


Does anyone know what has changed to cause this slowdown?

--
Ian.



Re: [zfs-discuss] All (pure) SSD pool rehash

2011-09-29 Thread Garrett D'Amore

On Sep 28, 2011, at 8:44 PM, Edward Ned Harvey wrote:

>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Also, the default settings for the resilver throttle are set for HDDs.
>> For SSDs, it is a good idea to change the throttle to be more aggressive.
> 
> You mean...
> Be more aggressive, resilver faster?
> Or be more aggressive, throttling the resilver?
>
> What's the reasoning that makes you want to set it differently from an HDD?

I think he means resilver faster.

SSDs can be driven harder and can sustain more IOPS, so we can hit them 
harder with less impact on overall performance.  The reason we throttle at 
all is to avoid saturating the drive with resilver I/O, which would prevent 
regular operations from making progress.  Generally I believe resilver 
operations are not "bandwidth bound" in the sense of pure throughput, but 
are IOPS bound.  As SSDs have no seek time, they can handle far more of 
these little operations than a regular hard disk.
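
For anyone who wants to experiment, the throttle is exposed as kernel 
tunables in the OpenSolaris-era code.  A rough sketch of inspecting and 
loosening it (tunable names are from that code base; defaults vary by 
build, so verify on your own system before relying on this):

   # per-I/O delay, in clock ticks, applied to resilver reads when the
   # pool has other activity
   echo "zfs_resilver_delay/D" | mdb -k
   # remove the delay so resilver runs flat out on an all-SSD pool
   echo "zfs_resilver_delay/W 0" | mdb -kw
   # optionally give resilver more time per txg (0t marks decimal ms)
   echo "zfs_resilver_min_time_ms/W 0t5000" | mdb -kw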

  - Garrett




Re: [zfs-discuss] All (pure) SSD pool rehash

2011-09-29 Thread Zaeem Arshad
On Thu, Sep 29, 2011 at 11:33 AM, Garrett D'Amore wrote:

>
> I think he means resilver faster.
>
> SSDs can be driven harder and can sustain more IOPS, so we can hit them
> harder with less impact on overall performance.  The reason we throttle
> at all is to avoid saturating the drive with resilver I/O, which would
> prevent regular operations from making progress.  Generally I believe
> resilver operations are not "bandwidth bound" in the sense of pure
> throughput, but are IOPS bound.  As SSDs have no seek time, they can
> handle far more of these little operations than a regular hard disk.
>
>  - Garrett
>

What's the throttling rate, if I may call it that?


--
Zaeem


Re: [zfs-discuss] Have receiving snapshots become slower?

2011-09-29 Thread Ian Collins

 On 09/30/11 05:14 AM, erik wrote:


On Thu, 29 Sep 2011 21:13:56 +1300, Ian Collins wrote:


   I have an application that iterates through snapshots, sending them to
a remote host.  With a Solaris 10 receiver, empty snapshots are received
in under a second, but with a Solaris 11 Express receiver, empty
snapshots take two to three seconds.  This is becoming a real
nuisance when I have a large number of snapshots in a filesystem that's
unchanged.

For example:

receiving incremental stream of export/vbox@20110927_1805 into
backup/vbox@20110927_1805
received 312B stream in 3 seconds (104B/sec)
receiving incremental stream of export/vbox@20110927_2205 into
backup/vbox@20110927_2205
received 312B stream in 2 seconds (156B/sec)

The change looks to be increased latency; bigger snapshots still appear
to be received at the same speed as before.

Does anyone know what has changed to cause this slowdown?


I think that's pretty much the baseline overhead required for 
validating the consistency of the snapshot and its applicability to 
the destination pool.  I have similar numbers on a little NAS dumping 
to a set of external USB disks that behave in a similar manner:


That does appear to be the case, but I was wondering why it has become 
so much worse.


I am in the process of copying some large data sets to a new server, and 
the whole process is taking far longer than I expected (there are 
thousands of small snapshots).
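
To put numbers on it: at roughly 2.5 seconds of fixed overhead per 
snapshot, 5,000 empty snapshots alone add around three and a half hours 
to the run.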


Slowing down replication is not a good move!

--
Ian.



Re: [zfs-discuss] Have receiving snapshots become slower?

2011-09-29 Thread Ian Collins

 On 09/30/11 08:03 AM, Bob Friesenhahn wrote:

On Fri, 30 Sep 2011, Ian Collins wrote:

Slowing down replication is not a good move!

Do you prefer pool corruption? ;-)

Probably they fixed a dire bug and this is the cost of the fix.

Could be.  I think I'll raise a support case to find out why.  This is 
making it difficult for me to meet a replication guarantee.


--
Ian.



[zfs-discuss] S10 version question

2011-09-29 Thread Rich Teer
Hi all,

Got a quick question: what are the latest zpool and zfs versions
supported in Solaris 10 Update 10?

TIA,

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com


Re: [zfs-discuss] S10 version question

2011-09-29 Thread Ian Collins

 On 09/30/11 11:59 AM, Rich Teer wrote:

Hi all,

Got a quick question: what are the latest zpool and zfs versions
supported in Solaris 10 Update 10?


In Update 10: pool version 29, ZFS version 5.
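
You can confirm what a given install supports with:

   # list the highest pool and filesystem versions this release supports
   zpool upgrade -v
   zfs upgrade -v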

--
Ian.



Re: [zfs-discuss] S10 version question

2011-09-29 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
> 
> > Got a quick question: what are the latest zpool and zfs versions
> > supported in Solaris 10 Update 10?
> >
> In Update 10: pool version 29, ZFS version 5.

I don't know what the other differences are, but the first one I noticed is
the sync property.  Even if you don't zpool upgrade or zfs upgrade, just by
applying the patches to an older Solaris 10, you can no longer disable the
ZIL via /etc/system, at least not in the way that formerly worked, as
described in the Evil Tuning Guide.  Now you use the sync property instead.
This is a change for the better, but it surprised me.
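
For anyone searching the archives later, the before and after look 
roughly like this (dataset name is a placeholder):

   # old way, per the Evil Tuning Guide, via /etc/system (now ineffective):
   #   set zfs:zil_disable = 1
   # new way, per dataset and effective immediately:
   zfs set sync=disabled tank/scratch
   zfs get sync tank/scratch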



Re: [zfs-discuss] S10 version question

2011-09-29 Thread Paul Kraus
On Thu, Sep 29, 2011 at 9:51 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Ian Collins
>>
>> > Got a quick question: what are the latest zpool and zfs versions
>> > supported in Solaris 10 Update 10?
>> >
>> In Update 10: pool version 29, ZFS version 5.
>
> I don't know what the other differences are, but the first one I noticed is
> the sync property.  Even if you don't zpool upgrade or zfs upgrade, just by
> applying the patches to an older Solaris 10, you can no longer disable the
> ZIL via /etc/system, at least not in the way that formerly worked, as
> described in the Evil Tuning Guide.  Now you use the sync property instead.
> This is a change for the better, but it surprised me.

Another potential difference ... I have been told by Oracle Support
(but have not yet confirmed) that just running the latest ZFS code
(Solaris 10U10) will disable the aclmode property, even if you do not
upgrade the zpool version beyond 22.  I expect to test this next week,
as we _need_ ACLs to work for our data.
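
The test itself should be quick; something like this (pool/dataset names 
are placeholders):

   # if aclmode is gone, the set should fail and/or the get should no
   # longer report it as a settable property
   zfs get aclmode tank/data
   zfs set aclmode=passthrough tank/data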

-- 
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players