On Oct 16, 2011, at 3:56 AM, Jim Klimov wrote:
> 2011-09-29 17:15, Zaeem Arshad writes:
>>
>> On Thu, Sep 29, 2011 at 11:33 AM, Garrett D'Amore wrote:
>>
>> I think he means, resilver faster.
>>
>> SSDs can be driven harder, and have more IOPs so we can hit them harder with
>> less impact on the overall performance. …
2011-09-29 17:15, Zaeem Arshad writes:
On Thu, Sep 29, 2011 at 11:33 AM, Garrett D'Amore
<garrett.dam...@gmail.com> wrote:
I think he means, resilver faster.
SSDs can be driven harder, and have more IOPs so we can hit them
harder with less impact on the overall performance. …
On Thu, Sep 29, 2011 at 11:33 AM, Garrett D'Amore wrote:
>
> I think he means, resilver faster.
>
> SSDs can be driven harder, and have more IOPs so we can hit them harder
> with less impact on the overall performance. The reason we throttle at all
> is to avoid saturating the bandwidth of the …
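For reference, the throttle Garrett describes lives in a handful of kernel
tunables in the OpenSolaris-era scan code; the symbol names below are the
dsl_scan.c tunables of that vintage (treat them as an assumption for your
exact build) and can be read from a live kernel with mdb. A minimal sketch:

    # Read the current resilver/scrub throttle settings (assumed symbol names):
    echo "zfs_resilver_delay/D"       | mdb -k   # ticks of delay per resilver I/O
    echo "zfs_scrub_delay/D"          | mdb -k   # ticks of delay per scrub I/O
    echo "zfs_scan_idle/D"            | mdb -k   # idle window before the delay applies
    echo "zfs_resilver_min_time_ms/D" | mdb -k   # min ms spent resilvering per txg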
On Sep 28, 2011, at 8:44 PM, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Also, the default settings for the resilver throttle are set for HDDs. For
>> SSDs, it is a good idea to change the throttle to be more aggressive.
>
> You mean...
> Be more aggressive, resilver faster? …
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Also, the default settings for the resilver throttle are set for HDDs. For
> SSDs, it is a good idea to change the throttle to be more aggressive.
You mean...
Be more aggressive, resilver faster?
or Be more aggressive, throttling the …
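(Garrett's reading above is the intended one: more aggressive means resilver
faster, i.e. less delay injected per resilver I/O.) A hedged sketch of the
change, assuming the same tunable names as before; the mdb write takes effect
immediately, the /etc/system line persists across reboots:

    # Remove the per-I/O resilver delay on an all-SSD pool (assumed tunable):
    echo "zfs_resilver_delay/W0t0" | mdb -kw

    # Persistent equivalent, applied at next boot:
    #   add to /etc/system:  set zfs:zfs_resilver_delay = 0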
On Sep 27, 2011, at 6:30 PM, Fajar A. Nugraha wrote:
> On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey wrote:
>> So again: Not a problem if you're making your pool out of SSD's.
>
> Big problem if your system is already using most of the available IOPS during
> normal operation.
Resilvers are throttled …
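Fajar's caveat is easy to check before loosening the throttle: measure how
much IOPS headroom the pool actually has under normal load. A usage sketch
with stock Solaris tools (pool name illustrative):

    # Per-device load; %b near 100 or a consistently deep actv queue means
    # there is little headroom left for resilver I/O:
    iostat -xn 5

    # Resilver progress and estimated completion time:
    zpool status -v tank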
On Tue, 27 Sep 2011, Edward Ned Harvey wrote:
The problem basically applies to HDD's. By creating your pool of SSD's,
this problem should be eliminated.
This is not completely true. SSDs will help significantly but they
will still suffer from the synchronized commit of a transaction group.
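Bob's point can be seen directly: ZFS batches async writes into a transaction
group and commits the whole group on a timer, so even SSDs see a periodic
burst of synchronized writes. A hedged sketch for observing it, assuming the
zfs_txg_timeout tunable of that era (5 seconds in then-current builds):

    # How often transaction groups are committed, in seconds (assumed symbol):
    echo "zfs_txg_timeout/D" | mdb -k

    # Watch the write burst at each commit on pool "tank" (name illustrative):
    zpool iostat tank 1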
On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey wrote:
> When a vdev resilvers, it will read each slab of data, in essentially time
> order, which is approximately random disk order, in order to reconstruct the
> data that must be written on the resilvering device. This creates two
> problems, …
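That near-random read order is observable on a live system. A hedged DTrace
one-liner using the stock io provider; it just prints the starting block of
each disk read so the scattered offsets are visible while a resilver runs:

    # During an HDD resilver the block numbers jump around instead of
    # advancing sequentially:
    dtrace -n 'io:::start /args[0]->b_flags & B_READ/
        { printf("%s %d\n", args[1]->dev_statname, args[0]->b_blkno); }'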
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Matt Banks
>
> Am I crazy for putting something like this into production using Solaris
> 10/11? On paper, it really seems ideal for our needs.
Do you have an objection to solaris 10/11 for so…
On 9/27/2011 10:39 AM, Bob Friesenhahn wrote:
On Tue, 27 Sep 2011, Matt Banks wrote:
Also, maybe I read it wrong, but why is it that (in the previous
thread about hw raid and zpools) zpools with large numbers of
physical drives (e.g. 20+) were frowned upon? I know that ZFS!=WAFL
There is no co…
On Tue, Sep 27, 2011 at 1:21 PM, Matt Banks wrote:
> Also, maybe I read it wrong, but why is it that (in the previous thread about
> hw raid and zpools) zpools with large numbers of physical drives (e.g. 20+)
> were frowned upon? I know that ZFS!=WAFL but it's so common in the
> NetApp world that I…
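The usual reasoning behind that "frowned upon" advice: a raidz vdev delivers
roughly one member disk's worth of random IOPS no matter how wide it is, so
very wide groups trade IOPS (and resilver time) for capacity. The common fix
is several narrower raidz2 vdevs striped in one pool. A hedged sketch, with
pool and device names illustrative:

    # Instead of one 21-disk raidz2 (~1 disk of random IOPS for the pool),
    # stripe three 7-disk raidz2 vdevs (~3x the IOPS, faster resilvers):
    zpool create tank \
        raidz2 c0t0d0  c0t1d0  c0t2d0  c0t3d0  c0t4d0  c0t5d0  c0t6d0 \
        raidz2 c0t7d0  c0t8d0  c0t9d0  c0t10d0 c0t11d0 c0t12d0 c0t13d0 \
        raidz2 c0t14d0 c0t15d0 c0t16d0 c0t17d0 c0t18d0 c0t19d0 c0t20d0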
On Tue, 27 Sep 2011, Matt Banks wrote:
Am I crazy for putting something like this into production using Solaris 10/11?
On paper, it really seems ideal for our needs.
As long as the drive firmware operates correctly, I don't see a problem.
Also, maybe I read it wrong, but why is it that (in…
I know there was a thread about this a few months ago.
However, with the costs of SSD's falling like they have, the idea of an Oracle
X4270 M2/Cisco C210 M2/IBM x3650 M3 class of machine with a 13 drive RAIDZ2
zpool (1 hot spare) is really starting to sound alluring to me/us. Especially
with so…
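For completeness, the layout Matt describes would be created roughly as below,
reading "13 drive RAIDZ2 zpool (1 hot spare)" as a 12-disk raidz2 data group
plus one spare; pool and device names are illustrative:

    # 12-disk raidz2 vdev plus 1 hot spare = 13 drives total:
    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0  c0t5d0 \
               c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 \
        spare  c0t12d0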
13 matches