On Mon, 19 Dec 2011, Peter Maloney wrote:
...
Thanks for the info. But I am confused by it, because when my disks
moved around randomly on reboot, it really did mess things up. The first
few times it happened there was no issue, but when a spare took the
place of a pool disk, things went wrong.
On 19.12.2011 22:53, Dan Nelson wrote:
> In the last episode (Dec 19), Stefan Esser said:
>> pool      alloc   free   read  write   read  write
>> ------   ------  -----  -----  -----  -----  -----
>> raid1     4.41T  2.21T    139     72  12.3M   818K
>>   raidz1  4.41T  2.21T    139
On 19.12.2011 17:36, Michael Reifenberger wrote:
> Hi,
> a quick test using `dd if=/dev/zero of=/test ...` shows:
>
> dT: 10.004s w: 10.000s filter: ^a?da?.$
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    378      0      0   12.5    376  36414   11.9   60.6| a
On Dec 19, 2011, at 11:53 PM, Dan Nelson wrote:
>
> Since it looks like the algorithm ends up creating two half-cold parity
> disks instead of one cold disk, I bet a 3-disk RAIDZ would exhibit even
> worse balancing, and a 5-disk set would be more even.
There were some experiments a year or two
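For reference, a rough way to see Dan's point about parity placement: in the
usual description of the raidz allocator, parity occupies the first column(s)
of each allocation, the first column is the block's starting offset in sectors
modulo the number of children, and every allocation is padded to a multiple of
nparity+1 sectors. The little loop below is a sketch only; the 4 KiB block size
is a made-up example, and none of it is verified against this particular pool:

    # Model of raidz1 parity placement for a run of equal-sized blocks.
    # Assumptions: ashift=12 (4 KiB sectors), 4 children, 1 parity column,
    # one data sector per block; all of this is illustrative, not measured.
    disks=4; nparity=1; off=0; i=0
    while [ $i -lt 8 ]; do
        data=1                                             # data sectors per block
        rows=$(( (data + disks - nparity - 1) / (disks - nparity) ))
        asize=$(( data + rows * nparity ))                 # data + parity sectors
        asize=$(( ((asize + nparity) / (nparity + 1)) * (nparity + 1) ))  # pad
        echo "block $i: parity starts on disk $(( off % disks ))"
        off=$(( off + asize )); i=$(( i + 1 ))
    done

With these example numbers every allocation is 2 sectors long, so the starting
column only ever alternates between disk 0 and disk 2, which is the "two
half-cold parity disks" effect mentioned above.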
In the last episode (Dec 19), Stefan Esser said:
> On 19.12.2011 17:22, Dan Nelson wrote:
> > In the last episode (Dec 19), Stefan Esser said:
> >> for quite some time I have observed an uneven distribution of load
> >> between drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt
> >> of
On 19.12.2011 22:07, Daniel Kalchev wrote:
> On Dec 19, 2011, at 11:00 PM, Stefan Esser wrote:
>> Well, I had dedup enabled for a few short tests. But since I have got
>> "only" 8GB of RAM and dedup seems to require an order of magnitude more
>> to be working well, I switched dedup off again afte
On 19.12.2011 22:00, Garrett Cooper wrote:
> On Dec 19, 2011, at 12:54 PM, Stefan Esser wrote:
>> But it seems that others do not observe the asymmetric distribution of
>> requests, which makes me wonder whether I happen to have metadata
>> arranged in such a way that it is always read from ada0
On Mon, Dec 19, 2011 at 1:07 PM, Daniel Kalchev wrote:
>
> On Dec 19, 2011, at 11:00 PM, Stefan Esser wrote:
>
>> On 19.12.2011 19:03, Daniel Kalchev wrote:
>>> I have observed similar behavior, even more extreme on a pool with dedup
>>> enabled. Is dedup enabled on this pool?
>>
>> Thank you
On Dec 19, 2011, at 11:00 PM, Stefan Esser wrote:
> On 19.12.2011 19:03, Daniel Kalchev wrote:
>> I have observed similar behavior, even more extreme on a pool with dedup
>> enabled. Is dedup enabled on this pool?
>
> Thank you for the report!
>
> Well, I had dedup enabled for a few short
On 19.12.2011 19:03, Daniel Kalchev wrote:
> I have observed similar behavior, even more extreme on a pool with dedup
> enabled. Is dedup enabled on this pool?
Thank you for the report!
Well, I had dedup enabled for a few short tests. But since I have got
"only" 8GB of RAM and dedup seems to
On Dec 19, 2011, at 12:54 PM, Stefan Esser wrote:
> On 19.12.2011 18:05, Garrett Cooper wrote:
>> On Mon, Dec 19, 2011 at 6:22 AM, Stefan Esser wrote:
>>> Hi ZFS users,
>>>
>>> for quite some time I have observed an uneven distribution of load
>>> between drives in a 4 * 2TB RAIDZ1 pool. The f
On 19.12.2011 18:05, Garrett Cooper wrote:
> On Mon, Dec 19, 2011 at 6:22 AM, Stefan Esser wrote:
>> Hi ZFS users,
>>
>> for quite some time I have observed an uneven distribution of load
>> between drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of
>> a longer log of 10 second aver
On 19.12.2011 17:36, Michael Reifenberger wrote:
> Hi,
> a quick test using `dd if=/dev/zero of=/test ...` shows:
>
> dT: 10.004s w: 10.000s filter: ^a?da?.$
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    378      0      0   12.5    376  36414   11.9   60.6| a
On 19.12.2011 17:48, Michael Reifenberger wrote:
> On Mon, 19 Dec 2011, Peter Maloney wrote:
>
>> Swapping disks (or even removing one depending on controller, etc. when
>> it fails) without labels can be bad.
>> eg.
>
> Since ZFS uses (and searches for) its own UUID partition signatures, disk
> swapping shouldn't matter as long as enough disks are found.
On 19.12.2011 17:22, Dan Nelson wrote:
> In the last episode (Dec 19), Stefan Esser said:
>> for quite some time I have observed an uneven distribution of load between
>> drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of a longer
>> log of 10 second averages logged with gstat:
>>
>
On 19.12.2011 16:42, Peter Maloney wrote:
> On 12/19/2011 03:22 PM, Stefan Esser wrote:
>> So: Can anybody reproduce this distribution of requests?
> I don't have a raidz1 machine, and no time to make you a special raidz1
> pool out of spare disks, but on my raidz2 I can only ever see unevenness
> w
On 19.12.2011 15:36, Olivier Smedts wrote:
> 2011/12/19 Stefan Esser:
>> So: Can anybody reproduce this distribution of requests?
>
> Hello,
>
> Stupid question, but are your drives all exactly the same? I noticed
> "ashift: 12" so I think you should have at least one 4k-sector drive,
> are you
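One quick way to answer the ashift part of that question (a sketch only;
"tank" stands in for the real pool name):

    # ashift: 9 means 512-byte sectors, ashift: 12 means 4 KiB alignment
    zdb -C tank | grep ashift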
I have observed similar behavior, even more extreme on a pool with dedup
enabled. Is dedup enabled on this pool?
It might be that the DDT tables somehow end up unevenly distributed across
the disks. My observation was on a 6-disk raidz2.
Daniel
On Mon, Dec 19, 2011 at 6:22 AM, Stefan Esser wrote:
> Hi ZFS users,
>
> for quite some time I have observed an uneven distribution of load
> between drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of
> a longer log of 10 second averages logged with gstat:
>
> dT: 10.001s w: 10.000s
On Mon, 19 Dec 2011, Peter Maloney wrote:
Swapping disks (or even removing one depending on controller, etc. when
it fails) without labels can be bad.
eg.
Since ZFS uses (and searches for) its own UUID partition signatures, disk
swapping shouldn't matter as long as enough disks are found.
Set vf
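For completeness, the label-based approach Peter alludes to usually looks
something like this (a sketch only; the label names, partition index, pool
name and layout are made up, not taken from this thread):

    # Give each data partition a GPT label, then refer to the labels so the
    # pool no longer depends on adaX numbering:
    gpart modify -i 1 -l disk0 ada0      # repeat for ada1, ada2, ada3
    zpool create tank raidz1 gpt/disk0 gpt/disk1 gpt/disk2 gpt/disk3

Michael's point stands either way: ZFS scans every device for its own vdev
labels at import time, so renumbered disks are normally found regardless.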
In the last episode (Dec 19), Stefan Esser said:
> for quite some time I have observed an uneven distribution of load between
> drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of a longer
> log of 10 second averages logged with gstat:
>
> dT: 10.001s w: 10.000s filter: ^a?da?.$
>
Hi,
a quick test using `dd if=/dev/zero of=/test ...` shows:
dT: 10.004s w: 10.000s filter: ^a?da?.$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    378      0      0   12.5    376  36414   11.9   60.6| ada0
    0    380      0      0   12.2    378  36501   11.8   6
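The output above is consistent with a run along these lines (a sketch; the dd
block size and count are assumptions, only the 10-second interval and the
filter appear in the quoted header):

    # Sequential write load plus per-disk 10-second averages:
    dd if=/dev/zero of=/test bs=1m count=8192 &
    gstat -I 10s -f '^a?da?.$'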
On 12/19/2011 03:22 PM, Stefan Esser wrote:
> Hi ZFS users,
>
> for quite some time I have observed an uneven distribution of load
> between drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of
> a longer log of 10 second averages logged with gstat:
>
> dT: 10.001s w: 10.000s filter: ^
2011/12/19 Stefan Esser:
> Hi ZFS users,
>
> for quite some time I have observed an uneven distribution of load
> between drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of
> a longer log of 10 second averages logged with gstat:
>
> dT: 10.001s w: 10.000s filter: ^a?da?.$
> L(q) o
Hi ZFS users,
for quite some time I have observed an uneven distribution of load
between drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of
a longer log of 10 second averages logged with gstat:
dT: 10.001s w: 10.000s filter: ^a?da?.$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps
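A complementary view when chasing this kind of imbalance is the pool's own
per-disk breakdown (a sketch; "tank" is a placeholder for the real pool name):

    # Per-vdev and per-disk operations/bandwidth, refreshed every 10 seconds:
    zpool iostat -v tank 10

If the four disks show similar write counts but very different read counts
there, it points at read placement (data/parity/metadata layout) rather than
at a slow or failing drive.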