Peter Grandi wrote:
Those are as such not very meaningful. What matters most is
whether the starting physical address of each logical volume
extent is stripe aligned (and whether the filesystem makes use
of that), and then the stripe size of the parity RAID set, not
the chunk sizes in themselves.
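To make that concrete (the numbers below are only an illustration, not taken
from this thread): with a 3-disk RAID5 and the mdadm default chunk size of
64 KiB, one full stripe holds two data chunks, so the stripe width is 128 KiB,
and a logical volume extent is stripe aligned only if its starting offset on
the array is a multiple of that. The chunk size can be read back from the
array:

# Example output assuming a 3-disk RAID5 with 64 KiB chunks:
mdadm --detail /dev/md0 | grep -E 'Level|Chunk'
#      Raid Level : raid5
#      Chunk Size : 64K
# stripe width = chunk size * (disks - 1) = 64 KiB * 2 = 128 KiB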
Janek Kozicki wrote:
Hold on. This might be related to RAID chunk positioning with respect
to LVM chunk positioning. If they interfere, there may indeed be some
performance drop. Best to make sure those chunks are aligned with each other.
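One way to check whether LVM's data area actually starts on such a stripe
boundary (a sketch; /dev/md0 and the 128 KiB stripe width are assumptions
carried over from the example above, not details from this thread):

# Offset at which the first physical extent starts on the PV:
pvs -o pv_name,pe_start /dev/md0

# If that offset is not a multiple of the full stripe width, newer LVM2
# versions let you recreate the PV with an explicit alignment (this
# destroys the existing PV, so it is only an option on a fresh setup):
pvcreate --dataalignment 128k /dev/md0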
Interesting. I'm seeing a 20% performance drop too, with
Neil Brown wrote:
>
> This isn't a resync, it is a data check. "Dec 2" is the first Sunday
> of the month. You probably have a crontab entry that does
> echo check > /sys/block/mdX/md/sync_action
>
> early on the first Sunday of the month. I know that Debian does this.
>
> It is good
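The mechanism Neil describes can also be driven by hand; a minimal sketch
(md0 is a placeholder, and on Debian the actual job lives in mdadm's cron
files rather than a raw crontab line):

# Start a read-only consistency check of the array:
echo check > /sys/block/md0/md/sync_action

# The check shows up in /proc/mdstat while it runs:
cat /proc/mdstat

# Afterwards, a non-zero mismatch count means inconsistencies were found:
cat /sys/block/md0/md/mismatch_cnt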
Justin Piszcz wrote:
>
> It rebuilds the array because 'something' is causing device
> resets/timeouts on your USB device:
>
> Dec 1 20:04:49 quassel kernel: usb 4-5.2: reset high speed USB device
> using ehci_hcd and address 4
>
> Naturally, when it is reset, the device is disconnected and t
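If the reset keeps kicking the USB member out, one way to get the array back
without waiting for a full automatic rebuild is roughly the following (a
sketch; /dev/md0 and /dev/sdc1 are placeholder names, not taken from the
logs above):

# Confirm which member was marked faulty or removed:
cat /proc/mdstat
mdadm --detail /dev/md0

# Look for the reset/timeout messages quoted above:
dmesg | tail -n 50

# Once the disk has reappeared, re-add it; with a write-intent bitmap this
# can avoid the full resync, without one it starts the rebuild seen here:
mdadm /dev/md0 --re-add /dev/sdc1

# An internal bitmap can be added to an existing array:
mdadm --grow /dev/md0 --bitmap=internal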
[Please CC me on replies as I'm not subscribed]
Hello!
I've been experimenting with software RAID a bit lately, using two
external 500 GB drives. One is connected via USB, the other via FireWire.
They are set up as a RAID5 with LVM on top so that I can easily add more
drives when I run out of space.
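For anyone wanting to reproduce a setup like this, the rough sequence is
something like the following (device and volume names are placeholders;
note that a two-disk RAID5 gives no more redundancy than a mirror until a
third disk is added):

# Create the array from the two external disks:
mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Layer LVM on top so capacity can grow later:
pvcreate /dev/md0
vgcreate extvg /dev/md0
lvcreate -L 400G -n data extvg
mkfs.ext3 /dev/extvg/data

# Later, when a third disk is added:
mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=3
# ...then pvresize /dev/md0, lvextend and resize2fs as needed.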
About