any 2 disks failure?
--
Tomasz Chmielewski
http://wpkg.org
KB/sec/disc.
[42949428.64] md: using maximum available idle IO bandwidth (but not
more than 20 KB/sec) for reconstruction.
[42949428.65] md: using 128k window, over a total of 779264640 blocks.
[42949428.65] md: resuming recovery of md11 from checkpoint.
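(The "not more than ... KB/sec" figure above is the global dev.raid.speed_limit_max sysctl; a sketch of how one might inspect and raise it - the values are only examples:

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# raise the ceiling (KB/sec) if resync is being throttled too hard
echo 100000 > /proc/sys/dev/raid/speed_limit_max
)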
--
Tomasz Chmielewski
ebuilt? Spare HDD LED doesn't
blink either, so it would indicate that it's doing nothing.
Why? Is it because the rebuild status is not complete?
--
Tomasz Chmielewski
http://wpkg.org
Tomasz Chmielewski wrote:
I created RAID-10 on 4 drives, and I'm testing it a bit.
I removed one drive, and then added it again.
mdadm --detail says:
Version : 01.00.03
Creation Time : Thu Aug 10 10:15:18 2006
Raid Level : raid10
Array Size : 779264640 (743.16 GiB 7
Paul Clements wrote:
Tomasz Chmielewski wrote:
[42949428.59] md: md11: raid array is not clean -- starting
background reconstruction
[42949428.62] raid10: raid set md11 active with 4 out of 4 devices
[42949428.63] md: syncing RAID array md11
[42949428.63] md: minimum
such a
resync would only be needed in some rare situations.
When would one need to run a "daily forced resync", and in which
circumstances?
--
Tomasz Chmielewski
ave
in crontab??
Indeed, the crontab entry is wrong:
# by default, run at 01:06 on the first Sunday of each month.
6 1 1-7 * 7 root [ -x /usr/share/mdadm/checkarray ] &&
/usr/share/mdadm/checkarray --cron --all --quiet
However, it will run at 01:06 on the 1st-7th day of each month and on every Sunday, because cron ORs the day-of-month and day-of-week fields when both are restricted.
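(For reference, a sketch of a crontab line that really only fires on the first Sunday, keeping the checkarray path from above: restrict cron to Sundays and test the day of month inside the command itself -

6 1 * * 0 root [ "$(date +\%d)" -le 7 ] && [ -x /usr/share/mdadm/checkarray ] && /usr/share/mdadm/checkarray --cron --all --quiet

The \% escape is needed because % is special inside a crontab.)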
Mario 'BitKoenig' Holbe wrote:
Tomasz Chmielewski <[EMAIL PROTECTED]> wrote:
# by default, run at 01:06 on the first Sunday of each month.
6 1 1-7 * 7 root [ -x /usr/share/mdadm/checkarray ] &&
You have a quite old version of mdadm. This issue has been fixed in
mdadm
he number of "U" letters (8 for this host) indicates that the RAID is
healthy?
Or should I count "in_sync" in "cat /sys/block/md*/md/rd*/state"?
Perhaps the two approaches are the same, though.
What's the best way to determine that the RAID is running fine?
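(A sketch of two other ways, where /dev/md0 is just an example device: mdadm can report health through its exit status, and sysfs has a per-array "degraded" counter -

mdadm --detail --test /dev/md0 > /dev/null
echo $?                           # 0 = clean, 1 = degraded, 2 = unusable, 4 = error
cat /sys/block/md0/md/degraded    # number of missing members, 0 when complete
)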
--
Tomasz
I've looked at the Irfant and the Buffalo Logic ones as well... all
tempting. But backups are the killer. :]
There's also the Thecus n5200, with an 800 MHz mobile Celeron, which you can
fit with 5 drives... :)
--
Tomasz Chmielewski
http://wpkg.org
fi
And then poll the results from the nagios server (let's call it
"check_raid" nagios plugin):
#!/bin/bash
# checks state of software RAID
STATUS=$(ssh -l checkuser -i ~nagios/.ssh/checkuser.rsa $1 "cat
/tmp/raid-status.txt")
if [ "$STATUS" == "RAID s
oot. Is it normal?
--
Tomasz Chmielewski
http://wpkg.org
/sec/disc.
md: using maximum available idle IO bandwidth (but not more than 20
KB/sec) for reconstruction.
md: using 128k window, over a total of 312568576 blocks.
(it has so many raid levels, as I'm still experimenting with it).
--
Tomasz Chmielewski
http://wpkg.org
to do the balancing in general is a non-trivial task, but it would
be worth spending time on.
Probably what I said before isn't quite correct, as RAID-1 has no idea of
the filesystem that is on top of it; rather, it will just see attempts to
access different areas of the array?
--
Tomasz Chmielewski
e it's synchronized, it
works fine.
(it has so many raid levels, as I'm still experimenting with it).
Experimentation is good!! It helps you find my bugs :-)
:)
--
Tomasz Chmielewski
http://wpkg.org
down and powering off the machine doesn't change anything.
--
Tomasz Chmielewski
http://wpkg.org
l DMA engines.
The device I use is Thecus n4100, it is "Platform: IQ31244 (XScale)",
and has 600 MHz CPU.
--
Tomasz Chmielewski
http://wpkg.org
Tomasz Chmielewski schrieb:
Ronen Shitrit wrote:
The resync numbers you sent look very promising :)
Do you have any performance numbers you can share for this set of
patches, showing the read/write IO bandwidth?
I have some simple tests made with hdparm, with the results I don
m later with ext3online)?
Perhaps "dd" tool would be fine if the data didn't change.
RAID-1 would be great, as it tracks changes, but I'd have to create
RAID-1 over: /dev/md10 and /dev/sdr. Wouldn't it destroy the contents of
/dev/md10?
Ideas how to synchronize t
Peter Rabbitson schrieb:
Tomasz Chmielewski wrote:
I have a RAID-10 setup of four 400 GB HDDs. As the data grows by several
GBs a day, I want to migrate it somehow to RAID-5 on separate disks in a
separate machine.
Which would be easy, if I didn't have to do it online, without stoppin
Gordon Henderson schrieb:
On Tue, 15 May 2007, Tomasz Chmielewski wrote:
I have a RAID-10 setup of four 400 GB HDDs. As the data grows by
several GBs a day, I want to migrate it somehow to RAID-5 on separate
disks in a separate machine.
Which would be easy, if I didn't have to do it o
RROR:
Allow volume managers to mirror logical volumes, also
needed for live data migration tools such as 'pvmove'.
--
Tomasz Chmielewski
http://wpkg.org
ate a degraded, 4 disk RAID-5 array with just 3
drives? I would add the 4th drive once migration from RAID-10 is done.
(I'm aware of the risks - that my degraded RAID-10 will be "vulnerable"
during the migration).
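(For what it's worth, mdadm accepts the literal word "missing" in place of a device, so a sketch - the device names are only placeholders:

# create a 4-device RAID-5 with only 3 real disks; the 4th slot stays degraded
mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/sde1 /dev/sdf1 /dev/sdg1 missing
# later, once the data has been migrated, add the 4th disk and let it resync
mdadm --add /dev/md5 /dev/sdh1
)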
--
Tomasz Chmielewski
http://wpkg.org
itmap=/root/mybitmap \
--write-behind --raid-disks=2 /dev/localdevice --write-mostly
/dev/remotedevice
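(For context, the full command was probably something along the lines of the following sketch; --build rather than --create keeps the superblock non-persistent, which matches the "super non-persistent" /proc/mdstat output quoted later in the thread, and the device names are placeholders:

mdadm --build /dev/md11 --level=1 --bitmap=/root/mybitmap \
      --write-behind --raid-disks=2 /dev/localdevice --write-mostly /dev/remotedevice
)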
One more question - is there a way to estimate the size of the bitmap
file? Does it depend on the size of the array?
What bitmap file size can I expect for a 600 GB array?
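(A rough, back-of-the-envelope sketch: an external bitmap stores one bit per bitmap chunk plus a small 256-byte superblock, so assuming the 1024 KB chunk size that shows up later in this thread:

# ~600 GB / 1024 KB per chunk = ~614400 chunks, one bit each, plus the superblock
echo $(( 600 * 1024 * 1024 / 1024 / 8 + 256 ))    # about 77 KB

i.e. even a 600 GB array should need well under a megabyte of bitmap file.)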
--
Tomasz Chmielewski
n you
really need to, but that should add up to less than one second.
Good, I was wondering if the ~200 MB I have left on the filesystem would be
enough :)
--
Tomasz Chmielewski
http://wpkg.org
ly on SMART.
I have a broken drive with lots of bad blocks - but SMART happily
claims it's fine (short/long tests complete without errors).
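(When SMART looks clean, a read-only surface scan can still flag the bad sectors; a sketch, where sdX is a placeholder for the suspect drive:

# read-only scan of the whole device, prints any unreadable blocks (-s shows progress)
badblocks -sv /dev/sdX
)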
--
Tomasz Chmielewski
http://wpkg.org
which.
Considering you can see HDD LEDs blinking, something like:
dd if=/dev/sdb of=/dev/null
should help you identify the disk :)
--
Tomasz Chmielewski
http://wpkg.org
"N * 2 + 1" drives (where N is the
current number of drives in the array) just to add one drive to RAID-10
is the worst-case scenario.
--
Tomasz Chmielewski
array.
Would the RAID-10 -> RAID-5 migration be as easy as in the RAID-1 -> RAID-5
case?
--
Tomasz Chmielewski
http://wpkg.org
Justin Piszcz schrieb:
On Fri, 24 Aug 2007, Tomasz Chmielewski wrote:
I built RAID-5 on a Debian Etch machine running 2.6.22.5 with this
command:
mdadm --create /dev/md0 --chunk=64 --level=raid5 --raid-devices=5
/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
After some time, it was
Tomasz Chmielewski schrieb:
(...)
Perhaps the bitmap is needed then? I guess that by default, no internal
bitmap is added?
# mdadm -X /dev/md0
Filename : /dev/md0
Magic :
mdadm: invalid bitmap magic 0x0, the bitmap file appears to be corrupted
Version : 0
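(One likely reason for the "invalid bitmap magic" above: -X (--examine-bitmap) expects either the external bitmap file itself or a component device carrying an internal bitmap, not the assembled array device. A sketch - the paths are only examples:

mdadm -X /root/mybitmap    # external bitmap file
mdadm -X /dev/sda1         # component device, if the bitmap is internal
)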
ing to
/proc/mdstat, it would be 6 more hours.
--
Tomasz Chmielewski
http://wpkg.org
Tomasz Chmielewski schrieb:
Justin Piszcz schrieb:
According to the fine manual, BITMAP CHANGES belong to the grow mode.
So, let's try to do what the manual says - try to add a bitmap to the
active array:
# mdadm --grow /dev/md0 --bitmap=internal
mdadm: failed to set internal bitmap
raid1]
md11 : active raid1 sdd[1](W) dm-47[0]
658800640 blocks super non-persistent [2/2] [UU]
[==>..] resync = 91.8% (605314368/658800640)
finish=65.6min speed=13584K/sec
bitmap: 315/315 pages [1260KB], 1024KB chunk, file:
/root/backup-bitmap
--
Tomasz Chmielewski
Tomasz Chmielewski schrieb:
The date of the file is the date of creation of this array, and as I
look inside, it's basically almost empty. Looking at it in a hex editor, there are
zeroes (plus "bitm" etc.) at the beginning, and then FF to the end.
Is it normal?
Here is some more info about the b
s; I
believe they are stable only as of 2.6.22 (before 2.6.22 snapshots
needed a lot of RAM; before 2.6.18 there were problems with snapshot
removal, etc.).
It would be good to add some of that info to the LVM HOWTO.
--
Tomasz Chmielewski
http://wpkg.org
Goswin von Brederlow schrieb:
Tomasz Chmielewski <[EMAIL PROTECTED]> writes:
(...)
Yes, I tried to online resize a similar filesystem (600 GB to 1.2 TB)
and it didn't work.
At some point, resize2fs would just exit with errors.
I tried to do it several times before I figured
;nice -n 19 dm-mirror" ;)
Look into sync_speed_min and sync_speed_max in /sys/block/mdX/md.
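(Those sysfs knobs override the global limits per array, in KB/sec; a sketch, where md0 and the numbers are just examples, and writing "system" restores the global default:

echo 5000  > /sys/block/md0/md/sync_speed_min
echo 50000 > /sys/block/md0/md/sync_speed_max
)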
--
Tomasz Chmielewski
http://blog.wpkg.org
xt3) are not supported on
filesystems placed on md/dm; it's a bit of a pain.
--
Tomasz Chmielewski
http://wpkg.org
s it could be any of the 5 disks in the PC. Is there any way to make it
easier to identify which disk is which?
If the drives have any LEDs, the most reliable way would be:
dd if=/dev/drive of=/dev/null
Then look which LED is the one which blinks the most.
--
Tomasz Chmielewski
http://wpkg.org