On 1.5.2011 08:52, Alexander Farber wrote:
> Hello Mark and others,
>
> On Thu, Apr 28, 2011 at 10:21 PM,
> wrote:
>> At this point, I'd run the long test on each drive, and (after coming back
>> an hour or two later) see the results.
>
> I have that dreadful warning again -
>
> /etc/c
Hello Mark and others,
On Thu, Apr 28, 2011 at 10:21 PM, wrote:
> At this point, I'd run the long test on each drive, and (after coming back
> an hour or two later) see the results.
I have that dreadful warning again -
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not
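The mismatch count and the scrub action are exposed through sysfs, so the check can be re-run (or a repair forced) by hand; a rough sketch, using md0 only because that is the device named in the original warning:
# cat /sys/block/md0/md/mismatch_cnt
# echo check > /sys/block/md0/md/sync_action
# echo repair > /sys/block/md0/md/sync_action
# cat /proc/mdstat
"check" only recounts the mismatches; "repair" rewrites the inconsistent blocks, so the next weekly check should report 0 again.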
Alexander Farber wrote:
> Turned out, smartd kept saying that it had no entries in smartd.conf.
>
> I've copied smartd.rpmnew over smartd.conf, restarted it,
> now I have (in /var/log/messages, date+hostname removed):
At this point, I'd run the long test on each drive, and (after coming back
an hour or two later) see the results.
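A sketch of what that long test would look like with smartctl (assuming the two members are /dev/sda and /dev/sdb, as in the mdstat output elsewhere in the thread):
# smartctl -t long /dev/sda
# smartctl -t long /dev/sdb
then, after the test has had time to finish:
# smartctl -l selftest /dev/sda
# smartctl -a /dev/sda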
Turned out, smartd kept saying that it had no entries in smartd.conf.
I've copied smartd.rpmnew over smartd.conf, restarted it,
now I have (in /var/log/messages, date+hostname removed):
smartd version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontoo
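For reference, even a one-line smartd.conf is enough to silence the "no entries" complaint; a minimal sketch (the directives are only an example, not the rpmnew content):
DEVICESCAN -H -m root
# service smartd restart
DEVICESCAN makes smartd probe the devices itself, -H checks the SMART health status, and -m mails warnings to root.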
On Thu, 2011-04-28 at 21:52 +0200, Alexander Farber wrote:
> On the 2nd try it has booted and seems to work.
Did it give an error on the first try and, if so, which one?
You should check /var/log/messages for i/o errors and check your disks
with smartctl
I have had my raid1 arrays rebuild sometim
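Concretely, something along these lines (device names are examples):
# grep -i 'i/o error' /var/log/messages
# smartctl -H /dev/sda
# smartctl -A /dev/sda
In the attribute output, non-zero Reallocated_Sector_Ct or Current_Pending_Sector values are the usual red flags.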
on 4/28/2011 12:40 PM Alexander Farber spake the following:
> Thank you all, it seems to have finished - I'm rebooting.
>
> Just curious, why is the State of md3 "active" while the others are "clean"?
>
If I remember right, clean means it is completely synced and not being written
to or mounted.
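If that's right, the state just reflects whether there are writes in flight, and it can be read directly; a sketch:
# cat /sys/block/md3/md/array_state
# mdadm --detail /dev/md3 | grep -i state
An array should flip between "active" and "clean" on its own, so the difference is nothing to worry about.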
Alexander Farber wrote:
> On the 2nd try it has booted and seems to work.
>
> The /var/log/mcelog is (and was) empty.
To be expected - I'd expect this to be an h/d error. Check your logfiles for
info from smartd
mark
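Something as simple as this would pull those smartd messages out of the log (a sketch):
# grep smartd /var/log/messages | tail -50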
On the 2nd try it has booted and seems to work.
The /var/log/mcelog is (and was) empty.
# sudo cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
1023936 blocks [2/2] [UU]
md2 : active raid1 sdb5[1] sda5[0]
277728192 blocks [2/2] [UU]
md3 : active raid1 sdb6
Thank you all, it seems to have finished - I'm rebooting.
Just curious, why is the State of md3 "active" while the others are "clean"?
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
1023936 blocks [2/2] [UU]
md1 : active raid1 sda3[0] sdb3[1
On 04/28/2011 03:31 PM, Michel van Deventer wrote:
> Hi,
>
> On Thu, 2011-04-28 at 21:26 +0200, Alexander Farber wrote:
>> Hello, I didn't touch anything, just booted the hoster's "rescue image".
> Cool :)
>
>> # cat /proc/mdstat
>> Personalities : [linear] [raid0] [raid1]
>> md0 : active raid1 s
Hi,
On Thu, 2011-04-28 at 21:26 +0200, Alexander Farber wrote:
> Hello, I didn't touch anything, just booted the hoster's "rescue image".
Cool :)
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1]
> md0 : active raid1 sda1[0] sdb1[1]
> 1023936 blocks [2/2] [UU]
>
> md1 : activ
On 04/28/2011 03:26 PM, Alexander Farber wrote:
> Hello, I didn't touch anything, just booted the hoster's "rescue image".
>
> # cat /etc/mdadm.conf
> cat: /etc/mdadm.conf: No such file or directory
>
>
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1]
> md0 : active raid1 sda1[0]
Hi,
what is the output of 'cat /proc/mdstat' ?
A healthy raid should look something like below :
[root@janeway ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb1[0] sda1[1]
256896 blocks [2/2] [UU]
md0 : active raid1 sdd1[0] sdc1[1]
1465135936 blocks [2/2] [
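[2/2] [UU] means both mirror halves are in place; a degraded pair would show [2/1] with [U_] or [_U]. A rough one-liner to spot that:
# grep -e '\[U_\]' -e '\[_U\]' /proc/mdstat
Any output from that grep means a mirror is running on a single disk.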
On 4/28/2011 2:07 PM, Alexander Farber wrote:
> Hello,
>
> For weeks I have been ignoring this warning at my CentOS 5.6/64-bit machine -
>
> /etc/cron.weekly/99-raid-check:
> WARNING: mismatch_cnt is not 0 on /dev/md0
>
> in the hope that the software RAID would slowly repair itself.
>
> I als
Hello, I didn't touch anything, just booted the hoster's "rescue image".
# cat /etc/mdadm.conf
cat: /etc/mdadm.conf: No such file or directory
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
1023936 blocks [2/2] [UU]
md1 : active raid1 sda3[0
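The missing /etc/mdadm.conf is not fatal - with the usual 0.90 superblocks the kernel assembles the arrays on its own - but the file can be regenerated from whatever is currently running; a sketch:
# echo 'DEVICE partitions' > /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf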
On 04/28/2011 03:10 PM, Alexander Farber wrote:
> Rebuild Status : 38% complete
That's potentially promising. What does 'cat /proc/mdstat' show? Did you
have to recover the array, or were you able to use /etc/mdadm.conf?
--
Digimer
E-Mail: digi...@alteeve.com
AN!Whitepapers: http://alteeve.com
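To follow the rebuild while it runs, something like this works (a sketch):
# watch -n 5 cat /proc/mdstat
# mdadm --detail /dev/md3 | grep -i rebuild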
Additional info (how many RAID arrays do I have??):
# mdadm -D /dev/md3
/dev/md3:
Version : 00.90
Creation Time : Sat Mar 19 22:53:25 2011
Raid Level : raid1
Array Size : 185151360 (176.57 GiB 189.59 GB)
Used Dev Size : 185151360 (176.57 GiB 189.59 GB)
Raid Devices : 2
T
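As for how many arrays there are: /proc/mdstat lists every md device the kernel has assembled, and the on-disk superblocks can be scanned as well; a sketch:
# grep ^md /proc/mdstat
# mdadm --examine --scan
The second command prints one ARRAY line per superblock it finds, whether or not that array is currently running.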