I have a DRBD (drbd 0.7.19) setup on CentOS 3.7 with software RAID.
Everything is OK, but I have hundreds of messages of this type in the log
file:
Jul 24 11:49:31 mail01 kernel: raid5: switching cache buffer size, 4096 --> 512
Jul 24 11:49:31 mail01 kernel: raid5: switching cache buffer size, 512 --> 4096
On 06/19/2012 11:59 PM, Dagia Dorjsuren wrote:
After node1 crashed, I ran the command below on node2, but it did not work.
Since this is node 2, it was secondary. Unless you have Pacemaker or
some other tool installed, it won't automatically become primary. You
need to do that manually:
drbdadm primary <resource>
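For reference, a minimal manual failover on the surviving node might look like
this (a sketch only; the resource name r0, the mount point, and the filesystem
path are assumptions, and forcing the promotion is only appropriate if the peer
really is dead):

    drbdadm primary r0           # promote the surviving node
    cat /proc/drbd               # role should now show Primary/Unknown
    mount /dev/drbd0 /mnt/data   # mount the replicated filesystem

If DRBD refuses the promotion because the peer's data state is unknown, the
8.3-era way to force it is "drbdadm -- --overwrite-data-of-peer primary r0".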
On 06/20/2012 12:59 AM, Dagia Dorjsuren wrote:
# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: 71955441799F513ACA6DA60
0: cs:WFConnection ro:Secondary/Unknown ds:Diskless/DUnknown C r-
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
The backup mach
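Note the ds:Diskless/DUnknown in the status pasted above: node2 has no local
backing device attached, so promoting it cannot succeed until the disk is
attached again. A rough recovery sketch (assuming the resource is called r0
and its backing device is still intact) might be:

    drbdadm attach r0      # reattach the local backing device
    cat /proc/drbd         # disk state should leave Diskless
    drbdadm primary r0     # only then promote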
Hello,
I have configured node1 and node2 as primary and secondary. If node1
crashes, how do I mount /dev/drbd0? After node1 crashed, node2 could not take
over. Can anybody help me, please?
After node1 crashed, I ran the command below on node2, but it did not work.
# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
On 09/05/11 14:10, For@ll wrote:
> On 05.09.2011 14:01, For@ll wrote:
>> Hi,
>>
>> I have two servers, node1 and node2, configured with drbd + heartbeat v1.
Bad idea; use Pacemaker.
>> On
>> device drbd0 I created an LVM PV and VG. I have a problem when I want to
>> promote node2 to be the new primary, because the new primary node doesn't
>> see the LVM volume group.
Hi,
I have two servers, node1 and node2, configured with drbd + heartbeat v1. On
device drbd0 I created an LVM PV and VG. I have a problem when I want to
promote node2 to be the new primary, because the new primary node doesn't see
the LVM volume group.
On the new primary node I must run vgchange -an volume_group
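For what it's worth, a sketch of what usually makes an LVM VG on top of a DRBD
device visible on the freshly promoted node (the names r0, vg_data, lv_data and
the mount point are placeholders; heartbeat would normally drive these steps
through its resource scripts):

    drbdadm primary r0        # promote the DRBD resource first
    pvscan                    # rescan for PVs now that /dev/drbd0 is readable
    vgchange -ay vg_data      # activate the volume group
    mount /dev/vg_data/lv_data /mnt/data

It is also worth checking the LVM filter in /etc/lvm/lvm.conf: it should accept
/dev/drbd* and reject the underlying backing device, otherwise LVM may latch
onto the raw disk instead of the DRBD device.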
Hi All,
I am running DRBD version 8.3.0 on RHEL AS 4.4. Whenever I
restart server1, server2 automatically restarts as well. Could you
please advise on this issue?
Please find the drbd.conf below.
[r...@drbd-one ~]# cat /etc/drbd.conf
common {
syncer {
rate
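For reference, the common section that this snippet is cut off in usually looks
something like the following in a DRBD 8.3 drbd.conf (the 30M rate is only an
illustrative value, not the poster's setting):

    common {
      syncer {
        rate 30M;
      }
    }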
Hi,
I am using DRBD with LifeKeeper and Postgres for the database. My system
runs fine, but recently the whole system froze when I installed
sysstat-7.0.2-3.el5. There was high CPU load, the whole system was
unresponsive, and it did not fail over to the standby. I had to power-cycle
the primary to