Whoops, guess I fixated on one aspect of the problem. Specifically, the
"I/O errors on the secondary stop I/O on the primary". I was thinking
that NFS problems were affecting one or both hosts. I don't think the
master will ever deliberately disconnect from the secondary, unless a
split-brain occurs
Hi,
We are using a cluster setup something like this:
http://picasaweb.google.com/lh/photo/e_uAYjG-8nh7oRZzXDp5HA?feat=directlink
We are using
OpenVZ for Virtualization
DRBD with ocfs2 in dual primary mode
heartbeat + pacemaker for HA
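For reference, here is a minimal sketch of the dual-primary DRBD resource such a stack needs (resource name, device, disk and addresses are placeholders, not taken from our actual config):

resource r0 {
  startup {
    become-primary-on both;      # promote both nodes at startup
  }
  net {
    allow-two-primaries;         # required for ocfs2 on top of drbd
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}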
Currently I have not added drbd to pacemaker, but have added the
Hi Jeff,
The DRBD primary just stops accepting I/O to the DRBD volume. There's no I/O
failure or timeout.
DRBD as a whole just hangs, primary and secondary, with no error messages
(apart from the hardware cause: the 3ware controller on the secondary
losing its enclosure).
It's like the primary is wai
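For what it's worth (just a guess on my part, not something we have tested): the net-section timeouts are supposed to let the primary kick out a peer that stops completing writes instead of waiting on it forever. The values below are only illustrative:

net {
  timeout       60;   # 6.0 s (unit is 0.1 s) before a network timeout is declared
  ping-int      10;   # seconds between keep-alive pings
  ping-timeout   5;   # 0.5 s (unit is 0.1 s) to wait for a ping answer
  ko-count       7;   # peer is kicked out after failing to complete a single
                      # write within timeout * ko-count
}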
This is a split-brain problem, the bad case for DRBD/heartbeat.
I think you need to give up the changes on one node and override them.
Detailed info:
http://www.drbd.org/users-guide-emb/users-guide.html
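Concretely, the manual recovery described there boils down to choosing one node whose changes you throw away (resource name r0 is just a placeholder):

# on the node whose changes you give up (the split-brain "victim"):
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0

# on the surviving node (only needed if it is already StandAlone):
drbdadm connect r0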
regards,
Edward
> Date: Mon, 7 Dec 2009 01:01:03 +0530
> From: unnikrishna...@g
This is where I am stuck. In my case drbd will detect the
split brain only after the connection between the two nodes is
re-established. Until then the two servers will be running and writing
separate data to the two drbd devices, and no one will notice since
drbd will not alert about s
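One way to at least get an alert when that finally happens (a sketch; the script path may differ between distributions) is DRBD's split-brain handler, which fires as soon as the split brain is detected on reconnect:

resource r0 {
  handlers {
    # mails root the moment a split brain is detected after the nodes reconnect
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }
}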
Hi,
I'm using drbd on top of LVM. The layout is:
drbd -> lv -> vg -> pv. I'm trying to do a vgchange -ay on the underlying lv,
which hangs and uses 50% of a CPU. An echo w > /proc/sysrq-trigger gives me
an interesting thing:
Dec 7 13:35:26 z2-3 kernel: [8617743.246522] SysRq : Show Blocked State
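For anyone wanting to look at the same thing, the dump goes to the kernel ring buffer; roughly (log path depends on the syslog setup):

echo w > /proc/sysrq-trigger               # dump all tasks stuck in uninterruptible (D) state
dmesg | grep -A 30 'Show Blocked State'    # or check /var/log/kern.log / /var/log/messages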
>
> I was about to install a new drbd instance, and was browsing through the
> various distribution bug pages, to see which version they distribute and if
> there are known issues with the drbd8 package in the various distributions.
>
> I noticed the following drbd8 bug in the debian package, whi
Hi,
I am having problems utilizing drbd on a Linux PowerPC server.
This is my environment:
# uname -a
Linux ashlin01 2.6.18-164.el5 #1 SMP Tue Aug 18 15:58:09 EDT 2009 ppc64 ppc64
ppc64 GNU/Linux
# fdisk -l /dev/sdc
Disk /dev/sdc: 73.4 GB, 73407488000 bytes
128 heads, 32 sectors/track, 35003
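One detail about this box that may matter (my guess, given the bio-size discussion later in this thread): RHEL 5 ppc64 kernels are built with 64 KiB pages, which can be confirmed with:

# getconf PAGESIZE        # expect 65536 (64 KiB) on this kernel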
On Mon, Dec 07, 2009 at 09:21:04AM -0800, Vadym Chepkov wrote:
> Hi,
>
> I am having problems utilizing drbd on a Linux PowerPC server.
>
> This is my environment:
>
> # uname -a
> Linux ashlin01 2.6.18-164.el5 #1 SMP Tue Aug 18 15:58:09 EDT 2009 ppc64 ppc64
> ppc64 GNU/Linux
>
> # fdisk -l /d
On Mon, Dec 07, 2009 at 05:40:18PM +0100, Maxence DUNNEWIND wrote:
> Hi,
>
> I'm using drbd on top of LVM. The layout is:
> drbd -> lv -> vg -> pv. I'm trying to do a vgchange -ay on the underlying lv,
> which hangs and uses 50% of a CPU. An echo w > /proc/sysrq-trigger gives
> me
> an inter
Hi,
> > Anyway, the drbd device using this lv is down, so it shouldn't block the
> > lvchange
> > (I checked with drbdsetup /dev/drbdXX show, only the syncer part is shown).
>
> care to _show_ us?
z2-3:~# drbdsetup /dev/drbd12 show
syncer {
rate 61440k; # bytes/second
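(Aside: drbdsetup show reports the rate in bytes/second, so the 61440k above corresponds to a drbd.conf entry like this:)

syncer {
  rate 60M;    # resync rate in bytes/second; 61440k == 60M
}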
On Mon, Dec 07, 2009 at 10:17:10PM +0100, Maxence DUNNEWIND wrote:
> > > Anyway, the drbd device using this lv is down, so it shouldn't block the
> > > lvchange
> > > (I checked with drbdsetup /dev/drbdXX show, only the syncer part is
> > > shown).
> >
> > care to _show_ us?
> z2-3:~# drbdsetup /
--- On Mon, 12/7/09, Lars Ellenberg wrote:
> hm. ugly.
> no one should submit 64k bios to DRBD, as we
> unfortunately have a stupid, badly chosen, and someday-to-be-fixed
> limit at 32k per bio.
> and we _do_ announce this limit.
I never specified any block size or modified any non-default p
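To see what limit the drbd device actually announces to the layers above it (device name drbd0 assumed, adjust to the minor in question):

# cat /sys/block/drbd0/queue/max_hw_sectors_kb    # presumably 32, given the 32k-per-bio limit
# cat /sys/block/drbd0/queue/max_sectors_kb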
> > 16: cs:NetworkFailure ro:Primary/Unknown ds:UpToDate/DUnknown C r---d
> > ns:24777340 nr:0 dw:87720268 dr:12029753 al:1082 bm:1582 lo:0 pe:23
> > ua:0 ap:23 ep:1 wo:b oos:0
>
> So "it" is probably "hanging" on this one.
>
> kernel logs of drbd16?
When I do "echo t > /proc/sysrq-trigger",