Don't you really need cluster-aware LVM for that?
On 9/13/2010 3:17 PM, Digimer wrote:
I would use '/dev/sda4' to back the DRBD device. Then I would use
'/dev/drbd0' as the LVM PV. On this PV, create your VG and then your
various LVs for your VMs.
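A minimal sketch of that stack, assuming the resource is named r0 and the VG is called vg_vms (names are illustrative):

  drbdadm create-md r0              # metadata on the backing device (/dev/sda4)
  drbdadm up r0
  drbdadm primary r0                # first promotion may need --force / --overwrite-data-of-peer
  pvcreate /dev/drbd0               # the PV goes on the DRBD device, not on /dev/sda4
  vgcreate vg_vms /dev/drbd0
  lvcreate -L 20G -n vm01 vg_vms    # one LV per VM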
On 4/8/11 11:22 AM, Cristian Mammoli - Apra Sistemi wrote:
Hi, as far as I understand from the docs, DRBD as a RHCS resource is
only supported as single primary.
If I need to use it as a dual primary device for GFS or CLVM I need to
start it BEFORE cman and manage it outside of the cluster.
Right.
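On RHEL 6, one way to check (and adjust) that ordering is through the SysV init links; a rough sketch, assuming the stock init scripts:

  chkconfig --list drbd cman clvmd
  ls /etc/rc3.d/ | egrep 'drbd|cman|clvmd'   # the S-numbers give the boot order; drbd needs the lower number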
I have a pair of systems running RHEL6 w/ DRBD, cman/pacemaker and
clvmd. They worked happily for about 6 months on RHEL6.0. I applied
updates to the systems today taking them to RHEL6.1 (as of about a week
ago). They are running the 8.3.8.1 release of DRBD, both before and
after the RHEL kernel update.
On 8/3/11 5:22 AM, Florian Haas wrote:
This is a result of the RHEL 6.1 kernel containing backports of
post-2.6.32 upstream code that removed the notion of barriers from the
I/O stack. Strictly speaking, this does not break the kernel ABI so this
is no violation of Red Hat's own policy, but it's
I updated to 8.4.0 and still have a stability issue.
Outline of steps:
Stopped pacemaker/clvmd/cman and stopped drbd. Verified kernel module
was unloaded. Updated to 8.4.0, started drbd on both sides and verified
primary/primary. Ran a verify and 0 blocks out of sync.
started cman/clvmd/pacemaker.
I took my kernel back to an earlier release on the unstable boxes and
that didn't seem to do much for me.
David
On 8/15/11 10:38 AM, Dominik Epple wrote:
Hi list,
we are facing kernel panics with CentOS 6 (RHEL 6 compatible) with kernel
version 2.6.32-71.29.1.el6.x86_64, drbd version 8.
On 8/16/11 9:14 AM, Fosforo wrote:
In the past I've had the same problem; the bug was the kernel module
built for CentOS being used on RHEL, or vice versa. It was solved by
getting support from Linbit, which provided us with new, non-buggy
compiled kernel modules.
I compiled 8.4.0 against
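For reference, building 8.4.0 from the source tarball against the running kernel goes roughly like this (a sketch; check the INSTALL notes shipped with the tarball):

  tar xzf drbd-8.4.0.tar.gz && cd drbd-8.4.0
  ./configure --with-km             # build the kernel module along with the userland tools
  make && make install
  modprobe drbd && cat /proc/drbd   # confirm the freshly built module loads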
On 8/18/11 7:15 AM, Gerald Brandt wrote:
Hi,
Would it be possible to setup a DRBD system, with the primary being setup now
(with heartbeat/pacemaker), and setting up the secondary a few months from now?
I have a system running like this - You will need to invest some time
(and downtime) to implement it.
On 8/28/11 11:44 AM, Peter Hinse wrote:
I hope I sorted it out by re-creating the cluster with cman plus
corosync and pacemaker and without the dlm-pcmk and gfs-pcmk packages.
The drbd config, however, remained the same. Whenever I reboot single
servers now or go into standby, no further kernel panics occur.
On 8/28/11 11:46 AM, Peter Hinse wrote:
I got around the kernel panics by using cman/corosync & pacemaker and
avoid the dlm-pcmk and gfs-pcmk packages (they are no longer packaged
for RHEL 6.1 anyway).
Are you running clvmd? I spent some time yesterday rebooting my drbd
nodes with various sta
On 9/7/11 11:10 AM, Roth, Steven (ESS Software) wrote:
Greetings,
My team is seeing kernel panics when using DRBD inside an RHCS
cluster. These appear quite similar, perhaps identical, to the panics
reported by others on this list. We would greatly appreciate any
assistance with them.
I have used both 8.3.8-8.3.11 and 8.4.0 with RHEL6.0 and 6.1.
David
On 9/13/11 9:40 AM, claude.duplis...@gdc4s.com wrote:
Which version of DRBD can be compiled to work with Red Hat 6.0? Note: I
cannot upgrade to 6.1 at this time.
Thanks,
Claude Duplissey
210-524-8202
On 9/20/11 8:08 PM, Ivan Pavlenko wrote:
Hi All,
Recently I had a split brain on my cluster. It was not a big issue, but
I still haven't found the reason for this glitch. I got the following
in my log file:
What kernel, what version of drbd?
What did the other node log?
On 9/20/11 11:50 PM, Ivan Pavlenko wrote:
Sorry for the incomplete info.
It's RHEL 5.5 with the 2.6.18-194.17.1.el5 kernel, and DRBD is version
8.3.8 (api:88).
Thank you,
Ivan
On 9/25/11 1:57 AM, Christian Völker wrote:
I wanted to make the secondary the new primary with "drbdadm primary
drbd0", but it refused to do so because the old primary was still
reachable by ping and over the network, and the drbd service was still running.
Did you 'drbdadm secondary drbd0' on the primary first?
On 9/25/11 8:13 AM, Christian Völker wrote:
I have to manually set my primary to act as secondary, even though it
seems obvious because it has lost its disk? That's strange.
It can still fulfill its IO requests by sending them over the wire. I'm
not sure if you use Pacemaker's DRBD scripts.
You can make a node primary during sync, or even when it does not have a
backing device. Primary/secondary status has little to do with the
status of the backing device, unless you do not have at least one
UpToDate device in the resource.
Just 'drbdadm secondary <resource>' on the current primary, and 'drbdadm primary <resource>' on the other node.
Are you using clustered LVM? If not, you probably should be.
You could try deactivating the VG then reactivating it and seeing if the
new primary sees the LV.
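Something along these lines, assuming the VG is called vg_data (name is illustrative):

  vgchange -an vg_data   # deactivate the volume group
  vgchange -ay vg_data   # reactivate it; the LVs should reappear under /dev/vg_data/
  lvs vg_data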
On 10/3/11 6:02 AM, mar...@marcap.de wrote:
Dear all,
I want to build an HA setup with virtual machines on a two-node
hardware setup.
  LV Status              available
  # open                 0
  LV Size                20.00 GB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
I think it should work without clustered LVM.
On 11/1/11 7:22 AM, Andrew McGill wrote:
Hmm... sounds like good advice... the trouble is that the fault was only
observed after a few hours of continuous operation, and on a workload
identical to the previous hour of workload. A previous similar workload ran
for 16 hours and produced
On 12/19/11 7:59 AM, Taylor, Jonn wrote:
version: 8.2.6 (api:88/proto:86-88)
I'd start by updating to 8.3.12.
David
Actually, with RHEL you buy your entitlement for the OS from Red Hat and support
for DRBD from Linbit. You can get the supported binaries for RHEL from Linbit
at that point, and Red Hat and Linbit will work together on support cases which
involve DRBD.
None of the RHEL 6 Add-Ons get you DRBD. HA just gives you the cluster stack.
On 1/23/12 10:05 PM, Trey Dockendorf wrote:
What's odd is the sync hang only happened once I unmounted the LV
/vmstore. Reading the docs, I found they mention that the resource does
not have to be empty, but they don't mention whether it can be in use.
You tried to sync your DRBD resource using two LVs as the backing devices.
On 1/24/12 3:56 AM, Felix Frank wrote:
On 01/24/2012 04:05 AM, Trey Dockendorf wrote:
block drbd0: open("/dev/vg_cllakvm2/lv_vmstore") failed with -16
David: Is this where you take the "already mounted bit" from?
No, he mentioned his sync hung when he umounted /vmstore - From his
steps and t
On 2/17/12 4:19 AM, Lawrence Strydom wrote:
Hi List,
I used DRBD in dual-primary mode with ocfs2 for my load-balancing web
server cluster. I didn't encounter any errors during setup, and when I
put the web site on the DRBD device on the primary node, it replicated
without any errors. It has
On 2/17/12 6:03 AM, Lawrence Strydom wrote:
Thanks for the replies Felix and David,
OK, losing data on the one node is not an issue for me at this point,
but I cannot afford a repeat. I am very glad this happened now, before
going live.
I shut down ocfs2 and o2cb on the secondary node and am bu
On 2/17/12 6:27 AM, Felix Frank wrote:
More specifically: You're running dual-primary. That means *any* hiccup
in your replication link will instantly split-brain your cluster as far
as DRBD is concerned.
Agreed - Searching for PingAck in syslog is my lazy way of finding out
when it actually happened.
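Something like this, assuming the logs land in /var/log/messages (the path varies by distro):

  grep -h 'PingAck did not arrive in time' /var/log/messages* | tail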
On 2/17/12 6:56 AM, Lawrence Strydom wrote:
Couldn't find any PingAcks in the syslog - what else could have
caused the split brain?
./syslog.2:Feb 15 13:40:35 web02 kernel: [ 16.455206] block drbd0: self
D3CCDACF6FD7FDB8:4579E80074D400D3:C117CFF0A5777F0F:0004
bits:14607528 fl
It was UpToDate/UpToDate before I copied the
sites across. It then sat for a couple of weeks doing nothing until this
week, when dev tested on it and this happened.
Lawrence
On 17 February 2012 14:01, David Coulson <da...@davidcoulson.net> wrote:
On 2/17/12 6:56 AM, Lawrence Strydom wrote:
this case?
On 17 February 2012 14:28, David Coulson <da...@davidcoulson.net> wrote:
Can you add NICs? There should be no requirement for DRBD
replication to happen over any particular interface, and obviously
having its own dedicated interfaces without a switch
Can you post the drbd logs from each node, along with the configs? I'm confused
why one is UpToDate/UpToDate, and the other is UpToDate/DUnknown. Are you sure
they are each connecting to each other?
David
On Feb 24, 2012, at 10:36 AM, fatcha...@gmx.de wrote:
> Hi,
>
> I'm using drbd83-8.3.12-2
http://www.drbd.org/users-guide/s-resolve-split-brain.html
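In short, the manual recovery described there picks a split-brain victim and discards its changes; a rough sketch using the 8.3 syntax and a resource called r0:

  # on the node whose changes will be discarded:
  drbdadm secondary r0
  drbdadm -- --discard-my-data connect r0
  # on the surviving node, only if it is also StandAlone:
  drbdadm connect r0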
On Apr 17, 2012, at 11:06 AM, Jacek Osiecki wrote:
> Hello,
>
> I am currently testing a dual-master setup with DRBD+OCFS2.
> Finally I managed to get it working well on kernel 2.6.39.4, DRBD version
> 8.3.10 (userland version: 8.4.1) a
Can you give the specifics of the fix? Someone else may encounter the
same issue.
On 4/28/12 2:19 PM, Marcel Kraan wrote:
I fixed the problem; there was an error in the settings.
On 28 apr. 2012, at 11:54, Marcel Kraan wrote:
Hello All,
I'm Marcel Kraan.
I lost all my users during an NFS mount.
Why not just add this?
common {
  startup {
    wfc-timeout      60;
    degr-wfc-timeout 120;
    wait-after-sb;
  }
}
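Roughly: wfc-timeout is how long the init script waits for the peer connection at boot, degr-wfc-timeout applies when the node was already degraded before the reboot, and wait-after-sb makes it keep waiting even if a split brain was detected. These only affect the init script's wait, not running resources.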
On May 14, 2012, at 4:50 PM, John Anthony wrote:
> looking at the /etc/init.d/drbd script -
> $DRBDADM wait-con-int - is the hidden
No, it explicitly uses the IPs and ports defined in your configuration
for the resource.
What exactly do you mean by 'floating IP setup'?
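A quick way to see exactly what it will use is to dump the parsed configuration for the resource (r0 assumed):

  drbdadm dump r0   # the 'address <ip>:<port>;' lines are what DRBD binds to and connects to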
On 5/30/12 8:44 PM, Chris Dickson wrote:
Hello everyone, my question is how exactly does DRBD connect to its
peer in a floating IP setup; I ask this because
Does it matter? Your data is accessible from both nodes while it syncs.
I suppose there is a risk of the primary failing before it has a full
copy on the other node, but since that is a one-time thing at build, it
isn't a big deal.
Short answer is a bigger pipe, faster drives, and plenty of CPU. And
Provide cat /proc/drbd from both nodes. I'd guess you're split-brained and one node
doesn't have an UpToDate disk.
Sent from my iPad
On Aug 2, 2012, at 4:52 AM, "Yount, William D" wrote:
> I have a two-node DRBD cluster. It is set up in Primary/Primary. Everything
> works well. I turned Node2 off a
You can, but it makes it a pain to manage the configuration files and to
troubleshoot when the two systems don't match.
On 12/22/12 10:03 PM, Walter Robert Ditzler wrote:
Hi there,
is it possible to have a different configuration of the LVM structure
(volume group), but not of the LV itself?
ex:
host a
Why not just put eth1 on a different vlan?
Sent from my iPad
On Jan 4, 2013, at 7:49 PM, Gavin Henry wrote:
>> Has anyone experienced this whilst running DRBD over eth1 between two
>> CentOS 5.7 servers?
>>
>> eth1 is a private IP address, unroutable. eth0 is the public address.
>> CentOS will
Is DRBD started before clvmd? At least on RHEL 6, that works for me. The
init.d script for clvmd will activate any cluster volume groups it
finds, but your drbd resource has to be running for that to work properly.
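A quick sanity check of that ordering on a running node (vg_cluster is an assumed VG name):

  service drbd status                  # resource should be Connected and UpToDate
  service clvmd status
  vgs -o vg_name,vg_attr vg_cluster    # a 'c' in the attributes marks a clustered VG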
On 1/13/13 4:14 PM, Andy Dills wrote:
I'm trying to setup a gfs2 filesystem on a
What is your network between the two systems?
Feb 19 19:20:56 srv2-2 kernel: block drbd1: PingAck did not arrive in time.
That means DRBD couldn't communicate between the nodes.
David
On 3/10/13 10:59 AM, AZ 9901 wrote:
On 5 March 2013 at 07:21, AZ 9901 wrote:
// I made some errors in my
network OVH uses between its 2 data centers RBX
& SGB:
http://www.ovh.co.uk/dedicated_servers/data_centre_selection.xml
I have a 100Mbps connection between the 2 servers.
Best regards,
Ben
On 10 March 2013 at 16:01, David Coulson wrote:
What is your network between the two systems?
You need to use a cluster-aware filesystem such as ocfs2 or GFS to be
able to mount a DRBD device on both boxes at the same time. It looks
like you are mounting the underlying /dev/sd* block device, not /dev/drbd1.
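With a cluster filesystem and both nodes Primary, the mount would look something like this (GFS2 shown; r0 and /mnt/data are illustrative):

  drbdadm primary r0                   # on both nodes; requires allow-two-primaries
  mount -t gfs2 /dev/drbd1 /mnt/data   # mount the DRBD device, never /dev/sd*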
On 3/17/2010 11:40 AM, Cameron Smith wrote:
I am stuck.
I did an initial in
The lower device typically holds the drbd meta data, as well as the
actual block data for the filesystem you install on the DRBD device. So,
it should not be mounted like a traditional filesystem.
On 3/18/10 12:10 PM, Cameron Smith wrote:
Ok that explains a lot. Thanks David!
It would be nice
Yes, you should not mount the lower block devices, only the /dev/drbd1
device. Take any lower devices out of /etc/fstab.
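A rough recovery sequence, assuming /dev/sdb1 is the lower device and r0 the resource (names illustrative):

  umount /dev/sdb1              # release the lower device
  # remove or comment out its /etc/fstab entry so it is not mounted at boot
  drbdadm up r0
  mount /dev/drbd1 /mnt/data    # mount the DRBD device instead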
On 3/18/10 12:04 PM, Cameron Smith wrote:
I am experiencing the error:
"# drbdadm up r0
/dev/drbd1: Failure: (114) Lower device is already claimed. This
usually means it is
You have to elevate the secondary to primary on node failure. Heartbeat
is a good way to do this.
Did you read the HOW-TO?
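Manually, the failover on the surviving node amounts to something like this (r0 and /data are illustrative; heartbeat just automates these steps):

  drbdadm primary r0
  mount /dev/drbd1 /data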
On 3/18/10 1:51 PM, Cameron Smith wrote:
Why does data not become active on the secondary when the primary is down?
I have a two-node setup and live data is on /dev/drbd1 on node