Hi.
> 1) - First of all, is LVM on DRBD really reliable in production mode?
Yes. Definitely.
> 2) - In case of a DRBD crash on the primary, are there tools to recover the
> filesystem, and what about the secondary?
I'm not familiar with any use cases. DRBD does not make it a habit to
destroy your filesystem.
Hi,
> # pvs
> /dev/drbd0: open failed: Falscher Medien-Typ [wrong medium type]
> PV          VG          Fmt   Attr  PSize    PFree
> /dev/drbd1  replicated  lvm2  a-    120,00G  76,05G
> /dev/md1    localvg     lvm2  a-    1,81T    1,67T
What's this talk of drbd0? I guess it can be ignored?
> But when setting it to 140G
On 10/26/2010 11:20 PM, Lewis Donzis wrote:
> We'd like to be able to make backups from our DRBD secondary by mounting
> the underlying filesystems. After some searching, this appears to be a
> relatively common discussion: running DRBD on top of LVM, making a
> snapshot of the backing LV on the s
> You need to run LVM on top of DRBD for that. You can have LVM - DRBD - LVM,
> but the snapshotting should occur on the top LVM.
Maybe he *should* be doing that, but either must work. Snapshotting the
backing device and mounting the fs it contains is quite possible.
Sincerely,
Felix
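For reference, a minimal sketch of that approach on the secondary, assuming the DRBD backing device is an LV (VG/LV names, snapshot size and mount point are made up; with internal metadata the snapshot also carries DRBD's metadata at the end of the device, which doesn't stop you from mounting the filesystem inside):

    # on the secondary node
    lvcreate -s -L 10G -n backup_snap /dev/vg0/drbd_backing   # snapshot the backing LV
    mount -o ro /dev/vg0/backup_snap /mnt/backup              # mount the fs it contains
    # ... run the backup against /mnt/backup ...
    umount /mnt/backup
    lvremove -f /dev/vg0/backup_snap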
On 10/27/2010 11:04 AM, Thomas Baumann wrote:
> Hi,
>
> so just do what the message says: zero the metadata on the underlying
> storage:
>
> dd if=/dev/zero of=/dev/VolGroup00/LogVol00 bs=1M count=1
>
> This should help.
>
> Best regards,
>
> Thomas.
Hi,
it appears the OP would like his 900G
>> Also, your filesystem should have been created after
>> create-md, on the /dev/drbdX device (unless I'm gravely
>> mistaken).
>
> Yes, that makes sense, but I was mainly trying to illustrate that it was
> specifically the create-md function that "did something" to the disk to
> make it unmou
On 10/31/2010 05:53 PM, David Muir Sharnoff wrote:
> Is it okay to upgrade just one host in a drbd pair? I would like to
> try out a new kernel on one side before installing on both. They're
> currently running drbd 8.0.8 and I would like to upgrade to the drbd
> included in the debian binaries
On 11/15/2010 10:47 AM, Robert Dunkley wrote:
> Hi,
>
>
> Has anyone done this?
>
> I was planning as follows:
> drbdadm detach resource
So you're keeping it connected to the peer and going Diskless? I'm not sure
that this is a good idea.
> lvm
> lvcreate newlv (Same size on different volume gro
On 11/16/2010 04:53 PM, Robinson, Eric wrote:
> Is it possible to deploy a 3-node CRM-based cluster where:
>
> -- nodes A and C share resource R1 on /dev/drbd0
>
> -- nodes B and C share resource R2 on /dev/drbd1
>
> -- resource constraints prevent R1 from running on node B and
On 11/16/2010 05:55 PM, Robinson, Eric wrote:
>>> Is it possible to deploy a 3-node CRM-based cluster where:
>>>
>>> -- nodes A and C share resource R1 on /dev/drbd0
>>>
>>> -- nodes B and C share resource R2 on /dev/drbd1
>>>
>>> -- resource constraints prevent R1 from running on node
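A hedged sketch of how such constraints could look in the crm shell; the resource and node names below are invented, and ms_drbd_r1/ms_drbd_r2 stand for the master/slave clones of the two DRBD resources:

    # keep R1 off node B and R2 off node A
    location r1-never-on-b ms_drbd_r1 -inf: nodeB
    location r2-never-on-a ms_drbd_r2 -inf: nodeA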
On 11/17/2010 01:28 PM, Steve in Tokyo wrote:
>
> Hi All,
>
> I am new to DRBD and have what is likely a silly question, I hope this is
> the appropriate place to ask and apologize if it isn't.
>
> I have built a 2 node Proxmox cluster and am using LVM over DRBD, with DRBD
> in dual primary mode
On 11/18/2010 02:37 AM, steve wrote:
> Dear Felix,
>
> Thank you for your reply. Yes, the DRBD is the PV for the LVM volume
> groups. I have multiple volumes in the volume group. One of the volumes
> I created specifically for the NFS and made it an ext3 volume. This I
> have mounted as a drive fr
On 18.11.2010 18:52, Or Gerlitz wrote:
> Dario Fiumicello - Antek wrote:
>> You can promote an outdated secondary to primary with:
>> drbdadm -- --overwrite-data-of-peer primary *
>> but this way, once the old primary is restored you'll get a split brain.
> Why is that, once the old primary is restored, and
On 18.11.2010 18:50, Antonio Anselmi wrote:
> I'm wondering if it could be possible to run DRBD in such a scenario:
> snip
> Is that correct?
I didn't read each single line of it, but assuming the respective
sections are identical, yes.
Note that you can save yourself some hassle and just distribute the
On 18.11.2010 23:28, Or Gerlitz wrote:
> Felix Frank wrote:
>> Before it crashed, the old primary will have received changes that the other
>> node doesn't know about. Hence split brain.
> Got that, so it's an "after-sb-1pri" situation and what I was suggested
> to do is actually wha
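For context, the manual split-brain recovery described in the users guide boils down to roughly this; the resource name r0 is an assumption:

    # on the node whose changes are to be thrown away
    drbdadm secondary r0
    drbdadm -- --discard-my-data connect r0
    # on the surviving node, if it has also dropped to StandAlone
    drbdadm connect r0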
On 11/25/2010 12:38 PM, Pavlos Parissis wrote:
On 25 November 2010 11:43, Lars Ellenberg wrote:
On Thu, Nov 25, 2010 at 11:32:02AM +0100, Pavlos Parissis wrote:
I guess the result of -1000 on uuid_compare is quite cryptic and
doesn't give you much information on the root cause.
Then don't f
On 12/06/2010 06:08 PM, Klaus Darilion wrote:
...
> So, why again resynchronizing almost 500Mb although the partition is not
> used at all (just mounted in a domU).
It does this based on the activity log. See
http://www.drbd.org/users-guide/s-activity-log.html for the details.
> When I tried to m
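The amount resynced after a primary crash corresponds to the activity log size (al-extents x 4MB), so it can be tuned in drbd.conf if that trade-off matters; a sketch, with the value only as an example:

    resource r0 {
      syncer {
        al-extents 257;   # ~1GB of "hot" extents; larger means fewer metadata
                          # updates, but more to resync after a primary crash
      }
    }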
On 12/07/2010 02:09 PM, Klaus Darilion wrote:
>
>
> On 07.12.2010 12:08, Klaus Darilion wrote:
>>> You will need to resolve that, refer to
>>> http://www.drbd.org/users-guide/s-resolve-split-brain.html
>>
>> I did that now and try to reproduce the problem.
>
> It happened again. Here is what I
On 12/07/2010 04:39 PM, Andrew Gideon wrote:
> On Tue, 07 Dec 2010 14:29:39 +0100, Felix Frank wrote:
>
>> Node A is primary and marks some extents as hot.
>
> Is this just a matter of timing, or were they hot because the writes
> occurred while node B was down? I'
> Or could this have occurred before B was shut down?
Such was my assumption.
> Yes, but after writes occur to A while B is down, B is out of date even
> if B doesn't know it.
A knows :-)
> That makes sense to me. It's what I would expect. But it doesn't seem
> to fit what klaus.mailingli
Hi,
in the following setup
* node A Kernel 2.6.24-21-xen, DRBD 8.3.1
* node B Kernel 2.6.27.48-xen, DRBD 8.3.1
this is my scenario:
The peers are interconnected via WAN and share 7 DRBDs. Usually, the
ones on node B run in StandAlone, always Secondary for disaster recovery
purposes. Node A is al
Hi,
On 12/20/2010 05:30 PM, Marc Richter wrote:
> Hi there,
> 1)
> Do you think that plan is ok?
Yes.
> 2)
> Why does a split brain happen here?
When your old secondary becomes primary, it generates a new UUID. The
old primary will deny any knowledge of its data (well, not all, but you
saw t
On 12/21/2010 11:09 PM, gilmarli...@agrovale.com.br wrote:
>
> I again. Does anyone know please tell me in a position to use DRBD as
> two-primary,
> but will replicate the same with several VG LVMs to 2 servers. and these
> servers
> will make the first recordings on August LVM and the other serv
On 12/22/2010 11:19 AM, gilmarli...@agrovale.com.br wrote:
>
>
> The environment contains a PV VGxen called, exists within this PV 20 LV. The
> replica Drbd entire LV PV with 20 for the second primary server. In the first
> primary server and made the recording at LV 17 and the server and secon
>>
>>
>> But the result is undefined! What should DRBD write to the other member? The
>> result of the first or the second write?
>>
>> You are using a tool that permits the execution of stupid I/O streams. Good
>> for stress testing, but not good for data integrity. If you want undefined
>> data
> In any case, be sure to have (at least) RAID 1 on each node backing the
> DRBD devices to help minimize downtime. Drives fail fairly frequently...
> software RAID 1 is an inexpensive route to much better uptime. :)
If the budget isn't severely restricted, I'd also throw in an actual
RAID control
> With DRBD as a PV on top of MD, you can still allocate out that PV
> incrementally and resize the individual LVs as needed. If you put in
> bigger disks, you can just add another partition for the new space,
> create a new PV, and concatenate the exist VG in linear mode with the
> new PVs. The
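A minimal sketch of that growth path, with invented device and VG/LV names:

    pvcreate /dev/drbd2              # new DRBD device on the added partition
    vgextend vg_data /dev/drbd2      # concatenate it into the existing VG
    lvextend -L +100G /dev/vg_data/export
    resize2fs /dev/vg_data/export    # grow an ext3/ext4 filesystem online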
On 01/06/2011 01:38 PM, trekker...@abclinuxu.cz wrote:
> Hello.
>
> I'm in this situation: Host A has a DRBD resource (primary, StandAlone)
> and I want to create a diskless peer for it in dual-primary setup on
> host B. So I do the following:
>
> A: /sbin/drbdsetup 10 net 192.168.1.191:19010 192
> The risk here, though, is with split-brains.
>
> Consider this;
>
> You have two partitions on your DRBD; one each for two VMs. In the course
> of normal operation you have one VM running on NodeA and the other VM
> running on NodeB. DRBD Primary/Primary will allow this. Then though, you
> have
On 01/09/2011 05:03 PM, Bart Coninckx wrote:
> Hi all,
>
> am reading the Novell docs on DRBD and one of the ways they mention to speed
> up DRBD is using external metadata. Possibly a hard question to answer but
> could someone indicate by what degree this would enhance speed (provided
> using
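For orientation, external metadata is simply a matter of pointing meta-disk at a separate device in the per-host section of the resource; device names below are placeholders:

    on alice {
      device    /dev/drbd0;
      disk      /dev/sda7;
      meta-disk /dev/sdb1[0];   # metadata kept off the data device
      address   10.0.0.1:7788;
    }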
On 01/10/2011 03:37 PM, Raoul Bhatia [IPAX] wrote:
> On 01/05/2011 04:15 PM, Raoul Bhatia [IPAX] wrote:
>> hi,
>>
>> On 01/05/2011 03:46 PM, Florian Haas wrote:
>>> Look at your paste. You have no node where DRBD is Secondary. What do
>>> you expect the agent to do?
>>
>> sorry - the /proc/drbd inf
Hi,
a kernel log with timestamps would be a lot more useful here.
>
> And on my secondary:
>
> block drbd1: conn( SyncTarget -> Connected ) disk( Inconsistent ->
> UpToDate )
Now you're good.
> block drbd1: helper command: /sbin/drbdadm after-resync-target minor-1
> block drbd1: helper com
On 01/10/2011 05:16 PM, netz-haut - stephan seitz wrote:
> Hi there,
>
> yesterday I did a regular manual fail-over (swap-over) to the second node of
> a primary/slave drbd cluster.
>
> This is the haresources:
> filer01 IPaddr::172.16.1.240/24/bond0 IPaddr::172.16.2.240/24/bond0 Delay::1
> drb
On 01/11/2011 03:19 PM, netz-haut - stephan seitz wrote:
>>> Could anyone shed some light on what could be wrong? Thanks!
>>
>> I believe I heard something about LVM being smart about aligning itself
>> to array stripes. If your stripe size changed, it may be possible that
>> LVM expects to find si
> You usually leave S disconnected, that's why you need a full sync to
> bring S up to speed, but normally what you would do when using stacked
> resources would be to configure S with protocol A, this is actually the
> recommendation in the drbd.org docs
> http://www.drbd.org/users-guide/s-three-n
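A rough sketch of such a stacked resource, along the lines of the three-node example in the users guide; names and addresses are placeholders:

    resource r0-U {
      protocol A;                   # asynchronous towards the DR site
      stacked-on-top-of r0 {
        device   /dev/drbd10;
        address  192.168.42.1:7789;
      }
      on backup-site {
        device    /dev/drbd10;
        disk      /dev/sdb1;
        address   192.168.42.2:7789;
        meta-disk internal;
      }
    }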
On 01/12/2011 01:26 PM, Marc Richter wrote:
> Hi There.
>
> I'm still failing in replacing a HA - node and home someone may help me.
> I'm trying the following:
> I have two nodes which serve as a HA - NAS and are connected by DRBD. We
> have bought new Hardware and installed a new version of the
>> But the snapshot has - i suppose - no filesystem, because the LV for the KVM
>> has no one. So - i suppose - i can't
>> mount the snapshot.
>> But when i'm not able to mount it, how can it be backuped (from the host's
>> point of view) ?
>>
>>
>> Bernd
>
> I suppose you could create the snaps
On 01/14/2011 04:00 PM, Lentes, Bernd wrote:
> Felix Frank wrote:
>
>>>
>>> I suppose you could create the snapshot and then use 'dd'
>> to create a
>>> bit-level image of the LV.
>>>
>>
>> And failing that, kpartx is your
On 01/14/2011 04:45 PM, Lentes, Bernd wrote:
>
> Felix Frank wrote:
>>>>
>>> How can i create the image file kpartx uses ?
>>
>> You don't. Just create nodes for your very snapshot.
>>
>
> Ok, i try to understand:
> With lvcreate -s
On 01/14/2011 05:39 PM, Lentes, Bernd wrote:
>
> Felix Frank wrote:
>
>>> Ok, i try to understand:
>>> With lvcreate -s i create the snapshot. This creates an
>> entry /dev/vgxxx/lvxxx.
>>> Then i use kpartx for ... what ? Creating a second ent
Hi,
> Frank, your idea is that the lv (and also the snapshot) contains one or more
> partitions, depending on the guest OS setup.
It's "Felix".
> For these partitions, i create nodes in /dev using kpartx -a
> /dev/vgxxx/vm_snapshot.
> These nodes can now be mounted. Now my backup software in the hos
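Put together, the workflow being discussed here looks roughly like this (VG, LV and mount point names are made up, and the exact /dev/mapper name kpartx produces can vary):

    lvcreate -s -L 5G -n vm_snapshot /dev/vgxxx/vm_disk    # snapshot the VM's LV
    kpartx -a /dev/vgxxx/vm_snapshot                       # create nodes for its partitions
    mount -o ro /dev/mapper/vgxxx-vm_snapshot1 /mnt/vmfs   # mount one partition read-only
    # ... back up /mnt/vmfs ...
    umount /mnt/vmfs
    kpartx -d /dev/vgxxx/vm_snapshot                       # remove the partition mappings
    lvremove -f /dev/vgxxx/vm_snapshot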
On 01/14/2011 07:50 PM, netz-haut - stephan seitz wrote:
> Hi there,
> I've got a dumb question ...
> If a dryrun shows the following output, I'd better *NOT* execute it for real
> with live services on top?
> root@filer01:~# drbdadm -d adjust all
> drbdsetup 1 detach
> drbdsetup 1 disk /dev/sdb /dev/sdb
On 01/15/2011 06:24 PM, Pit Bull wrote:
>
> Hi all.
>
> I'm using DRBD + Heartbeat on three servers
>
> 1 server + 2 server have device drbd1
> 3 server have stacked on drbd1 device drbd10
>
> drbd10 is used by Heartbeat for sharing it on a virtual ip
>
> config - >
> http://www.howtoforge.com/drbd-
On 01/16/2011 11:38 PM, Zack_McFly wrote:
>
> Hi,
>
> I have some issues using DRBD with two primary nodes and OCFS2. It worked
> fine at the beginning, I was able to share files but now my nodes are both
> StandAlone and this is what I get when I do cat /proc/drbd (on both nodes):
>
> GIT-has
On 01/18/2011 12:45 AM, Cameron Smith wrote:
Well looking further online it seems the two versions are compatible!
Can anybody here confirm that please?
Now my main two questions are:
1) Will having different size partitions between nodes cause an issue?
(my meta-data is on a separate partition
>> If both Server1 and Server2 fail, there is no reason why DRBD wouldn't
>> run StandAlone on Server3.
>
> Wait a minute, how can Server 3 be StandAlone and have cluster resources
> (the shared ip provided by
> heartbeat) if it is not written into the Heartbeat conf?
Sorry - I wasn't implying that Heartbe
On 01/18/2011 04:06 PM, Yannick Warnier wrote:
> Hi all,
>
> I'm a bit new in the field. To resize partitions, is it enough to have
> the same number of MB as reported by gParted? Or is there something
> special I must do to have the exact same size?
>
> In particular, I am worried that, my disks
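Rather than trusting MB figures from gParted, it is safer to compare the exact byte counts of the two backing devices, for example:

    blockdev --getsize64 /dev/sdXN   # run on both nodes; device name is a placeholder

DRBD will use the smaller of the two sizes anyway, but matching them avoids surprises, especially with internal metadata at the end of the device.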
On 01/21/2011 11:42 PM, Nick Couchman wrote:
> I have a situation where I'd like to be able to use DRBD, but only
> between two local block devices. I'd like to present an iSCSI-based
> disk to a host and replicate a local block device to that iSCSI disk
> asynchronously. The Linux software RAI
Hi,
On 01/24/2011 01:20 AM, Lew wrote:
> I've encountered some unexpected behavior with a split brain instance.
> It seems from what has occurred that the default behavior is set to roll
> back & discard changes.
>
> Recently in my sand pit, I've been manually disconnecting resources as
> an ad
On 01/24/2011 04:08 PM, Nick Couchman wrote:
>
>> Hi,
>>
>> what level of asynchrony do you need? And (out of curiosity) why?
>>
>> Cheers,
>> Felix
>
> I'll tell you what I'm trying to do, and maybe that will answer both
> questions. I'm trying to roll my own disk-based backup solution.
> Basic
On 01/24/2011 06:15 PM, Nick Couchman wrote:
>
>> Ah, I see. So you want to retain Read-Only access even when iSCSI is
>> disconnected? That's problematic, as DRBD will probably detect possible
>> split-brains and refuse to resume synchronization. You can of course
>> discard your local backing
> common {
> protocol A;
>
> handlers {
> pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh";
> pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh";
> echo o > /proc/sysrq-trigger ; halt -f";
The above looks..."funny" to m
On 01/25/2011 03:47 PM, Nick Couchman wrote:
>> I
>> got you completely backwards then. No issues with your scenario as far
>> as I can see.
>>
>> About asynchrony, you may want to try Protocol B.
>>
>
> Right, but how do I accomplish this with two local disks? Since the
> iSCSI disk will appear
> The naming of the drbd devices seems to be a little picky. I'd like to
> be able to call one /dev/drbd-do-no-use or something like that, but drbd
> seems to choke on this and is fairly strict about the drbd naming
> scheme. Oh well, not a huge deal, I'll just have to be very careful.
There's a
Hi,
> date stamps on the notify-* scripts are all uniform (predating the system
> build) & I don't recall modifying them at all.
good.
> From the logs, I'm curious about the lines...
> Jan 23 15:07:16 emlsurit-v4 kernel: [ 15.044910] block drbd9: 0 KB (0 bits)
> marked out-of-sync by on disk
Hi,
On 01/27/2011 09:02 AM, Nauman Yousuf wrote:
> Dear All
>
> I am running 2 DRBD servers, 1 primary and another secondary .. somehow
> a heavy load is generated for some time, also on the machine that mounts
> the drbd volumes.
Please share your DRBD config.
> drbd: initialised. Version: 0.7.23 (
On 01/27/2011 11:13 AM, Nauman Yousuf wrote:
> here
>
> Primary DRBD
>
> skip {
> Lustre storage cluster replication setup
> LUSTRE OSS active/active Replication
>
> }
>
> global {
> minor-count 2;
> }
>
> resource r0 {
> protocol C;
>
> startup {
> degr-wfc-timeou
On 01/29/2011 02:43 AM, Lewis Shobbrook wrote:
> That's correct no sync has taken place & it is still un-synced.
>
> ...
>
> The resource nodes are still disconnected and no override has been used to
> force the situation.
> The only commands issued have been drbdadm connect all, drbdadm connect
On 01/31/2011 07:12 AM, Muhammad Sharfuddin wrote:
> OS: SLES11 SP1 x86
> SLE HAE 11 SP1
> DRBD version: 8.3.7
> /dev/drbd0 is an oracle file system (/opt/oracle)
>
> We are having serious performance issues when using DRBD over a very
> slow but a very reliable WAN link (2.5 Mbps).
> As of now we a
> thanks ,-)
> tried that as well. the problem is, that it still completely locks
> I/O-access to the disk (only this lvm to be exact) from time to time
> while deleting. the only work around that seems to work is deleting each
> file separately in a loop and add very short sleep (eg. for i in $(fi
Hi.
> if I check the status of drbd the following response is:
>
> 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r
> ns:640864680 nr:135234784 dw:776099464 dr:1520599328 al:836478 bm:2185
> lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
>
> I do not know if it i
On 02/09/2011 12:21 PM, ionral wrote:
>
>
>
> Felix Frank-2 wrote:
>>
>> Hi.
>>
>>> if I check the status of drbd the following response is:
>>>
>>> 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r
>>
Hi,
On 02/09/2011 06:58 PM, Chris Barnes wrote:
> First, I hope this is ok to post this question in this email list...
> Secondly, I have been reading the Users Guide at
> http://www.drbd.org/users-guide/ - but still have a few "before I start"
> type of questions.
>
> I am wanting to create a hi
Hi,
these tools work (together, more or less) out of the box.
What's the specific problem you need solved?
Regards,
Felix
On 02/15/2011 06:29 PM, Cristiano Bosenbecker Hellwig wrote:
> Any solution to work with iptables drbd and heartbeat?
>
> reggards,
On 02/17/2011 06:19 PM, bart.conin...@telenet.be wrote:
> mmm. so a DRBD performance problem. let's see what happens if it is back
> online.
Why wouldn't it be, say, an NFS problem?
Cheers,
Felix
>
> - "Eric Robinson" schreef:
>>
>> Hmmm... had to reboot the primary. Failover would not hap
>> If so, then kill *that* process. lsof and/or fuser can help
>> a lot with determining what's using things and killing those things.
>>
>
> Couldn't it be someone sitting in the mounted directory on an NFS client
> computer? Getting to that might be hard. Is there some way to get a hung
> nfsd
On 02/21/2011 04:03 PM, Vadym Chepkov wrote:
> On Mon, Feb 21, 2011 at 8:50 AM, Martin Miels wrote:
>> Hello all,
>>
>> I am taking first steps into cluster-territory. I have an experimental
>> CentOS 5.5 / DRBD 8.3.10 setup which works nicely. Major kudos to Mr.
>> Levrinc for his DRBD-MC appli
On 02/24/2011 02:27 PM, netz-haut - stephan seitz wrote:
> Heading back to my former question,
> Is it advisable to split huge storage into smaller chunks?
>
>
> Instead of:
>
> [ - big LV ]
> [ - big VG ] [ -> expand later ]
> [ -- big D
On 03/04/2011 10:24 PM, Brian Hirt wrote:
> Hi,
>
> I have recently set up a two node cluster (on ubuntu 10.04LTS servers) using
> drbd, nfs & pacemaker/heartbeat. Everything is working well so far. I
> have some questions about best practices for the clients that are mounting
> the volume
> Mar 7 18:31:41 db1 kernel: [ 1186.440928] e1000: eth0 NIC Link is Up
> 1000 Mbps Full Duplex, Flow Control: RX
> root@db1:~#
>
> Shouldn't the two nodes re-establish connectivity?
It looks to me like you need to "drbdadm connect" your primary once
more. It does seem strange, though. What does
> root@db1:~# drbdadm connect r0
> root@db1:~# cat /proc/drbd
> version: 8.3.7 (api:88/proto:86-91)
> GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@db1,
> 2011-03-07 15:01:39
> 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r
> ns:0 nr:0 dw:240977 dr:37746 al
> Which logs should I check besides the kernel ones I included in
> the first post (for both db1 and db2)? The db1 one has not changed,
> but since I've done drbdadm connect r0 in db1, db2's kernel log
> (included below on its entirety from the moment I told db1 to
> reconnect) is complaining
> I feel like I must be missing something obvious here but can't figure
> out what...
I didn't notice anything obvious to me. Before you try and reconnect,
examine the UUIDs on both sides using drbdadm get-gi.
The second block on the Primary should be equal to the first one on the
Secondary. It s
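For reference, the comparison described above can be done with (resource name r0 assumed):

    drbdadm get-gi r0    # generation identifiers: current/bitmap/history UUIDs plus flags
    drbdadm show-gi r0   # the same data in an annotated, easier-to-read form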
> Oh. When you say reconnect do you mean reconnecting ethernet or
> drdbadm reconnect? If the former, too late and I will have to do it
> all again. If the latter, I have not done it yet so I should be good.
> In any case, here is the output with db1's ethernet cable reconnected:
No, you're
On 03/08/2011 04:23 PM, Dennis Jacobfeuerborn wrote:
> Hi,
> I'm trying set up a redundant iscsi server using drbd but it seems I'm
> unable to get the os to recognize the partitioning of the drbd device:
>
> [root@storage2 ~]# fdisk -l /dev/drbd1
>
> Disk /dev/drbd1: 1073 MB, 1073672192 bytes
>
> While kpartx works I'm wondering why it is necessary. I would expect
> /dev/drbd1 to behave like a regular block device and the partitions to
> show up if not after writing the new table then at least after a reboot.
> Eventually I will probably go for an LVM setup but since this is
> supposed t
>> while partitioning a partition is possible and rather straight-forward,
>> it sure isn't standard practice.
>
> I wasn't actually suggesting to create "partitions in partitions" I just
> wasn't aware that the drbd device nodes are partitions and not basic
> block devices (like i.e. the /dev/xvd
Hi,
On 03/24/2011 01:10 PM, Михаил Евстратов wrote:
> Hi!
>
> I have three-node configuration.
> (Like this
> http://www.howtoforge.com/drbd-8.3-third-node-replication-with-debian-etch)
>
> When I run
>
> drbdadm --stacked down r0-U
> and
> mkfs.ext3 /dev/drbd0
uhm...did you create an fs on th
On 03/23/2011 04:18 PM, Martin Probst wrote:
> Hello List,
>
> Im using the following setup:
>
> Two Nodes:
> physical harddisks contains a LVM with one VG and 4 LV. Each LV contains a
> xen based virtual machine.
> each LV is connected to the other node by an explicit drbd configuration in
>
Hi,
> Found LVM2 physical volume signature
> 8257536 kB data area apparently used
> 8281212 kB left usable by current configuration
> ...
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *            1          13      104391   83  Linux
> /dev/sda2
On 03/29/2011 11:37 PM, Mateusz Kalisiak wrote:
> Hello Everyone,
>
> I'd like to setup the standard MySQL-DRBD-HA architecture:
> - PRIMARY server with MySQL and mounted /dev/drbd0,
> - SECONDARY just being standalone and waiting for failover.
> Additionally, during normal cluster activity (prima
Hi,
I never built DRBD RPMs, however...
On 03/30/2011 11:41 AM, liumouwang666 wrote:
> hello,
> When I configure drbd and execute /etc/init.d/drbd, the following error
> appears:
> Starting DRBD resources: Can not load the drbd module
>
> I got the the drbd-8.3.10 release, After I exec
On 03/30/2011 10:59 PM, fernando figueroa wrote:
> Hello everyone, i'm new in this list and new in drbd too.
Hi, welcome.
> I'd set up a cluster with two nodes using drbd 8.2.6 and Heartbeat
> (primary and secondary), everything works fine, if the primary node
> goes down, the second node takes
On 04/01/2011 11:19 AM, siva subramani wrote:
> Hi,
>
> I want to make drbd devices in sync in following scenario
>
> Create drbd device on top of the logical volume on two machines as follows
>
> System1:
>
> /dev/drbd1 from /dev/*VG_Node-1*/LV1
>
> System 2:
>
> /dev/drbd1 from /dev/*VG_No
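Assuming a matching resource definition exists on both systems, the usual sequence to bring such a pair into sync for the first time is (r1 is the resource name from the post, the rest is the standard procedure):

    # on both systems
    drbdadm create-md r1
    drbdadm up r1
    # on the system whose data should win, once both sides show Connected
    drbdadm -- --overwrite-data-of-peer primary r1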
Hi,
On 04/01/2011 11:35 AM, liumouwang666 wrote:
> Hi,
> When I execute make km-rpm, the following errors appear.
>
> error: Failed build dependencies:
> kernel-syms is needed by drbd-km-8.3.10-1
>
> Why?
You probably failed to install the "kernel-syms" package beforehand?
If you
Hi,
On 04/05/2011 03:38 AM, Michael McGlothlin wrote:
> I'd like to link three servers together over my 10Gb network in a
> clustering filesystem with everything replicated onto each and have
> each one serving files by iSCSI to my VMware ESXi servers. I'd also
> like to have a fourth node that is
On 04/12/2011 09:32 PM, Mark Petersen wrote:
> Using drbd 8.3.10 I've seen write performance above ~165 MB/s but I haven't
> been able to get above ~225 MB/s. I believe it depends more on PCI, FSB,
> Memory, QPI, etc. speed/bottlenecks more than the inter-connect speeds
> though, especially whe
On 04/15/2011 05:36 PM, Jean-Francois Malouin wrote:
> Hi,
>
> I can't seem to compile drbd-8.3.10 on Debian/squeeze using
> module-assistant. This is with Debian kernel
> linux-image-2.6.32-5-xen-amd64 (2.6.32-31)
>
> make[3]: Entering directory
> `/usr/src/linux-headers-2.6.32-5-xen-amd64'
>
On 04/19/2011 08:52 AM, Jesse Angell wrote:
> I've done some more testing and /dev/sdc appears to be completely healthy. I
> can fsck it without any issues. No write errors appear until drbd tries to
> sync. All IO tests I've done work without issue and no errors.
>
> I replaced the raid card
On 04/19/2011 05:38 PM, Aditya bajaj wrote:
> Hello all,
>
> we have a two node cluster setup , under heavy load (created using many
> paralallel ping flood sessions) the following messages are being
> observed in syslog.
>
> Apr 19 09:49:59 err CLA-0 kernel: drbd6: PingAck did not arrive in ti
Hi,
> 0: cs:Connected ro:Secondary/Secondary ds:Diskless/Inconsistent C r
>
> ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
>
>
>
> root1:~# mount /dev/drbd0
>
> mount: ne peut repérer /dev/drbd0 dans /etc/fstab ou /etc/mtab [can't find /dev/drbd0 in /etc/fstab or /etc/mtab]
No good: Your DRBD is secondary. You
(taking this back on-list)
On 04/28/2011 05:11 PM, Edwige Odedele wrote:
> Thank for response,
>
> Do you speak French? (If yes, it will be easy for me.)
Barely - what I know wouldn't make things easy for anyone ;-)
>
> I didn't understand your sentence: " No good: Your DRBD is secondary.
> You
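The general rule behind that remark: the DRBD device can only be mounted on a node where the resource is Primary and has usable data underneath. A minimal sketch (resource name r0 and mount point are assumptions; in the state quoted above the local disk would first have to be attached and brought UpToDate):

    drbdadm primary r0
    mount /dev/drbd0 /mnt/data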
Hi,
On 04/30/2011 11:12 PM, Maxim Ianoglo wrote:
> Hello,
>
> Have two DELL PE R410, RAID 10 15K rpm Seagate SAS disks. Both servers
> are connected via 1Gb link.
>
> Testing performance with bonnie++.
> On a RAW device I get: ~240MB/s Read and ~390MB/s Write
> On DRBD device with Secondary DRB
On 05/02/2011 01:10 PM, Meisam Mohammadkhani wrote:
> Hi All,
>
> I'm new to DRBD. I'm searching around a solution for our enterprise
> application that is responsible to save(and manipulate) historical data
> of industrial devices. Now, we have two stations that works like hot
> redundant of each
On 05/05/2011 09:47 PM, Jeff Humby wrote:
> Hello all,
>
> I've been working to configure 2 virtual servers running CentOS 5.6
> Kernel 2.6.35.4. The goal is to do this twice. First I would like to
> create a db cluster then I would like to do the same with the application
>
> I have installed he
On either node: drbdsetup /dev/drbd0 syncer -r 30M
Assuming a 1Gbit link. Scale to your bandwidth.
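The drbdsetup call above does not survive a restart of the resource; the persistent equivalent goes into drbd.conf, roughly:

    resource r0 {
      syncer {
        rate 30M;   # caps resync bandwidth only, not ongoing replication
      }
    }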
On 05/06/2011 05:37 PM, Edwige Odedele wrote:
> *( you can write in English if you want)*
>
>
>
> Hello,
>
>
>
> I have a problem with DRBD synchronization between two servers:
> root1(
On 05/08/2011 07:46 PM, Maxim Ianoglo wrote:
> Hello,
>
> Getting some "strange" results on write testing on DRBD Primary.
> Every time I get more data written than 1Gb link can handle.
> Get about 133 MB/s with 1Gb link saturated and both Nodes in sync.
> Also If I make a test with files of size
Hi,
> I get the feeling that someone needs to read up about caching,
> especially Linux page cache, Linux IO stack,
> and where DRBD is located in there.
never hurts.
> We typically have
> [ applications ]
> [ application and library buffers ]
> [ file systems ]
> [ page cache ]
> [
Hi,
On 05/10/2011 03:17 PM, Lyre wrote:
> Hi all:
>
> I'm concerning some details about syncing. For example, the secondary
> was disconnected while new data was written to the primary, and then I
> connect the secondary.
>
> Does drbd sync only the different parts or all data on the disk ? And
On 05/10/2011 05:35 PM, Bart Coninckx wrote:
> Hi,
>
> I plan to use a dual primary setup for Xen DomUs. The DRBD devices are
> created on top of LVM LVs.
> Before, I used to do this on top of iSCSI devices and for backup I just
> dd-ed the snapshotted LVM device after the DomU was saved (and th