On Thu, Mar 02, 2017 at 02:56:58PM -0500, Digimer wrote:
> >> [root@aae-a01n02 ~]# cat /proc/drbd
> >> version: 8.3.16 (api:88/proto:86-97)
> >> GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by
> >> r...@rhel6-builder-production.alteeve.ca, 2015-04-05 19:59:27
> >> 0: cs:Connected ro:Pr
On 02/03/17 02:40 PM, Lars Ellenberg wrote:
> On Thu, Mar 02, 2017 at 03:07:52AM -0500, Digimer wrote:
>> Hi all,
>>
>> We had an event last night on a system that's been in production for a
>> couple of years; DRBD 8.3.16. At almost exactly midnight, both nodes
>> threw these errors:
>>
>> =
On Thu, Mar 02, 2017 at 03:07:52AM -0500, Digimer wrote:
> Hi all,
>
> We had an event last night on a system that's been in production for a
> couple of years; DRBD 8.3.16. At almost exactly midnight, both nodes
> threw these errors:
>
> =
> Feb 28 03:42:01 aae-a01n01 rsyslogd: [origin soft
Hi all,
We had an event last night on a system that's been in production for a
couple of years; DRBD 8.3.16. At almost exactly midnight, both nodes
threw these errors:
=
Feb 28 03:42:01 aae-a01n01 rsyslogd: [origin software="rsyslogd"
swVersion="5.8.10" x-pid="1729" x-info="http://www.rsyslog.com"]
On Tuesday 26 June 2012 at 17:35:38, Florian Haas wrote:
> On Tue, Jun 26, 2012 at 10:51 AM, Thilo Uttendorfer wrote:
> >> Libvirt configuration or qemu/kvm command line please? (don't post
> >> them here; pastebin and share the URL instead).
> >
> > thanks for your help:
> > http://pastebin.com/mZxaJr5a
On Tue, Jun 26, 2012 at 10:51 AM, Thilo Uttendorfer wrote:
>> Libvirt configuration or qemu/kvm command line please? (don't post
>> them here; pastebin and share the URL instead).
>
> thanks for your help:
> http://pastebin.com/mZxaJr5a
OK, can we see the DRBD config too, please?
Cheers,
Florian
On Monday 25 June 2012 at 17:10:51, Florian Haas wrote:
> On Fri, Jun 22, 2012 at 6:53 PM, Thilo Uttendorfer wrote:
> > we have a KVM cluster with each virtual machine running on top of a DRBD
> > device. After running this setup for many months we recently saw these log
> > messages on two (of total 15) DRBD devices:
On Fri, Jun 22, 2012 at 6:53 PM, Thilo Uttendorfer wrote:
> Hi,
>
> we have a KVM cluster with each virtual machine running on top of a DRBD
> device. After running this setup for many months we recently saw these log
> messages on two (of total 15) DRBD devices:
>
> Jun 22 17:53:01 server-v1 kernel: [4995041.347749] block drbd13:
Hi,
we have a KVM cluster with each virtual machine running on top of a DRBD
device. After running this setup for many months we recently saw these log
messages on two (of total 15) DRBD devices:
Jun 22 17:53:01 server-v1 kernel: [4995041.347749] block drbd13:
qemu-system-x86[9395] Concurrent local write detected!
Hello,
On 01/18/2012 12:39 PM, Alessandro Bono wrote:
> Hi
>
> installing a kvm virtual machine on a drbd disk causes these logs on the
> host machine
>
> [2571736.830557] block drbd0: kvm[7083] Concurrent local write detected!
> [DISCARD L] new: 48981951s +32768; pending: 48981951s +32768
> [257173
Hi
installing a kvm virtual machine on a drbd disk causes these logs on the host
machine
[2571736.830557] block drbd0: kvm[7083] Concurrent local write detected!
[DISCARD L] new: 48981951s +32768; pending: 48981951s +32768
[2571736.857671] block drbd0: kvm[7083] Concurrent local write detected!
[DISCARD L]
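
A note on reading these lines: the value before the "s" appears to be a
512-byte sector index and the "+32768" the request size in bytes (that unit
interpretation is an assumption, not something stated in the thread), so both
writes above cover the identical 64-sector span. A minimal Python sketch of
the overlap test the message implies (my own layout, not DRBD's code):

SECTOR_SIZE = 512

def byte_range(sector, size_bytes):
    # "48981951s +32768" -> sector index 48981951, request of 32768 bytes
    start = sector * SECTOR_SIZE
    return start, start + size_bytes

def overlaps(new, pending):
    new_start, new_end = byte_range(*new)
    pend_start, pend_end = byte_range(*pending)
    return new_start < pend_end and pend_start < new_end

# The two writes from the log above: identical ranges, hence the warning.
print(overlaps((48981951, 32768), (48981951, 32768)))  # True -> conflict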
On Wed, Dec 29, 2010 at 02:49:00PM -0700, Chris Worley wrote:
> I really think drbd is being brain-dead here.
That's very much possible.
But see
http://old.nabble.com/IET-1.4.20.2-hosting-vmfs-on-drbd-complains-about-concurrent-write-td29756710.html#a29767518
> Concurrent writes to
> the same LBA aren't an issue... just do it!
On Mon, Jan 3, 2011 at 11:44 AM, Chris Worley wrote:
> On Mon, Jan 3, 2011 at 1:22 AM, Felix Frank wrote:
> >> This is part of Unix file system semantics.
> >>
> >> A dead system is not the proper outcome.
> >
> > Color me ignorant, but what have *file system* semantics got to do with
On Mon, Jan 3, 2011 at 1:22 AM, Felix Frank wrote:
>> But the result is undefined! What should DRBD write to the other member? The
>> result of the first or the second write?
>>
>> You are using a tool that permits the execution of stupid I/O streams. Good
>> for stress testing, but not good for data integrity. If you want undefined
>> data
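
The "undefined result" point is easy to reproduce without DRBD at all. A
minimal sketch (plain Python against a scratch file; the file name is made up
and nothing here is DRBD code): two unsynchronized writers hit the same
offset, and which pattern you read back afterwards depends purely on thread
scheduling. A replica applying the same two writes in the other order would
diverge, which is exactly DRBD's complaint.

import os
import threading

PATTERN_A = b"A" * 4096
PATTERN_B = b"B" * 4096

fd = os.open("scratch.img", os.O_RDWR | os.O_CREAT, 0o600)
os.pwrite(fd, b"\0" * 4096, 0)

def writer(pattern):
    for _ in range(10000):
        os.pwrite(fd, pattern, 0)   # same offset, no locking, no ordering

threads = [threading.Thread(target=writer, args=(p,))
           for p in (PATTERN_A, PATTERN_B)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Whichever writer happened to go last "wins"; run it twice and you may
# well see a different answer.
print(os.pread(fd, 1, 0))
os.close(fd)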
On Wed, Dec 29, 2010 at 3:28 PM, Dan Barker wrote:
>> From: drbd-user-boun...@lists.linbit.
>> Sent: Wednesday, December 29, 2010 4:49 PM
>>
>> I really think drbd is being brain-dead here. Concurrent writes to
>> the same LBA aren't an issue... just do it! Note the below is using a
>> primary/secondary setup on two raw drbd devices; no GFS anywhere.
> From: drbd-user-boun...@lists.linbit.
> Sent: Wednesday, December 29, 2010 4:49 PM
>
> I really think drbd is being brain-dead here. Concurrent writes to
> the same LBA aren't an issue... just do it! Note the below is using a
> primary/secondary setup on two raw drbd devices; no GFS anywhere.
I really think drbd is being brain-dead here. Concurrent writes to
the same LBA aren't an issue... just do it! Note the below is using a
primary/secondary setup on two raw drbd devices; no GFS anywhere.
Let me use two fio invocations as an example, sorry if
you don't know fio.
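
The fio commands themselves are cut off in the archive. As a purely
hypothetical stand-in (my own sketch in Python, not the poster's actual
invocation, with a scratch file in place of a raw device such as /dev/drbd0),
the workload shape two such jobs create is two independent random writers
over one shared region, so overlapping in-flight writes of the kind DRBD
flags become near-certain:

import os
import random
from multiprocessing import Process

TARGET = "scratch.img"            # stand-in for a raw device
REGION = 1 << 20                  # 1 MiB shared region
BLOCK = 4096

def random_writer(seed):
    rng = random.Random(seed)
    fd = os.open(TARGET, os.O_RDWR)
    for _ in range(5000):
        # Unsynchronized 4 KiB writes at block-aligned random offsets.
        offset = rng.randrange(0, REGION // BLOCK) * BLOCK
        os.pwrite(fd, bytes([seed]) * BLOCK, offset)
    os.close(fd)

if __name__ == "__main__":
    with open(TARGET, "wb") as f:
        f.truncate(REGION)
    jobs = [Process(target=random_writer, args=(seed,)) for seed in (1, 2)]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    # Only 256 distinct block offsets per writer: collisions are expected.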
On Mon, Dec 20, 2010 at 2:06 PM, Chris Worley wrote:
> I'm using RHEL5.5/2.6.18-194.3.1.el5 and IB/SDP.
>
What version of DRBD are you using and what versions have you tried?
-JR
I've read in the archives that this is a severe error, even in a
primary/primary setup, but I have seen nothing that fixes it, and I see these
errors spew constantly whenever using DRBD, on both primary systems (with GFS
atop or not).
I'm using RHEL5.5/2.6.18-194.3.1.el5 and IB/SDP.
This seems to have eventual