Hi,
we have not solved the performance issue yet. However, we were able to improve
the responsiveness of the system. We no longer get timeouts and have re-enabled
pacemaker.
The problem that led to an unresponsive system was that Ubuntu 12.04 LTS
uses the CFQ I/O scheduler by default. Ubuntu 10.04 LTS used t
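For reference, the active scheduler can be checked and switched at runtime; a minimal sketch, assuming the backing disk is /dev/sda (adjust the device name):

  # show the available schedulers; the active one is in brackets
  cat /sys/block/sda/queue/scheduler
  # switch to the deadline scheduler at runtime
  echo deadline > /sys/block/sda/queue/scheduler
  # to make it permanent, add "elevator=deadline" to the kernel command
  # line in /etc/default/grub and run update-grub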
> Dedicated replication link?
>
> Maybe the additional latency is all that kills you.
> Do you have non-volatile write cache on your IO backend?
> Did you post your drbd configuration settings already?
There is a dedicated 10 Gigabit Ethernet replication link between the two nodes.
There is also a cache
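To give a rough idea of how such a dedicated link shows up in the DRBD configuration, here is a purely illustrative sketch with hypothetical hostnames, IPs and backing devices (not our actual settings):

  resource r0 {
    on nodeA {
      device    /dev/drbd0;
      disk      /dev/vg0/nfs;      # hypothetical LVM backing device
      address   10.0.0.1:7788;     # IP on the dedicated replication link
      meta-disk internal;
    }
    on nodeB {
      device    /dev/drbd0;
      disk      /dev/vg0/nfs;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }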
Dejan Muhamedagic wrote:
> Horrible. And there's the answer to your question too. There's a
> new release planned for this week. You can also just drop the
> new exportfs (mind the permissions!).
>
I've already installed the new version. The size of rmtab went down to 3 kB.
However, this does not
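For anyone who wants to check this on their own server, rmtab can be inspected directly (assuming the usual Linux location under /var/lib/nfs):

  # size and number of entries in the NFS rmtab
  ls -lh /var/lib/nfs/rmtab
  wc -l /var/lib/nfs/rmtab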
Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, May 21, 2012 at 01:36:56AM +0200, Christoph Bartoschek wrote:
>> Hi,
>>
>> we currently have the problem that when the NFS server is highly used the
>> heartbeat:exportfs monitor script fails with a timeout because it
Florian Haas wrote:
> On Mon, May 21, 2012 at 1:36 AM, Christoph Bartoschek
> wrote:
>> Hi,
>>
>> we currently have the problem that when the NFS server is highly used the
>> heartbeat:exportfs monitor script fails with a timeout because it cannot
>> write
Florian Haas wrote:
>> Thus I would expect to have a write performance of about 100 MByte/s. But
>> dd gives me only 20 MByte/s.
>>
>> dd if=/dev/zero of=bigfile.10G bs=8192 count=1310720
>> 1310720+0 records in
>> 1310720+0 records out
>> 10737418240 bytes (11 GB) copied, 498.26 s, 21.5 MB/s
>
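One thing worth ruling out is that the figure is distorted by caching or by the small block size. A variant that forces the data to disk before reporting the rate, as a sketch with the same 10 GB of data but larger blocks:

  # write 10 GiB in 1 MiB blocks and fsync before reporting the rate
  dd if=/dev/zero of=bigfile.10G bs=1M count=10240 conv=fdatasync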
Raoul Bhatia [IPAX] wrote:
> I haven't seen such an issue during my current tests.
>
>> Is ext4 unsuitable for such a setup? Or is the Linux NFSv3 implementation
>> broken? Are buffers too large such that one has to wait too long for a
>> flush?
>
> Maybe I'll have the time to switch from xfs to ex
resources. Unfortunately pacemaker cannot know that everything will be fine
once the utilization goes down again.
My question now is: is it necessary to synchronize rmtab at all? Shouldn't the
clients just reconnect after a timeout?
--
Christoph Barto
emmanuel segura wrote:
> Hello Christoph
>
> To do some tuning on DRBD you can look at this link
>
> http://www.drbd.org/users-guide/s-latency-tuning.html
>
Hi,
I do not have the impression that DRBD is the problem here, because a similar
setup without LVM, ext4 and NFS on top of it works fine.
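That said, tuning along those lines would be expressed in drbd.conf roughly like the following sketch (illustrative values only, not taken from our setup; the guide should be consulted for what actually applies):

  resource r0 {
    net {
      sndbuf-size    0;       # 0 lets the kernel auto-tune the send buffer
      max-buffers    8000;    # allow more in-flight buffers on fast links
      max-epoch-size 8000;
    }
  }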
Thanks
Christoph Bartoschek
Andrew Beekhof wrote:
> On Wed, May 9, 2012 at 4:38 AM, Christoph Bartoschek
> wrote:
>> Hi,
>>
>> we have a two-node pacemaker setup to serve NFS directories. Today we had
>> to put node A into standby to replace some hardware. The services went to
>> node
Dejan Muhamedagic wrote:
> On Wed, May 09, 2012 at 03:15:52PM +0200, Christoph Bartoschek wrote:
>> Dejan Muhamedagic wrote:
>>
>> > Hi,
>> >
>> > On Tue, May 08, 2012 at 09:53:53PM +0200, Christoph Bartoschek wrote:
>> >> emmanuel
Dejan Muhamedagic wrote:
> Hi,
>
> On Tue, May 08, 2012 at 09:53:53PM +0200, Christoph Bartoschek wrote:
>> emmanuel segura wrote:
>>
>> > why do you specify resource_stickiness twice in your cluster config?
>> >
>> > can you show all your clus
emmanuel segura wrote:
> why do you specify resource_stickiness twice in your cluster config?
>
> can you show all your cluster config?
>
> use pastebin :-)
I do not know why it is there twice.
Here is the full config: http://cpp.codepad.org/XjuexaPI
I think that the task is to somehow start w
"2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1336481054"
rsc_defaults $id="rsc-options" \
resource-stickiness="100" \
resource_stickiness="200"
How c
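For what it's worth, the hyphenated resource-stickiness is the current meta attribute name; a single value can be set and the leftover entry removed with something like this sketch using the crm shell (value is arbitrary here):

  # set one stickiness value in rsc_defaults
  crm configure rsc_defaults resource-stickiness=200
  # then drop the stray resource_stickiness attribute interactively
  crm configure edit rsc-options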
Andrew Beekhof wrote:
> Question though, do the nfs clients need to restart if the server
> component does?
I am not sure, but if they are connected via TCP the connection should get
lost on a server restart.
As the service IP is not available during the server restart, the clients are
not notified
Christoph Bartoschek wrote:
> Andrew Beekhof wrote:
>
>> 2011/4/4 Christoph Bartoschek :
>>> Andrew Beekhof wrote:
>>>
>>>> 2011/4/1 Christoph Bartoschek
>>>> :
>>>>> Andrew Beekhof wrote:
>>>>>
>>>>
Andrew Beekhof wrote:
> 2011/4/1 Christoph Bartoschek :
>> Andrew Beekhof wrote:
>>
>>> You didn't mention a version number... I think you'll be happier with
>>> 1.1.5 (I recall fixing a similar issue).
>>
>> I see a similar problem wit
Andrew Beekhof wrote:
> You didn't mention a version number... I think you'll be happier with
> 1.1.5 (I recall fixing a similar issue).
I see a similar problem with 1.1.5. I have a two-node NFS server setup.
The following happens:
1. The resources run on node A.
2. I put node A into standby.
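(For reference, putting a node into standby and bringing it back is done with the crm shell; the node name here is just a placeholder:)

  crm node standby nodeA    # move resources away from nodeA
  crm node online nodeA     # bring it back afterwards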
Lars Ellenberg wrote:
> On Fri, Mar 25, 2011 at 06:39:10PM +0100, Christoph Bartoschek wrote:
>> Hi,
>>
>> I've already sent this mail to linux-ha but that list seems to be dead:
>
> What makes you think so?
> That you did not get a reply within 40 minutes?
>
Hi,
I've already sent this mail to linux-ha but that list seems to be dead:
We are experimenting with DRBD and Pacemaker and have seen several times that
the DRBD part is degraded (one node is Outdated or Diskless or something
similar), but crm_mon just reports that the DRBD resource runs as master
and slav
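When that happens, the actual replication state can be checked directly on either node, independent of what crm_mon shows; for example (assuming the resource is called r0):

  cat /proc/drbd            # overall connection and disk states
  drbdadm cstate r0         # connection state of resource r0
  drbdadm dstate r0         # disk state (e.g. UpToDate/Outdated/Diskless)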