can happen in a real-world situation.
Regards,
Yves
On 2013-02-06 05:33, Lars Marowsky-Bree wrote:
On 2013-01-19T12:19:46, Yves Trudeau wrote:
Hi,
Forget this, everything is fine. An iptables rule was missing in my
failure test.
Hi Yves,
which iptables rule was missing, if I may ask?
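For reference, corosync's totem traffic is UDP on the configured mcastport and mcastport-1 (5405/5404 out of the box), so a failure test usually blocks or re-allows something like the following; the ports here are illustrative:
iptables -A INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT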
09:29:41 2013 via crm_attribute on mys002
Stack: openais
Current DC: mys001 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
Regards,
Yves
On 2013-02-04 05:06, Jan Friesse wrote:
Andrew Beekhof wrote:
On Thu, Jan 31, 2013 at 8:10 AM, Yves Trudeau wrot
Hi,
Is there any known memory leak issue with corosync 1.4.1? I have a setup
here where corosync eats memory at a few kB a minute:
[root@mys002 mysql]# while [ 1 ]; do ps faxu | grep corosync | grep -v grep; sleep 60; done
root 11071 0.2 0.0 624256 8840 ? Ssl 09:14 0:02 corosy
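A slightly tidier way to watch only corosync's memory over time (illustrative, same 60-second interval):
while true; do ps -C corosync -o pid=,rss=,vsz=; sleep 60; done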
Hi,
Forget this, everything is fine. An iptables rule was missing in my
failure test.
Regards,
Yves
On 2013-01-18 13:24, Yves Trudeau wrote:
Hi,
learning about the Paxos protocol, I realized the problem is not with
the arbitrator but with the surviving node. Here is its debug output
kes a difference but the test VMs are 32 bits.
Regards,
Yves
On 2013-01-18 11:49, Yves Trudeau wrote:
Hi,
working on a geo-redundant setup, I uncovered a problem with booth.
In order to simplify, I did an experiment with only booth, no
pacemaker. The behavior is the same with pace
Hi,
working on a geo-redundant setup, I uncovered a problem with booth.
In order to simplify, I did an experiment with only booth, no
pacemaker. The behavior is the same with pacemaker.
Version used
git log
commit 55ab027233407fd44850f0c4905b085205d55f64
Author: Xia Li
Date
Hi,
have a look here. For various reasons, the agent in the package is
outdated.
https://github.com/jayjanssen/Percona-Pacemaker-Resource-Agents/blob/master/doc/PRM-setup-guide.rst
You'll find the howto and link to the latest agent.
Regards,
Yves
On 2012-12-17 17:52, codey koble wrote:
Hi,
Did you have a look at:
https://github.com/jayjanssen/Percona-Pacemaker-Resource-Agents/blob/master/doc/PRM-setup-guide.rst
That may be about what you want to accomplish.
Regards,
Yves
On 2012-09-24 06:18, Marcin M. Jessa wrote:
Hi.
I'm working on a MySQL cluster with one master and
Hi Nathan,
I'll see what I can do to correct the problem; thanks for looking at it.
Regards,
Yves
On 2012-09-14 14:20, Nathan Bird wrote:
I tried searching a bit and it seems like this one hasn't been reported
yet, my apologies if it has.
The RA currently issues a "RESET SLAVE" command, bu
Hi Lyle,
please enable the bash trace by creating the file
/tmp/mysql.ocf.ra.debug/log. I suspect you are missing something in
your config.
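A minimal way to do that from the shell, assuming the trace path above:
mkdir -p /tmp/mysql.ocf.ra.debug
touch /tmp/mysql.ocf.ra.debug/log
The RA will then append its bash trace to that log file.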
Regards,
Yves
On 2012-07-24 11:23, l...@netcourrier.com wrote:
Hi,
My configuration:
- centos 5.8 x86_64
- heartbeat 3.0.3
- pacemaker 1.0.12
-
Hi Mike,
my answers are embedded.
On 2012-07-18 09:24, DENNY, MICHAEL wrote:
Our current monitor action tests the availability of the MySQL database.
However, the monitor fails if MySQL is doing recovery processing, and the
recovery processing can take a long time. Do you know if th
Hi Andreas,
for my part, I looked at the stateful and drbd agents and of course
the mysql one. After a bit of pain, I got the picture right. The
tracing feature in the mysql RA is very handy.
Regards,
Yves
On 2012-06-14 10:19, Stallmann, Andreas wrote:
Hi!
Excuse my blindness; I fou
Hi,
please use the latest version of the agent and look here for
documentation:
https://github.com/jayjanssen/Percona-Pacemaker-Resource-Agents/blob/master/doc/PRM-setup-guide.rst
Regards,
Yves
On 2012-05-16 08:29, Stallmann, Andreas wrote:
Hi!
I try to get a mysql master / slave set
Hi,
the agent now does a RESET SLAVE and checks for the master log file
afterward. I think this is handled correctly.
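Conceptually, what the agent does is roughly the following; this is a sketch, not the agent's actual code:
mysql -e "RESET SLAVE;"
mysql -e "SHOW SLAVE STATUS\G" | grep -i Master_Log_File
i.e. clear the slave configuration and then verify what master log file, if any, is still referenced.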
Regards,
Yves
On 2012-05-29 10:19, Stallmann, Andreas wrote:
Hi!
MySQL has changed CHANGE MASTER TO syntax in 5.1 (IIRC), and it won't accept
an empty host argument anymo
Hi Andreas,
make sure the pid defined in pacemaker is the same as the one defined
in /etc/mysql/my.cnf. Although it may not be that, since you have the
replication info set. Have you verified that MySQL is indeed not
running? Maybe you just forgot to clean up the errors. The updated
version
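A quick way to check that alignment; the path and resource name here are illustrative only:
# /etc/mysql/my.cnf
pid-file = /var/run/mysqld/mysqld.pid
# matching parameter on the Pacemaker resource
primitive p_mysql ocf:heartbeat:mysql params pid="/var/run/mysqld/mysqld.pid" ...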
Hi,
sorry, I was on vacation... Very amusing :)
Regards,
Yves
On 12-02-10 08:04 AM, Florian Haas wrote:
On Fri, Feb 10, 2012 at 1:38 PM, Nick Khamis wrote:
May I ask where the original blog resides? The one
with the "bizerk blog comments"
http://www.lmgtfy.com/?q=percona+replication+man
Hi Mark,
nothing happens to the VIP? As it is, the agent is not monitoring
mysqld itself. In my case, I call mysqld_safe, so a kill -9 would be
handled. I am out for a few days; answers will be delayed until the end of
next week.
Regards,
Yves
On 12-02-09 06:04 PM, Mark Grennan wrote:
Y
Hi Andreas,
with NDB, you have 3 types of nodes: SQL, which is mysqld; Management,
running ndb_mgmd; and Data, running ndbmtd. SQL and Management can be on
the same physical host (cohosting), but you ideally need 2 for HA; the
ndb_mgmd process is very small and low on resources. Data nodes, for H
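As a minimal illustration of the three node types in a cluster config.ini (hostnames are placeholders, and a real config needs more than this):
[ndb_mgmd]
HostName=mgmt1
[ndbd]
HostName=data1
[ndbd]
HostName=data2
[mysqld]
HostName=sql1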
Hi Mark,
Good work. After the insert into the user table, don't forget to
perform a "flush privileges;".
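From the shell, that is simply:
mysql -e "FLUSH PRIVILEGES;"
so that the server reloads the grant tables after the direct INSERT.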
Regards,
Yves
On 12-02-08 03:10 PM, Mark Stunnenberg wrote:
Hi all,
Let's continue the discussion here, since the blog comments went berserk! ;)
Andreas, I've made a quick and dirty bl
Hi,
even if you fix the schema, sending simultaneous writes to the 2
masters will cause issues with replication. You also have no
performance gain from such a setup since both masters will have to
perform all the writes anyway. It is best to write to only one at a time.
The new mysql RA (not
Hi Andreas,
NDB requires a minimum of 4 nodes for HA... 2x SQL/mgmt nodes + 2x
data nodes. The SQL/mgmt nodes could be your Tomcat servers without
problem, but the data nodes must be other physical servers.
Regards,
Yves
On 12-01-30 04:46 AM, Stallmann, Andreas wrote:
Hi!
I’m on the
On 11-11-26 05:29 PM, Andreas Kurz wrote:
colocation writer_vip_coloc_master inf: ms_MySQL:Master writer_vip
the default is to use the role from the left resource ... writer_vip never has
the Master role; use "writer_vip:Started"
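Spelled out, the suggestion reads roughly like this (resource names as in the rule quoted above):
colocation writer_vip_coloc_master inf: ms_MySQL:Master writer_vip:Started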
Ok, I'll try this. I did find an effective workaround though.
Regards
Hi,
I am working on the mysql RA and I am hitting a strange problem
again. I need to store a cluster-wide attribute like this:
crm_attribute --type crm_config --name replication_info -s
mysql_replication -v "10.2.2.160|mysql-bin.000140|18276"
Would this cause a pengine run? Since I adde
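For reference, the stored value can be read back the same way; same attribute set assumed:
crm_attribute --type crm_config -s mysql_replication --name replication_info --query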
Hi Viacheslav,
the integration of mysql-master-HA is on my roadmap. It is not an HA
solution by itself, although it is an awesome tool to promote a master and
bring everyone to the same position. It doesn't deal with failover and
VIPs. I modified the MySQL RA to deal with read and write VIPs co
locking and/or intensive
disk IO. To counter that, people use many slaves.
On Sat, Nov 12, 2011 at 2:51 PM, Florian Haas <flor...@hastexo.com> wrote:
Hi Yves and Michael,
On 2011-11-12 19:22, Yves Trudeau wrote:
> lol... How many large databases have you ma
On 11-11-12 02:51 PM, Florian Haas wrote:
Hi Yves and Michael,
On 2011-11-12 19:22, Yves Trudeau wrote:
lol... How many large databases have you managed? Once evicted, MySQL
will be restarted by Pacemaker so all the caches will be cold.
If I may say so, before you start laughing at people on
can float to another slave in the cluster or go
offline. I thought that's what you're looking for. I can post a
config if you're interested.
Sent from my iPhone
On Nov 12, 2011, at 11:51 AM, Yves Trudeau <y.trud...@videotron.ca> wrote:
Hi,
Setting evict
="${OCF_RESKEY_evict_outdated_slaves_default}" />
On Fri, Nov 11, 2011 at 9:57 PM, Yves Trudeau <y.trud...@videotron.ca> wrote:
Because that's not enough: if a slave lags behind too much, I want
to remove the VIP but not stop the slave.
On 11-11-11 06:12
Because that's not enough: if a slave lags behind too much, I want to
remove the VIP but not stop the slave.
On 11-11-11 06:12 PM, Michael Marrotte wrote:
Why don't you simply colocate VIP's with the master/slave roles?
On Fri, Nov 11, 2011 at 5:18 PM, Yves Trudeau
Hi,
I created a fork of the resource-agents and modified the mysql RA to
support VIPs. The lack of support for read/write VIPs is something that
currently limits the use of Pacemaker to manage MySQL replication. I
ran some basic tests on the modified agent and it seems to behave sanely
so far but
Here it is: http://pastebin.com/yKJ2TLMv
Regards,
Yves
On 11-11-09 12:06 PM, Florian Haas wrote:
On 2011-11-09 17:29, Yves Trudeau wrote:
Hi Florian,
the colocation rule was an attempt to separate the clone set
members. I tried with and without swappiness, with no luck.
Here is the new
Paddr2): Started testvirtbox2
reader_vip_1:2 (ocf::heartbeat:IPaddr2): Started testvirtbox3
If there is no solution, I can always use the regular IPaddr RA with
negative colocation rules; that will work.
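Such a negative colocation rule would look roughly like this; the primitive names are hypothetical:
colocation reader_vips_apart -inf: reader_vip_a reader_vip_b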
Regards,
Yves
On 11-11-08 05:29 PM, Florian Haas wrote:
On 2011-1
Hi,
I am currently working on a replication solution, adding logic to the
mysql RA, and I need to be able to turn a virtual IP on/off based on a
node attribute (Florian's suggestion). In order to achieve this, I
created a clone set of the IPaddr2 RA and a location rule using the node
attribute set
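A minimal sketch of that approach; the IP, the attribute name and the ids are placeholders, not the agent's actual attribute:
primitive reader_vip ocf:heartbeat:IPaddr2 params ip="10.2.2.170"
clone cl_reader_vip reader_vip
location loc_reader_vip cl_reader_vip rule -inf: readable_state ne 1
i.e. a VIP clone instance is kept off any node where the attribute value is not 1.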
On 11-10-13 06:26 AM, Lars Ellenberg wrote:
On Wed, Oct 12, 2011 at 08:08:21PM -0400, Yves Trudeau wrote:
What about referring to the git repository here:
http://www.clusterlabs.org/wiki/Get_Pacemaker#Building_from_Source
http://www.clusterlabs.org/mwiki/index.php?title=Install&diff=
Ellenberg wrote:
On Wed, Oct 12, 2011 at 05:09:45PM -0400, Yves Trudeau wrote:
Hi Florian,
On 11-10-12 04:09 PM, Florian Haas wrote:
On 2011-10-12 21:46, Yves Trudeau wrote:
Hi Florian,
sure, let me state the requirements. If those requirements can be
met, pacemaker will be much more used
Hi Lars
indeed... this is much more interesting. It should be pretty easy to
port my stuff to that RA and build a configuration that fits my
requirements.
Regards,
Yves
On 11-10-12 07:21 PM, Lars Ellenberg wrote:
On Wed, Oct 12, 2011 at 05:09:45PM -0400, Yves Trudeau wrote:
Hi Florian
I found the answer about the cluster-wide attribute; it's very easy and
pretty elegant.
On 11-10-12 05:09 PM, Yves Trudeau wrote:
Hi Florian,
On 11-10-12 04:09 PM, Florian Haas wrote:
On 2011-10-12 21:46, Yves Trudeau wrote:
Hi Florian,
sure, let me state the requirements. If those requirements
Hi Florian,
On 11-10-12 04:09 PM, Florian Haas wrote:
On 2011-10-12 21:46, Yves Trudeau wrote:
Hi Florian,
sure, let me state the requirements. If those requirements can be
met, Pacemaker will be used much more to manage MySQL replication.
Right now, although at Percona I deal with many
I am wide open to discussing any Pacemaker or RA architecture/design part, but
I don't want to argue about the replication requirements; they are fundamental
in my mind.
Do not hesitate if you have questions.
Regards,
Yves
On 11-10-12 01:53 PM, Florian Haas wrote:
On 2011-10-12 19:36, Yves Tr
about it. Maybe Pacemaker cannot be used but that would be sad.
Regards,
Yves
On 11-10-12 12:59 PM, Florian Haas wrote:
Hi again,
On 2011-10-12 18:23, Yves Trudeau wrote:
Hi,
following my previous post to the wrong list, forwarded to the
Pacemaker list by Florian, here is my complete
Hi,
following my previous post to the wrong list, forwarded to the
Pacemaker list by Florian, here is my complete cluster configuration:
http://pastebin.com/zDj0MF1Z
Just to recall the original message:
I started to have issues with crm_master with Pacemaker 1.0.11. I
think I trac