Carlos Molina:
> ruslan usifov writes:
>
>> I solved this problem! On one node I found the following error message in
>> the log: "slv009 peer is not part of our cluster". So I stopped Pacemaker
>> on that host (I use v1 for Pacemaker):
>> /etc/pacemaker stop
>> /etc/corosync stop
2012/4/18 Andreas Kurz
> On 04/17/2012 09:31 PM, ruslan usifov wrote:
> >
> >
> > 2012/4/17 Proskurin Kirill <mailto:k.prosku...@corp.mail.ru>
> >
> > On 04/17/2012 03:46 PM, ruslan usifov wrote:
> >
> > 2012/4/17 Andreas
2012/4/17 Proskurin Kirill
> On 04/17/2012 03:46 PM, ruslan usifov wrote:
>
>> 2012/4/17 Andreas Kurz <mailto:andr...@hastexo.com>
>>
>>
>>On 04/14/2012 11:14 PM, ruslan usifov wrote:
>> > Hello
>> >
2012/4/17 Andreas Kurz
> On 04/14/2012 11:14 PM, ruslan usifov wrote:
> > Hello
> >
> > I removed 2 nodes from the cluster with the following sequence:
> >
> > crm_node --force -R
> > crm_node --force -R
> > cibadmin --delete --obj_type nodes --crm_xml '&
Hello
I removed 2 nodes from the cluster with the following sequence:
crm_node --force -R
crm_node --force -R
cibadmin --delete --obj_type nodes --crm_xml ''
cibadmin --delete --obj_type status --crm_xml ''
cibadmin --delete --obj_type nodes --crm_xml ''
cibadmin --delete --obj_type status --crm_xml ''
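For illustration, the same sequence written out in full for a Pacemaker 1.x cluster might look roughly like the sketch below; the node name slv009 and the XML payloads are assumed for the example, not taken from the original message.
# remove the node from the cluster membership (node name, or numeric node id
# depending on the stack; hypothetical example)
crm_node --force -R slv009
# delete the node's entry from the CIB configuration section
cibadmin --delete --obj_type nodes --crm_xml '<node uname="slv009"/>'
# delete the node's status section so it does not reappear as OFFLINE
cibadmin --delete --obj_type status --crm_xml '<node_state uname="slv009"/>'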
es
too :-((. So at one point there was not a single live mon in the cluster
2012/3/16 ruslan usifov
>
>
> 2012/3/16 Florian Haas
>
>> On Fri, Mar 16, 2012 at 10:13 AM, ruslan usifov
>> wrote:
>> > Hello
>> >
>> > I am searching for a solution for a scalable block device (a disk
2012/3/16 Florian Haas
> On Fri, Mar 16, 2012 at 10:13 AM, ruslan usifov
> wrote:
> > Hello
> >
> > I am searching for a solution for a scalable block device (a disk that can
> > be extended if we add machines to the cluster). The only thing I found
> > acceptable for my task is Ceph
>
2012/3/16 Vladislav Bogdanov
> 16.03.2012 12:13, ruslan usifov wrote:
> > Hello
> >
> > I am searching for a solution for a scalable block device (a disk that can
> > be extended if we add machines to the cluster). The only thing I found
> > acceptable for my task is Ceph + RBD, bu
Hello
I am searching for a solution for a scalable block device (a disk that can be
extended if we add machines to the cluster). The only thing I found acceptable
for my task is Ceph + RBD, but Ceph in my tests is very unstable (regular
crashes of all its daemons) and has poor integration with Pacemaker. So does anybody recommend
The reason was an older version of the glib2 library; it would hang in pthread_mutex_lock
2012/3/12 ruslan usifov
> Hello
>
> It is not so terrible, but...
>
> When we use the UDPU transport in corosync, a call to crm ra classes will
> hang, with no suspicious entries in the log. As I underst
Hello
It is not so terrible, but...
When we use the UDPU transport in corosync, a call to crm ra classes will
hang, with no suspicious entries in the log. As I understand it, what really
hangs is lrmadmin, with the following trace:
brk(0) = 0x9b91000
brk(0x9bb2000)
2012/2/27 Andrew Beekhof
> On Sat, Feb 25, 2012 at 8:28 AM, ruslan usifov
> wrote:
> > I solved this problem!
> >
> >
> > On one node I found the following error message in the log:
> >
> > slv009 peer is not part of our cluster
> >
> > So
/pengine dir, then restarted the cluster on that node, and voilà, everything
began working as expected.
But I still have a question: why does this happen? Why do nodes begin to think
that other nodes are not part of the cluster?
2012/2/24 ruslan usifov
> Hello
>
> I have a 3-node cluster setup. After upgrade
Hello
I have a 3-node cluster setup. After upgrading the OS, one node is
permanently in the OFFLINE state.
OS: Ubuntu 10.04
pacemaker: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
On the OFFLINE node I see the following in the log:
Feb 24 20:27:45 slv009 crmd: [9125]: info: do_dc_release: DC role released
Hello
I installed the Ubuntu packages from this place (before that I installed my
own self-built packages):
https://launchpad.net/~ubuntu-ha-maintainers/+archive/ppa?field.series_filter=lucid
So yes.
2012/1/3 Andrew Beekhof
> On Thu, Dec 22, 2011 at 9:19 AM, ruslan usifov
> wrote:
> > Hel
Also, sometimes the backtrace looks like this:
#0 0x7f2b3b9e0464 in __lll_lock_wait () from /lib/libpthread.so.0
#1 0x7f2b3b9db5d9 in _L_lock_953 () from /lib/libpthread.so.0
#2 0x7f2b3b9db3fb in pthread_mutex_lock () from /lib/libpthread.so.0
#3 0x7f2b3c2d3cf6 in g_main_context_fin
Hello
I upgraded the cluster from Pacemaker 1.0.11 to Pacemaker 1.1.6, and sometimes
lrmd segfaults on all nodes with the following backtrace:
#0 0xb77f3430 in __kernel_vsyscall ()
#1 0xb73e6af9 in __lll_lock_wait () from /lib/tls/i686/cmov/libpthread.so.0
#2 0xb73e213b in _L_lock_748 () from /lib/tl
2011/7/5 Andrew Beekhof
> On Mon, Jul 4, 2011 at 11:42 PM, ruslan usifov
> wrote:
> >
> >
> > 2011/6/27 Andrew Beekhof
> >>
> >> On Tue, Jun 21, 2011 at 10:22 PM, ruslan usifov <ruslan.usi...@gmail.com>
> >> wrote:
2011/6/27 Andrew Beekhof
> On Tue, Jun 21, 2011 at 10:22 PM, ruslan usifov
> wrote:
> > No, I mean that in this constraint:
> >
> > location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
> > rule role="slave" -inf: #uname ne drbd3
> >
> >
In this configuration ms_drbd_web-U is a stacked resource (for backup purposes),
and its slave part must run only on a dedicated host (it can't migrate to another).
2011/6/27 Andrew Beekhof
> On Tue, Jun 21, 2011 at 10:22 PM, ruslan usifov
> wrote:
> > No, I mean that in this constraint:
Thanks for the reply. I have another question: does Pacemaker have a "What's
new" document where it is possible to see all the features of a version?
2011/6/21 Andrew Beekhof
> 1.1.x
>
> On Thu, Jun 2, 2011 at 8:40 PM, ruslan usifov
> wrote:
> > Hello
> >
> >
tries to start actions on the Slave part of
resource ms_drbd_svn-U on node storage1. This looks like a bug, or perhaps
I misunderstand something.
2011/6/21 Andrew Beekhof
> On Fri, Jun 17, 2011 at 9:31 AM, ruslan usifov
> wrote:
> > Andrew, is there any chance to fix this behaviour? Now this
Andrew, is there any chance to fix this behaviour? Now this constraint doesn't
work:
location ms_drbd_web_slave_on_backup0 ms_drbd_web-U \
rule $id="ms_drbd_web_slave_on_backup0-rule" $role="Slave" -inf:
#uname ne backup0
2011/6/8 ruslan usifov
> I want exact
You need the following location constraint:
location ms_drbd_web_on_storage0_or_storage1 ms_drbd_web \
rule -inf: \#uname ne storage0 and \#uname ne storage1
2011/6/9 Jelle de Jong
> On 08-06-11 23:21, Klaus Darilion wrote:
> > The service (DRBD+DB) should only run either on node1 or node2. N
a bug with role="slave"?
>
> On 06/08/2011 10:56 AM, ruslan usifov wrote:
> > I tried the following:
> >
> > location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
> > rule role="slave" -inf: #uname ne drbd3
> >
> > result is identical,
I tried the following:
location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
rule role="slave" -inf: #uname ne drbd3
result is identical; Pacemaker tries to launch the slave role on other nodes :-(((
2011/6/8 Dominik Klein
> >> but when I shut down the drbd3 host, Pacemaker tries to start the slave
> >> role on other
Hello
I have the following constraint:
location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
rule role="slave" inf: #uname eq drbd3
Which, as I think, prevents the slave role from launching on all hosts except
drbd3. But when I shut down the drbd3 host, Pacemaker tries to start the slave
role on another host. How ca
etely.
> > >
> >
> > Thanks for the clarifications. Now it better explains Ruslan's case. DRBD
> > isn't installed, so it returns NOT_INSTALLED, and that's treated as
> > definitely DOWN by Pacemaker when it runs status/monitor operations. iSCSI in it'
n't find the necessary software and returns
> OCF_NOT_CONFIGURED; all RAs act this way. You have to install all the software
> used in your cluster on all nodes, even if you are not actually planning to
> run that software on some of them.
>
>
> On Tue, Jun 7, 2011 at 8:02 AM, ruslan usifov
ter you have to make sure that all RAs
> that you use can run on that 3rd node properly.
>
> On Tue, Jun 7, 2011 at 1:58 AM, ruslan usifov wrote:
>
>> Hello
>>
>> I have a 3-node cluster (in the future we will add one more node) with the
>> following configuration:
>>
Hello
I have a 3-node cluster (in the future we will add one more node) with the
following configuration:
crm(live)configure# show
node drbd1
node drbd2
node drbd3
primitive drbd_web ocf:linbit:drbd \
params drbd_resource="web" \
op monitor interval="10s" timeout="60s"
primitive drbd_web-U ocf:l
Hello
Which split-brain recovery policy is better to use when DRBD works as
master/slave?
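For reference, a commonly used conservative set of automatic recovery policies lives in the DRBD net section; this is a generic sketch, not a recommendation tied to the setup above.
net {
        # neither side was Primary at split-brain time: the host with no
        # changes is simply overwritten by the other
        after-sb-0pri discard-zero-changes;
        # one side was Primary: keep the Primary, discard the Secondary's changes
        after-sb-1pri discard-secondary;
        # both sides were Primary: do not auto-resolve, just disconnect
        after-sb-2pri disconnect;
}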
Hello
Is it possible to have multiple CIBs in one cluster installation? I will try to
describe what I want:
For example, we have multiple independent resources (by a resource I mean here
an IP address + file system + service (like apache, nginx, etc.)). Now, as they
are placed in one CIB configuration (crm config
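A cluster has a single CIB; as a sketch of the usual way to keep such independent stacks separate inside that one CIB, each stack can be modelled as its own group (all resource names below are hypothetical):
# each group is colocated, ordered, started and moved as one unit,
# independently of the other groups
group g_web  ip_web  fs_web  srv_apache
group g_blog ip_blog fs_blog srv_nginx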
Hello
I have one question: which Pacemaker version is preferable to use, 1.1 or 1.0?
1.0 is marked as stable, but all documentation resources refer to version
1.1. I'm a little bit confused
Hello
How can I separate RA output from the common Pacemaker log output? Now I have
the following logging conf (corosync.conf):
logging {
        fileline: off
        function_name: on
        timestamp: on
        to_stderr: no
        to_logfile: yes
        to_syslog: no
        logfile: /var/log/clust
Hello
As said here
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-intro-redundancy.html
"Pacemaker allowing several active/passive clusters to be combined and share
a common backup node". But how do I implement such a configuration? The
Clusters from Scratch manual doesn't hold
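As a rough sketch of that N+1 idea in crm syntax, with hypothetical node and group names (two active/passive service groups, each preferring its own node, and node3 acting as the shared backup):
# each service group prefers its own primary node ...
location l_prefer_a g_service_a 100: node1
location l_prefer_b g_service_b 100: node2
# ... and, in a symmetric cluster, may fail over to any remaining node,
# so the shared backup node3 needs no extra constraint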
> Hi,
>
> 03.04.2011 22:42, ruslan usifov wrote:
> > You need some tuning from both sides.
> > First, (at least some versions of) ietd needs to be blocked (-j DROP)
> > with iptables on restarts. That means you should block all incoming
> > and outgoing
On 3 April 2011 at 10:53, Vladislav Bogdanov wrote:
> Hi,
>
> You need some tuning from both sides.
> First, (at least some versions of) ietd needs to be blocked (-j DROP)
> with iptables on restarts. That means you should block all incoming and
> outgoing packets (the latter is more impo
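As an illustration of that advice, assuming the default iSCSI port 3260, the restart could be wrapped roughly like this:
# block iSCSI traffic in both directions before restarting ietd
iptables -I INPUT  -p tcp --dport 3260 -j DROP
iptables -I OUTPUT -p tcp --sport 3260 -j DROP
# ... restart ietd here ...
# then remove the temporary rules again
iptables -D INPUT  -p tcp --dport 3260 -j DROP
iptables -D OUTPUT -p tcp --sport 3260 -j DROP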
Hello
I have 2 nodes which manage iSCSI targets that are exported for svn and web
server usage.
But I have the following problem: when one of the nodes is dead (when I do
"shutdown -P now") and resource migration happens from one node to another
(ipaddr, iscsi target and so on), on the iSCSI initiator side I oft
I have two internet providers, with one of them preferred (the main provider),
and I want that when the main provider is down, the second provider takes over
and the internet still works. I see this roughly like:
define 2 primitives through "ocf:pacemaker:pingd", and also define two
"ocf:heartbeat:Route" with default r
2010/12/20 Andrew Beekhof
> Actually the libxml2 guy and I figured out that problem just now.
> I can't find any memory corruption, but on the plus side, it does seem
> that 1.1 is unaffected - perhaps you could try that until I manage to
> track down the problem.
>
> Sorry, I did not understand that well
2010/12/11 Andrew Beekhof
> On Fri, Dec 10, 2010 at 4:59 PM, ruslan usifov
> wrote:
> > And what should I do?
>
> Nothing yet, there looks to be some memory corruption going on.
> With that file I've been able to reproduce locally. I'll let you know
> when t
Hi
Is it possible to use Pacemaker based on corosync in cloud hosting like
Amazon or SoftLayer?
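Clouds such as EC2 generally do not route IP multicast, so the usual approach is corosync's unicast (UDPU) transport with an explicit member list; a minimal totem sketch for corosync 1.4+, with made-up private addresses:
totem {
        version: 2
        # unicast instead of the default multicast
        transport: udpu
        interface {
                ringnumber: 0
                bindnetaddr: 10.0.0.0
                # every cluster node has to be listed explicitly
                member {
                        memberaddr: 10.0.0.11
                }
                member {
                        memberaddr: 10.0.0.12
                }
        }
}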
And what should I do?
2010/12/10 Andrew Beekhof
> On Fri, Dec 10, 2010 at 11:16 AM, ruslan usifov
> wrote:
> > you mean something like this:
> >
> > Dec 07 15:14:05 storage1 crmd: [16003]: notice: save_cib_contents: Saved
> > CIB contents after PE
at 10:18 AM, ruslan usifov
> wrote:
> > I don't know how to see the version of pacemaker; crm doesn't provide a -v
> > (or -V or --version) option, but I got the source from here
> > http://hg.clusterlabs.org/pacemaker/stable-1.0/archive/tip.tar.bz2, as
> > r
I don't know how to see the version of Pacemaker; crm doesn't provide a -v (or
-V or --version) option, but I got the source from here:
http://hg.clusterlabs.org/pacemaker/stable-1.0/archive/tip.tar.bz2, and as a
result I downloaded Pacemaker-1-0-b0266dd5ffa9.tar.bz2,
and here is my backtrace:
gdb /usr/lib/heartbea
following:
order o1 inf: ms_drbd_web iscsi
After that, Pacemaker segfaults on one node (in my case storage1).
As I understand it, Pacemaker tries to commit the bad changes and faults; how
can I discard these changes?
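If the problem really is that freshly added order constraint, a sketch of how it could be removed again (o1 is the constraint id from the message above):
# drop the offending order constraint from the live configuration
crm configure delete o1
# or, equivalently, directly against the CIB
cibadmin --delete --obj_type constraints --crm_xml '<rsc_order id="o1"/>'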
2010/12/9 Andrew Beekhof
> On Wed, Dec 8, 2010 at 12:26 PM, ruslan usi
Hello
I have a 2-node cluster with the following conf:
node storage0
node storage1
primitive drbd_web ocf:linbit:drbd \
params drbd_resource="web" \
op monitor interval="30s" timeout="60s"
primitive iscsi_ip ocf:heartbeat:IPaddr2 \
params ip="192.168.17.19" nic="eth1:1" cidr_netmask
I built Pacemaker from the latest source and the problem is gone.
2010/12/6 Dejan Muhamedagic
> Hi,
>
> On Mon, Dec 06, 2010 at 03:11:03PM +0300, ruslan usifov wrote:
> > hello
> >
> > I run Pacemaker on Ubuntu (Ubuntu 10.04.1 LTS) with corosync; I installed
> > it f
Hello
I run Pacemaker on Ubuntu (Ubuntu 10.04.1 LTS) with corosync; I installed it
from apt, and my pacemaker version is:
r...@storage0:/var/log# dpkg -l | grep 'pacemaker'
ii  pacemaker  1.0.8+hg15494-2ubuntu2  HA cluster resource manager
and have the following proble