redis-server is running (as slave)
--
On Mon, Nov 12, 2012 at 3:37 PM, David Vossel wrote:
> ----- Original Message -----
> > From: "Cal Heldenbrand"
> >
need any more info on my config!
Thank you,
--Cal
On Fri, Nov 9, 2012 at 4:52 PM, David Vossel wrote:
>
>
> ----- Original Message -----
> > From: "Cal Heldenbrand"
> > To: "The Pacemaker cluster resource manager" <
> pacemaker@oss.clusterlabs.org>
Thanks David!
--Cal
On Fri, Nov 9, 2012 at 4:52 PM, David Vossel wrote:
>
>
> ----- Original Message -----
> > From: "Cal Heldenbrand"
> > To: "The Pacemaker cluster resource manager" <
> pacemaker@oss.clusterlabs.org>
> > Sent: Fri
Hi everyone,
I'm playing around with the possibility of using Pacemaker in a Redis
master/slave cluster. The difficult part is that Redis will not
automatically flip itself from read-only slave mode into master mode. A
client needs to connect to the slave server and run the SLAVEOF NO ONE
command.
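Since Redis will not promote itself, an agent has to issue SLAVEOF NO ONE
against the slave. A minimal sketch of that promotion step (the function
name and the host/port are illustrative, not from the thread):

```shell
# Build the command a "promote" action would issue against a Redis slave.
# promote_cmd, host, and port here are illustrative placeholders.
promote_cmd() {
  printf 'redis-cli -h %s -p %s SLAVEOF NO ONE' "$1" "$2"
}

# Against a live slave this would be run as:
#   eval "$(promote_cmd 127.0.0.1 6379)"
echo "$(promote_cmd 127.0.0.1 6379)"
# prints: redis-cli -h 127.0.0.1 -p 6379 SLAVEOF NO ONE
```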
> Just integers I'm afraid.
> The full list for OCF agents is here:
>
> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/s-ocf-return-codes.html
> LSB return codes are slightly different.
>
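Those integers are what a monitor action would hand back to Pacemaker; a
hedged sketch of how a Redis agent might map `redis-cli INFO replication`
output onto them (both helper names are made up for illustration):

```shell
# Extract the "role:" value from `redis-cli INFO replication` output.
# Reads stdin so it can be exercised without a live server.
redis_role() {
  grep '^role:' | cut -d: -f2 | tr -d '\r'
}

# Map a replication role to an OCF monitor return code.
role_to_ocf() {
  case "$1" in
    master) echo 8 ;;  # OCF_RUNNING_MASTER
    slave)  echo 0 ;;  # OCF_SUCCESS: running, but as slave
    *)      echo 7 ;;  # OCF_NOT_RUNNING
  esac
}

printf 'role:slave\n' | redis_role   # prints: slave
```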
Sure, that's good enough for me. Do you think it'd be possible to somehow
re
# crm_simulate -S --xml-file /tmp/memcached-test.xml
>
>
>
> On Thu, Oct 25, 2012 at 8:43 AM, Andrew Beekhof
> wrote:
> > On Thu, Oct 25, 2012 at 1:37 AM, Cal Heldenbrand
> wrote:
> >> Thanks Andrew! My first few attempts at playing around with th
Thanks Andrew! My first few attempts at playing around with the failure
states are working as expected.
A few follow-ups below:
> --op-fail isn't the command you want though.
> From the man page:
>
>        -i, --op-inject=value
>            $rsc_$task_$interval@$node=$rc - Inject the specified
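The `$rsc_$task_$interval@$node=$rc` spec from the man page can be assembled
mechanically; a small sketch using the resource and node names seen elsewhere
in the thread (rc 7 meaning "not running"):

```shell
# Assemble an --op-inject spec of the form $rsc_$task_$interval@$node=$rc.
inject_spec() {
  printf '%s_%s_%s@%s=%s' "$1" "$2" "$3" "$4" "$5"
}

spec=$(inject_spec memcached monitor 10000 m1.fbsdata.com 7)
echo "$spec"   # prints: memcached_monitor_10000@m1.fbsdata.com=7
# This would then be fed to the simulator, e.g.:
#   crm_simulate -S --xml-file /tmp/memcached-test.xml -i "$spec" -s
```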
with verbose flag. Maybe try with error for the exit code? Or
> try $stop and $error to see if it will show anything - I would expect
> something like a node fence from that.
>
> crm_simulate with -LS causes a seg fault for me, so I can't test :-(
>
> Jake
>
> --
(ocf::heartbeat:IPaddr2): Started m3.fbsdata.com
On Tue, Oct 23, 2012 at 12:27 PM, Jake Smith wrote:
>
> ----- Original Message -----
>
> > From: "Cal Heldenbrand"
> > To: pac
Hi everyone,
I'm not able to find documentation or examples on this. If I have a cloned
primitive set across a cluster, how can I simulate a failure of a resource
on an individual node? I mainly want to see the scores on why a particular
action is taken so I can adjust my configs.
I think the -
Hi everyone,
I'm starting to get my memcached cluster setup more operational now. But
I'm running into one small problem -- when my memcached resource check
fails, the stonith primitive isn't triggered to reset the node. It only
happens when it's loaded up enough to cause corosync to fail. When
, I don't quite understand how it should
be configured.
Thanks!
--Cal
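The monitor-failure-to-fencing behavior asked about above can be requested
per operation with `on-fail=fence`; a hedged crm-shell sketch (the resource
name, agent class, and timings are illustrative, and this assumes a working
stonith device is already configured):

```
primitive p_memcached ocf:custom:memcached \
  op monitor interval=10s timeout=20s on-fail=fence
```

With `on-fail=fence`, a failed monitor on this resource fences the node
instead of merely restarting the service on it.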
On Fri, Jul 27, 2012 at 10:56 AM, Phil Frost wrote:
> On 07/27/2012 11:48 AM, Cal Heldenbrand wrote:
>
>> Why wouldn't my mem3 failover happen if it timed out stopping the cluster
>> IP?
>>
>
Why wouldn't my mem3 failover happen if it timed out stopping the cluster
IP?
Thank you,
--Cal
On Thu, Jul 26, 2012 at 4:09 PM, Cal Heldenbrand wrote:
> A few more questions, as I test various outage scenarios:
>
> My memcached OCF script appears to give a false positive occasionally, and
> pacemaker restarts the service. U
memcached*.
Thanks!
--Cal
On Thu, Jul 26, 2012 at 1:35 PM, Phil Frost wrote:
> On 07/26/2012 02:16 PM, Cal Heldenbrand wrote:
>
>> That seems very handy -- and I don't need to specify 3 clones? Once my
>> memcached OCF script reports a downed service, one of them
Thanks for the info Phil! I'm going to play around with my configs with
what you've recommended... but a few questions below:
can be started in any order. You don't need to specify any location
> constraints to say where memcache can run, or to keep the memcache
> instances from running multiple
Hi everybody,
I've read through the Clusters from Scratch document, but it doesn't seem
to help me very well with an N+1 (shared hot spare) style cluster setup.
My test case is: I have 3 memcache servers. Two are in primary use (hashed
50/50 by the clients) and one is a hot failover.
I hacked u
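A hedged crm-shell sketch of the clone-based layout the advice in this
thread points toward (resource name, agent class, and timings are
illustrative):

```
primitive p_memcached ocf:custom:memcached \
  op monitor interval=10s timeout=20s
clone cl_memcached p_memcached \
  meta clone-max=2 clone-node-max=1
```

`clone-max=2` keeps two active copies across the three nodes, so the third
node serves as the hot spare without needing any location constraints.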