Thank you for the input. The way I originally set up the Replicated
LevelDB servers was to create 2 sets of 3 hosts:
1a 1b 1c
2a 2b 2c
So the "1" and "2" clusters each had one master and two slaves, and then I
networked the "1" and "2" clusters together. My intent was to create a highly
redundant setup.
So 15 seconds sounds really low, although I'm not sure of all the various
timeout settings in NFS.
Specifically here, the timeout of concern is the release of a lock held by a
client. The higher the timeout, the lower the likelihood of two clients
obtaining the same lock, but the slower failover becomes.
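For reference, the broker-side counterparts of this trade-off are the shared-file
locker's keep-alive and lock-acquire intervals. A rough, untested sketch using
ActiveMQ's Java broker API (the broker name, store path, and 5-second values are
placeholders; the same settings can also be set in activemq.xml):

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.SharedFileLocker;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class SharedStoreBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker-1a");
        broker.addConnector("tcp://0.0.0.0:61616");

        // KahaDB store on the NFS mount shared by the master and slave(s).
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("/var/log/activemq/activemq-data/kahadb"));

        // SharedFileLocker is the usual locker for a shared store; it is created
        // explicitly here only so the timing knobs are visible.
        SharedFileLocker locker = new SharedFileLocker();
        locker.setLockAcquireSleepInterval(5000); // how often a waiting slave retries the lock (ms)
        kahaDB.setLocker(locker);
        kahaDB.setLockKeepAlivePeriod(5000);      // how often the master re-checks that it still holds the lock (ms)

        broker.setPersistenceAdapter(kahaDB);
        broker.start();
        broker.waitUntilStopped();
    }
}

Lower values mean a slave can take over sooner, at the cost of more frequent lock
traffic against the NFS server.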
I have seen a similar scenario, which I described in a mail to the user list:
http://activemq.2283324.n4.nabble.com/Failover-very-slow-with-kahadb-while-restart-of-master-is-fast-tp4707500.html
Do you have any idea why the failover is slower than a simple restart of
the master?
Christian
On Tue, Mar 1, 2016 at 7:41 AM, Tim Bain wrote:
> Another possibility: the paths that each broker uses to reach the lock file
> don't resolve to the same file in NFS.
>
In my case they resolve to the same server IP and export path.
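One way to verify that beyond comparing server IPs and export paths is to have
the two hosts leave a marker for each other in the store directory. A small
diagnostic sketch (the class name and marker file name are made up for
illustration):

import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SharedPathCheck {
    // Run the same command on each broker host, passing the store directory
    // exactly as that host's configuration spells it. If the second host sees
    // the marker written by the first, the two paths really are the same
    // exported directory.
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args[0]);
        Path marker = dir.resolve("path-check.txt");
        if (Files.exists(marker)) {
            System.out.println("marker already present, written by: "
                    + new String(Files.readAllBytes(marker), StandardCharsets.UTF_8));
        } else {
            String host = InetAddress.getLocalHost().getHostName();
            Files.write(marker, host.getBytes(StandardCharsets.UTF_8));
            System.out.println("wrote marker as " + host
                    + "; now run the same command on the other broker host");
        }
    }
}

If each host ends up writing its own marker instead of finding the other's, the
brokers are locking two different directories and the store isn't actually shared.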
Another possibility: the paths that each broker uses to reach the lock file
don't resolve to the same file in NFS.
On Mar 1, 2016 8:29 AM, "artnaseef" wrote:
> So something is very wrong then. NFS should *not* allow two NFS clients to
> obtain the same lock.
>
> Three possible explanations come to mind:
On Tue, Mar 1, 2016 at 7:02 AM, artnaseef wrote:
> So something is very wrong then. NFS should *not* allow two NFS clients to
> obtain the same lock.
>
> Three possible explanations come to mind:
>
> * The lock file is getting incorrectly removed (I've never seen ActiveMQ
> cause this)
> * There is a flaw in the NFS locking implementation itself
As far as a broker properly cleaning up when it is active and loses the lock --
first off, that's a very rare scenario. With that said, there's no way to
guarantee a completely clean hand-off at that point. The cause of such a
scenario would be a drop in network communication between that broker and the
NFS server.
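For context, the mechanism in play here is a periodic keep-alive: the active
broker re-checks its lock on an interval and stops itself if the check fails. A
simplified illustration of that pattern (not ActiveMQ's actual code; StoreLock is
a made-up interface):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LockKeepAlive {

    // Hypothetical stand-in for "do we still hold the store lock?"
    public interface StoreLock {
        boolean stillHeld();
    }

    // The active node periodically re-checks that it still holds the store lock
    // and shuts itself down if the check fails.
    public static void start(StoreLock lock, Runnable shutdownBroker, long periodMs) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            if (!lock.stillHeld()) {
                // We can no longer prove we own the store; stop before another
                // node starts writing to it.
                shutdownBroker.run();
                scheduler.shutdown();
            }
        }, periodMs, periodMs, TimeUnit.MILLISECONDS);
    }
}

The gap between losing the lock and the next keep-alive check is exactly the
window in which a perfectly clean hand-off can't be guaranteed.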
So something is very wrong then. NFS should *not* allow two NFS clients to
obtain the same lock.
Three possible explanations come to mind:
* The lock file is getting incorrectly removed (I've never seen ActiveMQ
cause this)
* There is a flaw in the NFS locking implementation itself
* The NFSv4 t
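To narrow down which of these it is, it can help to take ActiveMQ out of the
picture and probe the file locking directly; as far as I can tell, the
shared-file locker ultimately relies on a plain java.nio FileLock over the
mount. A minimal probe along those lines (the file path is just an example):

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class NfsLockProbe {
    // Run this from two different hosts against the same NFS-mounted file.
    // Only one run should print "lock acquired"; the other should see null.
    public static void main(String[] args) throws Exception {
        File file = new File(args[0]); // e.g. /mnt/amq-store/test-lock
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
            FileLock lock = raf.getChannel().tryLock();
            if (lock != null) {
                System.out.println("lock acquired; holding it for 60 seconds");
                Thread.sleep(60_000);
                lock.release();
            } else {
                System.out.println("tryLock returned null: the lock is held elsewhere");
            }
        }
    }
}

If both hosts report the lock acquired at the same time, the problem is in the
NFS locking layer or the mount options rather than in the broker.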
That's better than the impression I'd gotten last time I investigated the
question.
Do you get more useful information at DEBUG? And do you get the same
behavior if you wait to start 2c till 2a is fully up?
On Mon, Feb 29, 2016 at 7:08 PM, artnaseef wrote:
> Something sounds very wrong there.
On Mon, Feb 29, 2016 at 7:08 PM, artnaseef wrote:
> Something sounds very wrong there. The NFS lock file should prevent more
> than one broker from writing to the store at a time.
>
> Is all of /var/log/activemq/activemq-data/ shared across all of the
> brokers?
>
Hi,
Everything under
/var/log/act
Are you sure that the code will ensure a graceful and speedy shutdown when
the broker loses a lock but stays up? Last time I looked at this
(admittedly, not in all that much detail, and I've meant to make time for
that ever since and haven't done so), I came away with the impression that
the locki
Something sounds very wrong there. The NFS lock file should prevent more
than one broker from writing to the store at a time.
Is all of /var/log/activemq/activemq-data/ shared across all of the brokers?
This is interesting. When I used a purposefully slow set of
three brokers sharing an NFSv4 mount, I found that it's very,
very easy to get them into a bad state.
As simple a procedure as starting them in sequence and
then restarting them in the same sequence nets me errors
like:
2016-02-29 19:17:46,
64GB is a very large server in my experience. Many use-cases do not require
this much memory, although some do. In fact, I've seen 2GB servers perform
very well - again, for specific use-cases.
As far as swapping - most Linux servers I've seen in the last 5 years
(longer really) are configured w
No idea on the mount settings - that was a while back. But again, I suspect
even the default NFS mount settings would work.
My recommendation here - create a test setup, and perform some load tests.
Tweak settings as desired and try again. If that feels inadequate (for
example, there are conce
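For what it's worth, the load test can start out very small. A rough sketch of a
JMS producer against a failover URL (the broker hostnames, queue name, and
message count are placeholders):

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        // Failover URL so the producer keeps going while the active broker is killed.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("LOAD.TEST"));

        long start = System.currentTimeMillis();
        for (int i = 0; i < 100_000; i++) {
            // Messages are persistent by default, so they exercise the shared store.
            producer.send(session.createTextMessage("message-" + i));
        }
        System.out.println("sent 100000 messages in "
                + (System.currentTimeMillis() - start) + " ms");
        connection.close();
    }
}

Killing the active broker partway through the run, and timing how long the
producer stalls, gives a concrete failover number to compare against the NFS and
locker timeout settings.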
Hi Jim,
I am currently running two AMQ servers (version 5.12) with NFSv4 in production.
The failover from one to the other happens seamlessly without any issues. As
Art stated, NFSv4 is a must for failover to work properly. I have the
following recommendations to achieve a quick failover:
1) It
Hi,
Thank you for the reply. Do you happen to know what the mount settings
actually are in your setup?
Yes, we are using nfs4 for this.
Jim
On Thu, Feb 25, 2016 at 20:21 artnaseef wrote:
> I've used it successfully more than once without any specific tuning to
> NFS.
> With that said, systems
I've used it successfully more than once without any specific tuning to NFS.
With that said, systems groups maintained the filesystem, so I may simply be
unaware of any tuning that was done.
Note that you'll need NFSv4 for full H/A; NFSv3 clients hold locks
indefinitely when they drop off the server's network (i.e., the lock is never
released, so a standby broker can never take over).