https://issues.apache.org/jira/browse/ZOOKEEPER-1356 (which was closed as a
duplicate of https://issues.apache.org/jira/browse/ZOOKEEPER-338) is relevant
to this... It's a zk client issue, and there are things you can do to avoid
having to reconfigure the clients while you're bouncing them (CNAMEs and
t[...]ne can finish.
Correct - heavy client GC leads to numerous problems. There are two
things you can do:
1) Tune the client JVM better to get GC down to a more reasonable level
2) Increase the zookeeper session timeout value (this is generally a
work-around for #1, but it can buy you time to dig into it)
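If you go the timeout route, for the 0.8 high-level consumer that's a
consumer property, and the GC side is plain JVM tuning. A sketch, with
illustrative values rather than recommendations (MyConsumer is a stand-in
for your consumer app):

  # consumer.properties - zookeeper session timeout (default is 6000 ms)
  zookeeper.session.timeout.ms=30000

  # client JVM, e.g. for a standalone consumer app
  java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC MyConsumer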
--
Dave D
That information is in that node, not under it (you want a get() instead
of a get_children())...
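For example, with zkCli.sh (the group/topic/partition path here is
hypothetical, and the stat lines that get prints are trimmed):

  [zk: localhost:2181(CONNECTED) 0] get /consumers/mygroup/offsets/mytopic/0
  42
  [zk: localhost:2181(CONNECTED) 1] ls /consumers/mygroup/offsets/mytopic/0
  []

The offset lives in the znode's data, so ls (the get_children analogue)
shows nothing under it.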
--
Dave DeMaagd | S'aite Reliability Engineering, Y'all
ddema...@linkedin.com | 818 262 7958
(davidmontgom...@gmail.com - Thu, Feb 13, 2014 at 06:09:07AM +0800)
> Hi,
>
> I am using kafka 0.8.
>
>
You can use either the MaxLag MBean (0.8):
http://kafka.apache.org/documentation.html#monitoring
Or the ConsumerOffsetChecker (0.7 or 0.8, can't seem to find a doc
reference for it):
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker ...
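A fuller invocation looks something like this (the group name and ZK
connect string are placeholders, and the flags are from memory of the 0.8
tool):

  ./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
      --zkconnect localhost:2181 --group mygroup

That prints, per partition, the consumed offset, the log end offset, and
the lag between them.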
--
Dave DeMaagd | S'aite Reliability Engineering,
I've also used jolokia, http://jolokia.org/, though it can get a little slow
to respond if you don't use it right, and I've rolled a JMX/HTTP 'data
dumper' from scratch (it can be done in a couple hundred lines of Java
without too much issue)...
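A very rough sketch of the 'dump everything' half of such a tool (the JMX
URL, port, and MBean pattern are assumptions for illustration):

  import java.util.Set;
  import javax.management.MBeanAttributeInfo;
  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class JmxDump {
      public static void main(String[] args) throws Exception {
          // Remote JMX endpoint (hypothetical host/port)
          JMXServiceURL url = new JMXServiceURL(
              "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
          try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
              MBeanServerConnection conn = jmxc.getMBeanServerConnection();
              // Grab every MBean in the kafka domains and dump its attributes
              Set<ObjectName> names =
                  conn.queryNames(new ObjectName("kafka*:*"), null);
              for (ObjectName name : names) {
                  for (MBeanAttributeInfo attr :
                          conn.getMBeanInfo(name).getAttributes()) {
                      try {
                          System.out.println(name + " " + attr.getName()
                              + " = " + conn.getAttribute(name, attr.getName()));
                      } catch (Exception e) {
                          // some attributes aren't readable; skip those
                      }
                  }
              }
          }
      }
  }

Bolting an embedded HTTP listener on top of that is most of the remaining
work.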
--
Dave DeMaagd
ddema...@linkedin.com | 818 262 7958
The danger of using a size-based rollover (unless you set the size and
rollover count to be fairly high) is that in case of problems, the actual
cause of the problem might get rolled off the end by the time you get to
it (kafka can be very chatty in some kinds of failure cases). That is
probably the [...]
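For contrast, a time-based setup looks something like this in
log4j.properties (the appender name, path, and patterns are illustrative,
close to the stock kafka config):

  log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.kafkaAppender.File=/var/log/kafka/server.log
  log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n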
[...] hour has passed since the short downtime and I still see the exception in
> kafka service logs.
>
> Thanks,
> Vadim
Getting kafka.common.NotLeaderForPartitionException for a time after a
node is brought back online (especially if it was a short downtime) is
normal - that is because the consumers have not yet completely picked up
the new leader information. It should settle shortly.
--
Dave DeMaagd
ddema...@l
The lost+found directory is part of the Linux extN filesystem semantics,
and yes, it would be a terrible idea to try to remove it - it is
automatically there at the top level of a disk mount point.
Because its being there will mess up kafka, it is a good idea to create a
subdirectory there that [...]
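For example (the mount point is a placeholder; log.dirs is the 0.8
property name, log.dir in 0.7):

  # lost+found sits at the top of the mount, so point kafka one level down
  mkdir /mnt/disk1/kafka-logs

  # server.properties
  log.dirs=/mnt/disk1/kafka-logs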
I think there are really two angles to look at this from...
1) What is 'important' to monitor? Meaning, what subset of these are
important/critical for being able to tell system health (things you want
to set alerts on), and what subset are nice to have for overall health and
capacity planning (things [...]
It's worth noting that we currently run Kafka at LinkedIn with a 5G heap
(not 3G - still using the CMS GC, though; we should update that), and the
info on that wiki is aimed at 0.7.
We are actively working on settings for 0.8 - we don't have a 'this works
for us' there yet, much less a 'recommendation'.
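Purely as an illustration of where such settings would go, assuming the
KAFKA_HEAP_OPTS / KAFKA_JVM_PERFORMANCE_OPTS hooks in the 0.8 start
scripts (example values, not our recommendation):

  export KAFKA_HEAP_OPTS="-Xms5g -Xmx5g"
  export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC"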
The zookeeper connections are persistent, so it depends on the number of
clients more than the data flow rate on the producer side. If you are
using a VIP-based producer, then there is no connection from the
producer process to zookeeper at all. If you are using a zookeeper-based
producer, then yo[...]
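To make the two modes concrete (property names are from the 0.7-era
producer config; the hosts are placeholders):

  # VIP-based producer - no producer-side ZK connection
  broker.list=0:kafka-vip.example.com:9092

  # zookeeper-based producer - one persistent ZK session per producer
  zk.connect=zk1.example.com:2181,zk2.example.com:2181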
If you're using the zkCli.sh, something like this will create the
namespace:
[zk: localhost:12913(CONNECTED) 1] create /namespace ''
Created /namespace
If you're using another interface, the actual command may vary.
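Once that znode exists, clients can treat it as a chroot by appending it
to the connect string (host/port as in the example above):

  zk.connect=localhost:12913/namespace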
--
Dave DeMaagd
ddema...@linkedin.com | 818 262 7958
(casey.sybra...@six3syst