The official recommendations are here:
http://kafka.apache.org/documentation.html#upgrade_10
On Fri, Sep 23, 2016 at 7:48 PM Vadim Keylis wrote:
> Hello we have a producer that is written in c language to send data to
> kafka using the 0.8 protocol. We now need to upgrade since the protocol has
> changed
Java 7 is end of life. http://www.oracle.com/technetwork/java/eol-135779.html
+1
On Tue, Aug 16, 2016 at 6:43 AM Ismael Juma wrote:
> Hey Harsha,
>
> I noticed that you proposed that Storm should drop support for Java 7 in
> master:
>
> http://markmail.org/message/25do6wd3a6g7cwpe
>
> It's usef
Hi Gwen,
I have explored and tested this approach in the past. It does not work for
2 reasons:
A. the first one relates to the ZKClient implementation,
B. the second is the JVM behavior.
A. The ZKConnection [1] managed by ZKClient uses a legacy constructor of
org.apache.zookeeper.ZooKeeper [2]. The crea
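A minimal sketch of the difference, assuming ZooKeeper 3.4.x (address and
timeout are placeholders, and real code would wait for the connection to be
established before reading the session id):

    import org.apache.zookeeper.ZooKeeper;

    public class ZkSessionSketch {
        public static void main(String[] args) throws Exception {
            // Legacy constructor, the one ZKConnection calls:
            // it always negotiates a brand-new session.
            ZooKeeper fresh = new ZooKeeper("localhost:2181", 30000,
                    event -> System.out.println("zk event: " + event));

            // The other constructor can re-attach to an existing session;
            // per the point above, ZKClient never uses it.
            ZooKeeper reattached = new ZooKeeper("localhost:2181", 30000,
                    event -> System.out.println("zk event: " + event),
                    fresh.getSessionId(), fresh.getSessionPasswd());

            reattached.close();
            fresh.close();
        }
    }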
Same here at Airbnb. Moving data is the biggest operational challenge
because of the network bandwidth cannibalization.
I was hoping that rate limiting would apply to replica fetchers too.
On Sun, Jul 3, 2016 at 15:38 Tom Crayford wrote:
> Hi Charity,
>
> I'm not sure about the roadmap. The way
Any experience with G1 for Kafka? I didn't get a chance to try it out.
On Mon, Apr 11, 2016 at 3:31 AM Jakub Neubauer
wrote:
> Hi,
> Did you consider G1 collector?
> http://docs.oracle.com/javase/7/docs/technotes/guides/vm/G1.html
> Jakub N.
>
> On 11.4.2016 12:08, jinhong lu wrote:
> > Minor G
Why such a gigantic heap? 30G.
In my experience, the Kafka broker does not have to deal with long-lived
objects; it's all about many small, ephemeral objects. Most of the data is
kept off heap.
We've been happy with a 5G heap, 2G of which goes to the new generation. The
server has 8 cores and 60GB of RAM.
Her
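For reference, a sketch of the flags behind those numbers, assuming the stock
kafka-server-start.sh (which honors KAFKA_HEAP_OPTS):

    export KAFKA_HEAP_OPTS="-Xms5g -Xmx5g -XX:NewSize=2g -XX:MaxNewSize=2g"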
I ran into the same issue today. In a production cluster, I noticed the
"Shrinking ISR for partition" log messages for a topic deleted 2 months
ago.
Our staging cluster shows the same messages for all the topics deleted in
that cluster.
Both 0.8.2
Yifan, Guozhang, did you find a way to get rid of
- This type of question is pretty common on the mailing list. Personally, I
don't think I could confidently describe the compatibility/upgrade scenarios.
Some clarification is needed, ideally in a single place.
I think this is even more crucial as the number of new features increases
(which is exciting)
-
Hi Ismael,
could you elaborate on "newer clients don't work with older brokers
though"? Doc pointers are fine.
I was under the impression that I could use the 0.9 clients with 0.8 brokers.
thanks
Alexis
On Mon, Mar 21, 2016 at 2:05 AM Ismael Juma wrote:
> Hi Allen,
>
> Answers inline.
>
> On Mon
problem and trying to improve
> it; as for packaging / configs etc I think the goal of the OS Kafka package
> itself is to provide universal and simple barebone scripts for operations,
> and users can wrap / customize themselves as they need.
>
> Guozhang
>
> On Sun, Mar 6, 2016
- To understand what the stuck consumer is doing, it would be useful to
collect the logs and a thread dump. I'd try to find out what the fetcher
threads are doing. What about the handler/application/stream threads?
- Are the offsets committed? After a restart, could it be that the consumer
is just
Sun, Mar 6, 2016 at 10:12 AM Alexis Midon
wrote:
> My recollection is that you have to come up with the partition assignment
> yourself, and pass the json file as an argument.
> This is quite error prone, especially during an outage.
>
> we quickly wrote kafkat to have a simple comm
try to
> resolve if there's any issues.
>
> Guozhang
>
> On Fri, Mar 4, 2016 at 3:13 PM, Alexis Midon <
> alexis.mi...@airbnb.com.invalid> wrote:
>
> > The command line tool that ships with Kafka is error prone.
> >
> > Our standard procedure is:
> &g
The command line tool that ships with Kafka is error prone.
Our standard procedure is:
1. spin up the new broker
2. use `kafkat drain [--brokers ]`
3. shut down old broker
The `drain` command will generate and submit a partition assignment plan
where the new broker id replaces the old one. It's p
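A hypothetical run, moving everything from broker 5 to broker 6 (ids made up;
see the kafkat README for the exact options):

    kafkat drain 5 --brokers 6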
Also the metrics kafka.network.RequestChannel.RequestQueueSize and
ResponseQueueSize
will give you the saturation of the network and IO threads.
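A minimal sketch of polling those gauges over JMX, assuming the broker was
started with JMX_PORT=9999. The exact ObjectName layout varies across Kafka
versions, so this just pattern-matches on the gauge name:

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class QueueSizeProbe {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                Set<ObjectName> names =
                        mbs.queryNames(new ObjectName("kafka.network:*"), null);
                for (ObjectName name : names) {
                    String gauge = name.getKeyProperty("name");
                    if (gauge != null && gauge.contains("QueueSize")) {
                        // Yammer gauges expose their current reading as "Value".
                        System.out.println(name + " = "
                                + mbs.getAttribute(name, "Value"));
                    }
                }
            }
        }
    }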
On Tue, Mar 1, 2016 at 9:21 AM Alexis Midon wrote:
> "request queue time" is the time it takes for IO threads to pick up the
>
"request queue time" is the time it takes for IO threads to pick up the
request. As you increase the load on your broker, it makes sense to see
higher queue time.
Here are more details on the request/response model in a Kafka broker
(0.8.2).
All your requests and responses are belong to RequestCha
You can fetch messages by offset.
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-FetchRequest
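If you're on the 0.9 Java consumer, the same thing is exposed as assign() +
seek(); a minimal sketch (topic, partition and offset are made up):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class FetchByOffset {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("traps", 0);
                consumer.assign(Collections.singletonList(tp));
                consumer.seek(tp, 42L); // start fetching from offset 42
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.printf("offset=%d size=%d%n",
                            record.offset(), record.value().length);
                }
            }
        }
    }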
On Fri, Feb 26, 2016 at 7:23 AM rahul shukla
wrote:
> Hello,
> I am working on an SNMP trap parsing project for my academics. I am using kafka
> message
Regarding the "Allocation Failure" messages: these are not errors, it's the
standard behavior of a generational GC. I'll let you google the details; there
are tons of resources.
for ex,
https://plumbr.eu/blog/garbage-collection/understanding-garbage-collection-logs
I believe you should stop the broke
O.java:361)
> > > > ~[zookeeper-3.4.6.jar:3.4.6-1569965]
> > > > at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> > > > ~[zookeeper-3.4.6.jar:3.4.6-1569965]
> > > >
> > > > It just keep repeating t
By "re-connect", I'm assuming that the ZK session is expired, not
disconnected.
For details see
http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkSessions
In that case, the high level consumer is basically dead, and the
application should create a new instance of it.
On Mon, F
want to clarify, the Protocol Guide says there
> should be a single compressed message, but I'm able to receive 2, 3 and more,
> all in a single MessageSet.
>
> Kafka 0.9
>
> I can provide the actual buffers received.
>
> Thanks!
>
>
> On Tue, 16 Feb 2016 at 20:01 Alexi
it will throw an exception:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/AdminUtils.scala#L76-L78
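The guard in the linked code boils down to this; a Java paraphrase of the
Scala (the real implementation throws AdminOperationException):

    // Paraphrase of the sanity checks in AdminUtils.assignReplicasToBrokers.
    static void checkAssignmentArgs(int nPartitions, int replicationFactor,
                                    int nBrokers) {
        if (nPartitions <= 0)
            throw new IllegalArgumentException(
                    "number of partitions must be larger than 0");
        if (replicationFactor <= 0)
            throw new IllegalArgumentException(
                    "replication factor must be larger than 0");
        if (replicationFactor > nBrokers)
            throw new IllegalArgumentException(
                    "replication factor: " + replicationFactor
                    + " larger than available brokers: " + nBrokers);
    }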
On Tue, Feb 16, 2016 at 3:55 PM Alex Loddengaard wrote:
> Hi Sean, you'll want equal or more brokers than your replication factor.
> Meaning, if your replication factor
0. I don't understand how deleting log files quickly relates to the file/page
cache. Consumer read patterns are the main factor here, afaik.
The OS will eventually discard unused cached pages. I'm not an expert on
page cache policies though, and I'll be happy to learn.
1. have a look at per-topic
What makes you think there are 2? would you have data or code to share?
When compression is enabled, multiple messages are packed and
compressed into a MessageSet, which is then stored as a single wrapper message.
The interface however will let you iterate over the unpacked messages. See
https://
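To see the unpacking in action with the 0.8 javaapi, something like this
sketch (messageSet is assumed to come from a SimpleConsumer fetch response):

    import kafka.javaapi.message.ByteBufferMessageSet;
    import kafka.message.MessageAndOffset;

    // Iterating a fetched set: the iterator transparently decompresses the
    // wrapper message and yields the individual inner messages one by one.
    static void dump(ByteBufferMessageSet messageSet) {
        for (MessageAndOffset entry : messageSet) {
            System.out.printf("offset=%d codec=%s payload=%d bytes%n",
                    entry.offset(),
                    entry.message().compressionCodec(),
                    entry.message().payloadSize());
        }
    }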
When porting an existing consumer group from 0.8.2 to 0.9 (clients and
brokers), is there any impact on the last committed offsets?
is the "native offset storage" feature enable by default?
On Thu, Feb 11, 2016 at 4:52 PM Jason Gustafson wrote:
> The new Java consumer in 0.9.0 will not work w
u can check the source code high level for reference
> On Sat, Sep 12, 2015 at 5:18 AM Alexis Midon
> wrote:
>
> > When a new topic is created, I agree that the regex would remain
> unchanged
> > but how would an existing consumer be notified of the topic creation?
> > af
When a new topic is created, I agree that the regex would remain unchanged
but how would an existing consumer be notified of the topic creation?
afaik there's no such notification mechanism in the High level consumer.
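For reference, the filter-based subscription I'm talking about; a minimal
sketch (ZooKeeper address, group id and pattern are placeholders):

    import java.util.List;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.consumer.Whitelist;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class RegexConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");
            props.put("group.id", "regex-demo");

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // One stream over every topic matching the regex.
            List<KafkaStream<byte[], byte[]>> streams =
                    connector.createMessageStreamsByFilter(
                            new Whitelist("events\\..*"), 1);
            for (MessageAndMetadata<byte[], byte[]> m : streams.get(0)) {
                System.out.printf("topic=%s offset=%d%n", m.topic(), m.offset());
            }
        }
    }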
On Thu, Sep 10, 2015 at 8:43 AM, tao xiao wrote:
> You can create message st
014 at 11:45 AM, Alexis Midon
> wrote:
> > distribution will be even based on the number of partitions.
> > It is the same logic as AdminUtils.
> > see
> >
> https://github.com/airbnb/kafkat/blob/master/lib/kafkat/command/reassign.rb#L39
> >
> >
kat automatically figure out the
> right reassignment strategy based on even data distribution?
>
> On Wed, Sep 3, 2014 at 12:12 AM, Alexis Midon <
> alexis.mi...@airbedandbreakfast.com> wrote:
>
> > Hi Marcin,
> >
> > A few weeks ago, I did an upgrade to 0.8.1.1 and th
looks like I can't edit the wiki page.
feel free to add https://github.com/airbnb/kafka-statsd-metrics2 to the
page.
thanks
On Tue, Sep 9, 2014 at 8:10 AM, Andrew Otto wrote:
> We use jmxtrans to pull data out of JMX.
>
>
> https://github.com/wikimedia/puppet-kafka/blob/master/kafka-jmxtrans.jso
r consumers to consume.
>
> Thanks,
>
> Bhavesh
>
>
> On Wed, Sep 3, 2014 at 2:59 PM, Alexis Midon <
> alexis.mi...@airbedandbreakfast.com> wrote:
>
> > Hi Bhavesh
> >
> > can you explain what limit you're referring to?
> > I'm a
ally try to avoid data
> loss as much as possible.
>
> Please let me know what is your opinion on this...
>
> Thanks,
>
> Bhavesh
>
>
> On Wed, Sep 3, 2014 at 6:21 AM, Alexis Midon <
> alexis.mi...@airbedandbreakfast.com> wrote:
>
> > Thanks Jun.
> >
&
compression
> ratio.
>
> The reason that we have to fail the whole batch is that the error code in
> the produce response is per partition, instead of per message.
>
> Retrying individual messages on MessageSizeTooLarge seems reasonable.
>
> Thanks,
>
> Jun
>
>
Hi Marcin,
A few weeks ago, I did an upgrade to 0.8.1.1 and then augmented the cluster
from 3 to 9 brokers. All went smoothly.
In a dev environment, we found out that the biggest pain point is having to
deal with the JSON file and the error-prone command line interface.
So to make our life easier
> probably memory since currently we need to allocate memory for a full
> message in the broker and the producer and the consumer client.
>
> Thanks,
>
> Jun
>
>
> On Wed, Aug 27, 2014 at 9:52 PM, Alexis Midon <
> alexis.mi...@airbedandbreakfast.com> wrote:
>
>
thing.
On Wed, Aug 27, 2014 at 9:38 PM, Alexis Midon <
alexis.mi...@airbedandbreakfast.com> wrote:
> Hi Jun,
>
> thanks for your answer.
> Unfortunately the size won't help much, I'd like to see the actual message
> data.
>
> By the way what a
is currently only done on the broker. If you enable
> trace level logging in RequestChannel, you will see the produce request,
> which includes the size of each partition.
>
> Thanks,
>
> Jun
>
>
> On Wed, Aug 27, 2014 at 4:40 PM, Alexis Midon <
> alexis.mi...@airbedan
Hello,
my brokers are reporting that some received messages exceed the
`message.max.bytes` value.
I'd like to know which producers are at fault, but it is pretty much
impossible:
- the brokers don't log the content of the rejected messages
- the log messages do not contain the IP of the producers
-
Assuming the partitions are replicated, a migration plan could be:
- install the latest kafka on broker $i, /etc/kafka-NEW
- update the server config in /etc/kafka-NEW/config/server.properties to match
the old one (in particular zookeeper.connect and log.dirs; see the sketch
after this list)
- stop broker $i
- start broker $i from /etc/
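A sketch of the properties that must carry over (paths and hosts are
placeholders):

    # /etc/kafka-NEW/config/server.properties
    # keep the same broker.id so the new install takes over the old replicas
    broker.id=1
    # same ZooKeeper ensemble (and chroot, if any)
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
    # same data directories
    log.dirs=/data/kafka-logs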