Apologies in advance if this is a dumb or common question, but as usual I
couldn't find the answer to it anywhere.
How can we set up some kind of handler to catch async errors?
Let's say something as simple as this:
final Properties props = new Properties();
props.put("metadata.
Kafka doesn't have a callback for the async producer yet. But this is proposed
for Kafka 0.9. You can find the proposal here -
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ProposedProducerAPI
Thanks,
Neha
On Oct 7, 2013 4:52 AM, "Bruno D. Rodrigues"
wrote:
> Apolog
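(For anyone finding this thread later: a minimal sketch of what a
callback-based send looks like in the newer Java producer,
org.apache.kafka.clients.producer, which is roughly what the proposal
describes. It is not available in the 0.8 async producer; the broker address
and topic below are placeholders.)

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AsyncErrorExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        producer.send(new ProducerRecord<String, String>("my-topic", "hello"),
                new Callback() {
                    public void onCompletion(RecordMetadata metadata, Exception e) {
                        if (e != null) {
                            // the async failure is delivered here instead of only
                            // showing up in the producer logs
                            e.printStackTrace();
                        }
                    }
                });
        producer.close();
    }
}

The point is that onCompletion() receives the exception for a failed async
send, which is the kind of handler the original question asks for.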
When people use message queues, the message size is usually pretty small.
I want to know: who out there is using Kafka with larger payload sizes?
In the configuration, the maximum message size is set to 1 megabyte by default
(message.max.bytes = 1000000).
My message sizes will probably be aroun
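(For context, a sketch of the settings that matter for larger messages in 0.8;
the values shown are roughly the defaults, for illustration only.
message.max.bytes caps a single message, or a single compressed message set,
and the fetch sizes have to be at least as large:)

# broker (server.properties)
message.max.bytes=1000000
replica.fetch.max.bytes=1048576
# consumer
fetch.message.max.bytes=1048576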
Kafka port is documented in
http://kafka.apache.org/documentation.html#brokerconfigs
Thanks,
Jun
On Sat, Oct 5, 2013 at 12:05 PM, Jiang Jacky wrote:
> Hi, I tried to set up host.name in server.properties; it doesn't work.
> I believe it is the network security issue. However, I create a n
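(For anyone else hitting this, the broker settings involved look roughly like
the following in server.properties; the IP is a placeholder. In 0.8, host.name
is both the interface the broker binds to and the address it registers in
ZooKeeper, so it has to be reachable by clients:)

port=9092
host.name=10.0.0.5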
Hi everyone,
I wrapped the Kafka 0.8beta1 client in a JRuby class:
https://github.com/joekiller/jruby-kafka and then wrote an input for
logstash: https://github.com/joekiller/logstash/tree/kafka8
It's still a little rough around the edges, but it works fine.
I'll be enhancing both the logs
So the concept to keep in mind is that as long as we set the full Kafka broker
list on the producer and the ZooKeeper list on the consumers, it should just
work from the producer's and the consumer's perspective; the code won't get
any information about what happened, and instead one should look at the logs?
What
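(A sketch of the two configurations being described, with placeholder host
names: the 0.8 producer takes the broker list, while the high-level consumer
takes the ZooKeeper list:)

# producer
metadata.broker.list=broker1:9092,broker2:9092
# high-level consumer
zookeeper.connect=zk1:2181,zk2:2181
group.id=my-group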
Jason,
As Neha said, what you said is possible, but may require a more careful
design. For example, what if the followers don't catch up with the leader
quickly? Do we want to wait forever or up to some configurable amount of
time? If we do the latter, we may still lose data during controlled
shut
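(For reference, a sketch of the broker settings that bound that wait in later
0.8.x releases; the property names are real, the values purely illustrative:)

controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000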
At LinkedIn, our message size can be 10s of KB. This is mostly because we
batch a set of messages and send them as a single compressed message.
Thanks,
Jun
On Mon, Oct 7, 2013 at 7:44 AM, S Ahmed wrote:
> When people use message queues, the message size is usually pretty small.
>
> I want t
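(A sketch of the 0.8 producer settings behind this batch-then-compress
behaviour; values are illustrative only:)

producer.type=async
compression.codec=gzip
batch.num.messages=200
queue.buffering.max.ms=5000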
I see, so one thing to consider is that if I have 20 KB messages, I shouldn't
batch too many together, as that will increase latency and the memory
footprint on the producer side of things.
On Mon, Oct 7, 2013 at 11:55 AM, Jun Rao wrote:
> At LinkedIn, our message size can be 10s of KB.
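(A rough back-of-the-envelope version of that trade-off, with made-up numbers:)

20 KB/message x batch.num.messages=200             ~  4 MB per uncompressed batch
20 KB/message x queue.buffering.max.messages=10000 ~ 200 MB of producer-side buffer, worst case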
Thanks for pointing this out. Updated the doc.
The reason for the change is the following. If the timeout is caused by a
problem at the broker, it's actually not very useful to set the timeout too
small. After timing out, the producer is likely to resend the data. This
adds more load to the broker
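(For reference, the producer settings being discussed; values are illustrative,
not recommendations:)

request.timeout.ms=10000
message.send.max.retries=3
retry.backoff.ms=100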
The message size limit is imposed on the compressed message. To answer your
question about the effect of large messages - they cause memory pressure on
the Kafka brokers as well as on the consumer since we re-compress messages
on the broker and decompress messages on the consumer.
I'm not so sure
Hi Joe,
This is great and thanks for sharing. If you are up for maintaining this,
would you like to add this to
https://cwiki.apache.org/confluence/display/KAFKA/Clients ?
Thanks,
Neha
On Mon, Oct 7, 2013 at 8:10 AM, Joseph Lawson wrote:
> Hi everyone,
>
>
> I wrapped kafka 0.8beta1 client in
When you batch things on the producer, say you batch 1000 messages or batch
by time, should the total message size of the batch be less than
message.max.bytes, or does that limit apply to each individual message?
When you batch, I am assuming that the producer sends some sort of flag
that this is a batch, an
The async producer's send() API is never supposed to block. If, for some
reason, the producer's queue is full and you try to send more messages, it
will drop those messages and raise a QueueFullException. You can configure
the "message.send.max.retries" config to retry sending the messages n
times,
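(A minimal sketch of how that surfaces in application code with the 0.8
producer; the broker address and topic are placeholders, and the queue-related
values are illustrative:)

import java.util.Properties;
import kafka.common.QueueFullException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AsyncSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // placeholder broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "async");
        props.put("queue.buffering.max.messages", "10000"); // bound on the in-memory queue
        props.put("queue.enqueue.timeout.ms", "0");          // 0 = don't block; drop and raise QueueFullException
        props.put("message.send.max.retries", "3");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<String, String>("my-topic", "hello"));
        } catch (QueueFullException e) {
            // the queue was full, so this message was rejected rather than blocking
        }
        producer.close();
    }
}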
I don't think the batch referred to initially is a Kafka API batch, hence
the confusion. I'm sure someone from LinkedIn can clarify.
On Oct 7, 2013 9:27 AM, "S Ahmed" wrote:
> When you batch things on the producer, say you batch 1000 messages or by
> time whatever, the total message size of the b
> the total message size of the batch should be less than
> message.max.bytes or is that for each individual message?
The former is correct.
> When you batch, I am assuming that the producer sends some sort of flag
> that this is a batch, and then the broker will split up those messages to
> individual mes
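(To make the limit concrete, with made-up message counts and compression
ratios; per the earlier point, the check applies to the compressed wrapper
message:)

1000 messages x  2 KB = ~2 MB raw, say ~600 KB gzipped -> fits under the default message.max.bytes=1000000
1000 messages x 20 KB = ~20 MB raw, say ~6 MB gzipped  -> too large; the batch is rejected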
Sure go ahead.
From: Neha Narkhede
Sent: Monday, October 07, 2013 12:23 PM
To: users@kafka.apache.org
Subject: Re: introducing jruby-kafka (Kafka 0.8beta1) and Kafka logstash input
Hi Joe,
This is great and thanks for sharing. If you are up for maintainin
Hi,
I have a question regarding offsets in Kafka (0.8). I've gone through the
documentation and did some tests, but I want to make sure I'm on the right
track.
* Are offsets guaranteed to be sequential within a partition?
  o Can they contain holes?
* How are offsets distrib
Offsets always begin at 0 for each partition and increase sequentially from
there. Offsets aren't unique within a topic. As old data is discarded the
first retained offset will not remain 0. The behavior of what is retained
is controlled by your retention settings.
In trunk there is a feature that
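(A sketch of how to observe this with the 0.8 SimpleConsumer: asking for the
earliest offset of a partition shows the first retained offset moving forward
as old segments are discarded. Host, port and topic below are placeholders.)

import java.util.HashMap;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetRequest;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetCheck {
    public static void main(String[] args) {
        SimpleConsumer consumer =
                new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "offsetCheck");
        TopicAndPartition tp = new TopicAndPartition("my-topic", 0);

        Map<TopicAndPartition, PartitionOffsetRequestInfo> info =
                new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        // EarliestTime() = first retained offset; LatestTime() = offset of the next message
        info.put(tp, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.EarliestTime(), 1));

        OffsetResponse response = consumer.getOffsetsBefore(
                new OffsetRequest(info, kafka.api.OffsetRequest.CurrentVersion(), "offsetCheck"));
        System.out.println("earliest retained offset = " + response.offsets("my-topic", 0)[0]);
        consumer.close();
    }
}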
Thanks for the quick answer !
Francis
Sent from Samsung Mobile
Original message
From: Jay Kreps
Date: 10-07-2013 17:55 (GMT-05:00)
To: users@kafka.apache.org
Subject: Re: Offset question
Offsets always begin at 0 for each partition and increase sequentially from
there. Of
Hi, Everyone,
I made another pass of the remaining jiras that we plan to fix in the 0.8
final release.
https://issues.apache.org/jira/browse/KAFKA-954?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%20%220.8%22%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20
Hi,
We are using a central standalone ZooKeeper for Kafka and HBase.
Due to some problem, we have installed a new ZooKeeper on a different machine,
but we do not have the old metadata that Kafka requires available in the new
ZooKeeper, so we are not able to read previous topic messages.
How can we restor
Neha,
Does the broker store messages compressed, even if the producer doesn't
compress them when sending them to the broker?
Why does the broker re-compress message batches? Does it not have enough
info from the producer request to know the number of messages in the batch?
Jason
On Mon, Oct 7
Jun,
KAFKA-1018 should be fixed in 0.8 final; I can post a patch or review.
+1 on the list for what's left
On Mon, Oct 7, 2013 at 8:33 PM, Jun Rao wrote:
> Hi, Everyone,
>
> I made another pass of the remaining jiras that we plan to fix in the 0.8
> final release.
>
>
> https://issues.apache.org