> The actual purpose is, as the name implies, to limit the
> size of a request, which could potentially include many messages. This
> keeps the producer from sending very large requests to the broker. The
> limitation on message size is just a side effect.
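To make the distinction concrete, here is a sketch of the producer-side settings involved (values are the shipped defaults of that era, to the best of my knowledge; verify against your client version):

```properties
# max.request.size caps an entire request, which may carry a whole batch
# of messages, so it only indirectly bounds the size of any one message.
max.request.size=1048576
# batch.size controls how many bytes of messages are grouped per partition
# before a request is sent.
batch.size=16384
```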
> >> the last fetched metadata is stale (the refresh interval has expired)
> >> or if it is not able to send data to a particular broker in its
> >> current metadata (this might happen in some cases, e.g. if the leader
> >> moves).
Hi, sorry if my understanding is incorrect.
I am integrating the Kafka producer with my application. When I try to
shut down all Kafka brokers (preparing for the prod env), I notice that
the 'send' method blocks.
Does the new producer not fetch metadata asynchronously?
Rendy
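For context: in the new producer, send() is async once metadata is available, but the first send for a topic can block while metadata is fetched. A sketch of the configs that bound that blocking (property names differ by client version; treat these as assumptions to verify against your release):

```properties
# How long send()/partitionsFor() may block waiting for metadata in the
# ~0.8.2-era new producer:
metadata.fetch.timeout.ms=60000
# In later clients this was superseded by a single blocking bound:
max.block.ms=60000
```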
Hi,
I see the broker configuration "max.message.bytes" defaults to 1,000,000,
while the producer configuration "max.request.size" defaults to 1,048,576.
Why is the broker's default smaller than the producer's? If that is the
case, the producer can send messages bigger than what the broker can
receive.
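One way to reconcile the two is to raise the broker-side limit to at least the producer's; a hedged sketch (note the broker-level property is message.max.bytes, while max.message.bytes is the per-topic override):

```properties
# Broker side: accept messages at least as large as the producer may send.
message.max.bytes=1048576
# Producer side: keep requests within what the broker accepts.
max.request.size=1048576
```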
Hi,
- The legacy Scala producer API has a KeyedMessage with topic, key,
partKey, and message, while the new API has no partKey. What's the
difference between key and partKey?
- In the Javadoc, the new producer API's send method is always async. Is
the producer.type property overridden?
- Will scala
Based on the documentation, as long as you define a different ZooKeeper
chroot in each broker's configuration, it should be OK. Correct me if I'm
wrong.
Disclaimer: I have never tried this scheme myself.
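A sketch of what those broker configurations could look like (host names and chroot paths are made up):

```properties
# Brokers of cluster 1:
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-1
# Brokers of cluster 2 (same ensemble, different chroot):
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-2
```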
Rendy
On Mar 28, 2015 2:14 AM, "Shrikant Patel" wrote:
> Can 2 separate Kafka clusters share the same ZK ensemble?
> If yes, h
Hi,
I'm a new Kafka user. I'm planning to send web usage data from an
application to S3 (for EMR) and to MongoDB using Kafka.
What is a common message format to write to Kafka for a data-ingestion
use case? I have done a little homework and found Avro to be one of the
options.
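For example, a minimal Avro schema for a web-usage event might look like this (record and field names are purely illustrative):

```json
{
  "type": "record",
  "name": "PageView",
  "namespace": "example.webusage",
  "fields": [
    {"name": "user_id",   "type": "string"},
    {"name": "url",       "type": "string"},
    {"name": "timestamp", "type": "long"}
  ]
}
```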
Thanks.
Rendy