Thank you for the reply, Neha.
Kafka executes the reconnect logic, governed by the 'reconnect.time.interval.ms'
property, only after 'send.writeCompletely(channel)' has run (see the 'send'
method in SyncProducer.scala, line 88).
The exception occurs at SocketChannel.write (inside
BoundedByteBufferSend.writeTo), so the reconnect logic is never reached.
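Roughly, the flow looks like this (a minimal sketch only; the class and field
names below are assumptions for illustration, not the actual SyncProducer
source):

  import java.nio.ByteBuffer
  import java.nio.channels.SocketChannel

  // Sketch: if write throws (e.g. on a connection a firewall has silently
  // dropped), the reconnect check below is never reached, so the dead
  // channel is reused on the next send.
  class ProducerSketch(var channel: SocketChannel,
                       reconnectTimeIntervalMs: Long) {
    private var lastConnectMs = System.currentTimeMillis()

    def send(buffer: ByteBuffer): Unit = {
      while (buffer.hasRemaining) channel.write(buffer) // may throw here
      if (System.currentTimeMillis() - lastConnectMs >= reconnectTimeIntervalMs) {
        // close and reopen the channel, then reset the timer
        lastConnectMs = System.currentTimeMillis()
      }
    }
  }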
So, I'll mee…
It seems you have turned on security on your ZooKeeper cluster. Which Kafka
client is this log from? Could you send a code snippet that reproduces this
behavior? The Kafka server-side log seems fine.
Thanks,
Neha
On Sep 25, 2013 11:23 PM, "Shi JinKui (Wireless Business Division)" wrote:
> I use the Scala 2.10 version…
Are you using the Java or non-Java producer? Are you using a ZK-based,
broker-list-based, or VIP-based producer?
Thanks,
Jun
On Wed, Sep 25, 2013 at 10:06 PM, Nicolas Berthet
wrote:
> Jun,
>
> I observed a similar kind of thing recently (I didn't notice it before because
> our file limit is huge).
>
>
In Kafka, we do set TCP keepalive on the socket connection. However, on an OS
like Linux, the default value of tcp_keepalive_time is 2 hours, which is larger
than the firewall timeout. What you can do is reduce tcp_keepalive_time to
less than 1 hour.
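For example, on Linux this can be done with sysctl (the 600-second value below
is illustrative, not a recommendation from this thread):

  sysctl -w net.ipv4.tcp_keepalive_time=600

or, to persist across reboots, in /etc/sysctl.conf:

  net.ipv4.tcp_keepalive_time = 600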
Thanks,
Jun
On Mon, Sep 23, 2013 at 12:14 AM, Rhapsody wrote: …
You are probably right. Though we introduced that reconnect functionality
to get around the VIP idle connection issue, it may not solve the problem
entirely. Your fix makes sense.
Thanks,
Neha
On Thu, Sep 26, 2013 at 12:00 AM, Rhapsody wrote:
> Thank you for the reply, Neha.
>
> Kafka executes the…
Is restarting the broker the only way to put a broker back into the ISR?
Thanks
Cal
We are using a hardware load balancer with a VIP-based Ruby producer.
On Sep 26, 2013, at 7:37 AM, Jun Rao wrote:
> Are you using the Java or non-Java producer? Are you using a ZK-based,
> broker-list-based, or VIP-based producer?
>
> Thanks,
>
> Jun
>
>
> On Wed, Sep 25, 2013 at 10:06 PM, Nicolas Berthet wrote: …
What OS settings did you change? How high is your huge file limit?
On Sep 25, 2013, at 10:06 PM, Nicolas Berthet wrote:
> Jun,
>
> I observed a similar kind of thing recently (I didn't notice it before because our
> file limit is huge).
>
> I have a set of brokers in a datacenter, and producers in…
Hi Mark,
I'm using CentOS 6.2. My file limit is something like 500k; the value is
arbitrary.
One of the things I changed so far is the TCP keepalive parameters, which has
had moderate success:
net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
I still notice…
I've been doing some testing, trying to understand how max.message.bytes
works with respect to sending batches of messages. In a previous discussion,
there appeared to be a suggestion that one workaround when triggering a
MessageSizeTooLargeException is to reduce the batch size and resubmit.
Jason,
Glad that you asked. This is only an issue if compression is turned on for
a batch of messages. The problem is that the compressed batch is treated as
a single message, whose size has to be smaller than max.message.bytes. If
you are sending data uncompressed, only each individual uncompressed message
has to be smaller than max.message.bytes.
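To illustrate the distinction, here is a minimal sketch (GZIP and the 1 MB
limit below are illustrative assumptions, not Kafka internals):

  import java.io.ByteArrayOutputStream
  import java.util.zip.GZIPOutputStream

  object BatchSizeSketch {
    val maxMessageBytes = 1000000 // example broker-side limit

    // With compression on, the whole batch becomes one wrapper message,
    // so it is the compressed batch size that must stay under the limit.
    def compressedBatchSize(batch: Seq[Array[Byte]]): Int = {
      val bos = new ByteArrayOutputStream()
      val gz = new GZIPOutputStream(bos)
      batch.foreach(b => gz.write(b))
      gz.close()
      bos.size()
    }

    def batchFits(batch: Seq[Array[Byte]]): Boolean =
      compressedBatchSize(batch) <= maxMessageBytes
  }

This is also why reducing the batch size and resubmitting, as suggested in the
earlier discussion, works: a smaller batch compresses to a smaller wrapper
message.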
Yes, once the replicas on the restarted broker catch up, they will be
automatically added back to the ISR.
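One way to watch this happen (the paths follow the 0.8 ZooKeeper layout; the
topic name "test" and partition 0 are just examples) is to read the partition
state node with the zookeeper-shell.sh script that ships with Kafka:

  bin/zookeeper-shell.sh localhost:2181 get /brokers/topics/test/partitions/0/state

The "isr" field in the returned JSON lists the in-sync replicas.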
Thanks,
Jun
On Thu, Sep 26, 2013 at 12:48 PM, Calvin Lei wrote:
> Is restarting the broker the only way to put a broker back into the ISR?
>
> Thanks
> Cal
>
I think you may be asking a slightly different question. If a broker falls
out of the ISR and does not rejoin, it may point to some bottleneck
(e.g., local I/O), too few partitions for large topics, or some fatal error
causing the ReplicaFetcherThread to die. Just restarting the broker without
knowing the root cause may not fix the issue.
I am trying to find an implementation of the Partitioner trait that supports
random distribution of messages onto partitions, something that existed in 0.7
by simply passing null as the key. However, the only way to achieve this now
seems to be implementing a custom Partitioner. Has this feature been dropped
in 0.8?
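For reference, a random partitioner along those lines is only a few lines (a
sketch assuming the 0.8 kafka.producer.Partitioner trait; later releases also
pass a VerifiableProperties argument to the constructor):

  import kafka.producer.Partitioner
  import scala.util.Random

  // Sketch: distributes messages uniformly at random across partitions.
  class RandomPartitioner[T] extends Partitioner[T] {
    def partition(key: T, numPartitions: Int): Int =
      Random.nextInt(numPartitions)
  }

It can then be plugged in through the producer's partitioner.class property.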