I am using the Scala 2.10 version with the example code, but I get the following error:
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.0-beta1</version>
</dependency>
Client side error
=================
2013-09-26 14:05:23 [main-SendThread(10.13.80.124:2181)] DEBUG org.ap
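Since the message is cut off, for reference here is a minimal sketch of what an 0.8-style producer built against this dependency typically looks like; the broker address and topic name are hypothetical placeholders, not values from this thread:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ExampleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address; replace with your own broker list.
        props.put("metadata.broker.list", "10.13.80.124:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test-topic", "hello"));
        producer.close(); // releases the underlying broker sockets
    }
}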
Jun,
I observed a similar kind of thing recently. (I didn't notice it before
because our file limit is huge.)
I have a set of brokers in one datacenter, and producers in different
datacenters.
At some point I got disconnections; from the producer's perspective I had
something like 15 connections to th
If a client is gone, the broker should automatically close those broken
sockets. Are you using a hardware load balancer?
Thanks,
Jun
On Wed, Sep 25, 2013 at 4:48 PM, Mark wrote:
FYI if I kill all producers I don't see the number of open files drop. I still
see all the ESTABLISHED connections.
Is there a broker setting to automatically kill any inactive TCP connections?
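One way to watch the numbers Mark describes from the JVM side is to poll the broker's OperatingSystem MXBean over JMX. This is only a sketch: it assumes the broker was started with remote JMX enabled on a hypothetical port 9999, and the OpenFileDescriptorCount attribute is only exposed on Unix JVMs:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerFdWatch {
    public static void main(String[] args) throws Exception {
        // Assumes the broker JVM was started with remote JMX enabled, e.g.
        // -Dcom.sun.management.jmxremote.port=9999 (hypothetical port).
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector c = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = c.getMBeanServerConnection();
            ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
            // Unix-only attribute; tracks the broker's open file descriptors,
            // which include its TCP sockets.
            System.out.println("open FDs: "
                + mbs.getAttribute(os, "OpenFileDescriptorCount"));
        }
    }
}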
On Sep 25, 2013, at 4:30 PM, Mark wrote:
Any other ideas?
On Sep 25, 2013, at 9:06 AM, Jun Rao wrote:
We haven't seen any socket leaks with the java producer. If you have lots
of unexplained socket connections in the ESTABLISHED state, one possible
cause is that the client created new producer instances but didn't close the
old ones.
Thanks,
Jun
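To make this failure mode concrete, here is a sketch of the leaky pattern versus a single reused producer; the class name, broker address, and topic are made up for illustration:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerReuse {
    private static Properties props() {
        Properties p = new Properties();
        p.put("metadata.broker.list", "broker1:9092"); // hypothetical address
        p.put("serializer.class", "kafka.serializer.StringEncoder");
        return p;
    }

    // Leaky: a new Producer per send opens fresh broker sockets, and
    // without close() the old ones stay ESTABLISHED on the broker.
    static void sendLeaky(String msg) {
        Producer<String, String> p =
            new Producer<String, String>(new ProducerConfig(props()));
        p.send(new KeyedMessage<String, String>("test-topic", msg));
        // missing p.close()
    }

    // Better: one long-lived Producer, closed once on shutdown.
    private static final Producer<String, String> SHARED =
        new Producer<String, String>(new ProducerConfig(props()));

    static void send(String msg) {
        SHARED.send(new KeyedMessage<String, String>("test-topic", msg));
    }

    public static void main(String[] args) {
        send("hello");
        SHARED.close();
    }
}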
On Wed, Sep 25, 2013 at 6:08 AM, Mark wrote:
No. We are using the kafka-rb ruby gem producer.
https://github.com/acrosa/kafka-rb
Now that you've asked that question, I have to ask: is there a problem with
the java producer?
Sent from my iPhone
> On Sep 24, 2013, at 9:01 PM, Jun Rao wrote:
>
> Are you using the java producer client?
>
> Thanks,
> Jun