Also, in a comment on this thread you mentioned that this is an expected
exception.
This is expected during shutdown of a client, since the server's attempts at
sending any outstanding responses fail. This happens because the other
endpoint of the socket connection (the client) is dead.
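A minimal sketch of that failure mode in plain java.net, outside of Kafka (illustrative only; the class name and port handling are made up for the demo):

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DeadPeerDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);                    // any free port
        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        Socket accepted = server.accept();

        client.close();    // the "client" goes away
        Thread.sleep(100); // let the FIN/RST reach the server side

        OutputStream out = accepted.getOutputStream();
        try {
            for (int i = 0; i < 100; i++) {
                out.write(new byte[8192]); // the first write may still succeed;
                out.flush();               // a later one hits the dead peer
            }
        } catch (IOException e) {
            // typically "Broken pipe" or "Connection reset by peer", depending on timing/OS
            System.out.println("Expected on shutdown: " + e);
        } finally {
            accepted.close();
            server.close();
        }
    }
}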
On Thu, Sep
Aniket,
Could you provide more context to this email? The previous conversation on
the exception is missing, so I'm not sure which exception you are referring
to.
Thanks,
Neha
On Thu, Sep 25, 2014 at 8:52 AM, Aniket Kulkarni <
kulkarnianiket...@gmail.com> wrote:
> @Neha When you say this is an e
Hello,
With reference to this[1] discussion, I am facing a similar issue, with the
following stack trace appearing interchangeably with a broken pipe error:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.ja
This is an expected exception.
On Sat, Feb 1, 2014 at 9:29 PM, Ranjith Venkatesan
wrote:
> Hi ,
>
> We are evaluating Kafka 0.8 for our product as a queue system. Our
> architecture is simple. Our producer (single) will send messages to any
> of the topics in the broker. Thread will be ru
Hmm, I would expect GC to cause a connection timeout rather than a connection
reset, but I could be wrong.
On Thu, Oct 31, 2013 at 9:42 PM, Banerjee, Aparup wrote:
> That was my initial hunch, but the consumer logs are clean. Could it be some
> GC stuff on the consumer?
>
> Thanks,
> Aparup
>
> > On Oct 31
That was my initial hunch, but the consumer logs are clean. Could it be some GC
stuff on the consumer?
Thanks,
Aparup
> On Oct 31, 2013, at 9:36 PM, "Neha Narkhede" wrote:
>
> Is the consumer being shut down or interrupted? Do you see any relevant
> errors in your consumer's logs?
>
> Thanks,
> Neha
>
Is the consumer being shut down or interrupted? Do you see any relevant
errors in your consumer's logs?
Thanks,
Neha
On Thu, Oct 31, 2013 at 9:31 PM, Banerjee, Aparup wrote:
> No, I am not using a LB. In fact I keep getting the same error even if my
> Kafka server and consumer are on the same box. The
No, I am not using a LB. In fact I keep getting the same error even if my Kafka
server and consumer are on the same box. The client here is the consumer, btw.
Aparup
> On Oct 31, 2013, at 9:28 PM, "Jun Rao" wrote:
>
> That typically means the client closed the socket in the middle of a
> request. Are you
That typically means the client closed the socket in the middle of a
request. Are you using a hardware load balancer?
Thanks,
Jun
On Wed, Oct 30, 2013 at 9:03 PM, Banerjee, Aparup wrote:
> Hi,
>
> I keep getting this error in Kafka server.log. I don't see anything in my
> producer or consumer lo
Not sure why re-registering in the broker fails. Normally, when the broker
registers, the ZK path should already be gone.
Thanks,
Jun
On Thu, Mar 28, 2013 at 8:31 AM, Yonghui Zhao wrote:
> Will do a check. I just wonder why the broker needs to re-register, and why
> failing that stops the broker service.
>
>
Will do a check. I just wonder why the broker needs to re-register, and why
failing that stops the broker service.
2013/3/28 Jun Rao
> Do you see lots of ZK session expiration in the broker too? If so, that
> suggests a GC issue in the broker too. So, you may need to tune the GC in
> the broker as well.
>
Do you see lots of ZK session expiration in the broker too? If so, that
suggests a GC issue in the broker too. So, you may need to tune the GC in
the broker as well.
Thanks,
Jun
On Thu, Mar 28, 2013 at 8:20 AM, Yonghui Zhao wrote:
> Thanks Jun.
>
> But I can't understand how consumer GC trigge
Thanks Jun.
But I can't understand how a consumer GC pause can trigger this Kafka server issue:
java.lang.RuntimeException: A broker is already registered on the path
/brokers/ids/0. This probably indicates that you either have configured a
brokerid that is already in use, or else you have shutdown this broker and
The ZK session timeout only kicks in if you force-kill the consumer.
Otherwise, the consumer will close its ZK session properly on clean shutdown.
The problem with GC is that if the consumer pauses for a long time, the ZK
server won't receive pings from the client and can thus expire a
still-existing session.
I used zookeeper-3.3.4 in Kafka.
The default tickTime is 3 seconds, so minSessionTimeout is 6 seconds.
Now I have changed tickTime to 5 seconds and minSessionTimeout to 10 seconds.
But if we change timeout to a larger one,
"you have shutdown this broker and restarted it faster than the zookeeper
timeout so it ap
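In zoo.cfg terms, that change would look like the following (a sketch; these are the standard ZooKeeper 3.3+ server properties, and minSessionTimeout defaults to 2*tickTime when left unset):

tickTime=5000
minSessionTimeout=10000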
Not sure why the re-registration fails. Are you using ZK 3.3.4 or above?
It seems that your consumer still GCs, which is the root cause. So, you will
need to tune the GC settings further. Another way to avoid ZK session
timeouts is to increase the session timeout config.
Thanks,
Jun
On Wed, Mar 27
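On the consumer side, raising that timeout would look something like this (a sketch; zookeeper.session.timeout.ms is the 0.8 consumer property name, while 0.7 used zk.sessiontimeout.ms; the values and group name are illustrative):

Properties props = new Properties();
props.put("zookeeper.connect", "127.0.0.1:2181");   // illustrative
props.put("group.id", "my-group");                  // illustrative
props.put("zookeeper.session.timeout.ms", "12000"); // longer than the worst-case GC pause
ConsumerConfig consumerConfig = new ConsumerConfig(props);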
Now I use GC settings like this:
-server -Xms1536m -Xmx1536m -XX:NewSize=128m -XX:MaxNewSize=128m
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=70
But it still happened. It seems the Kafka server reconnected to ZK, but the
old node was still there, so the Kafka server stopped.
Can
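One way to check whether a stale ephemeral node is what blocks the re-registration (a sketch using the stock ZooKeeper CLI; the host and port are illustrative, the path comes from the exception above):

bin/zkCli.sh -server 127.0.0.1:2181 get /brokers/ids/0

If the old node is still listed there after the broker died, the session that created it has not expired yet.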
The kafka-server-start.sh script doesn't have the mentioned GC
settings and heap size configured. However, doing that is probably a
good idea.
Thanks,
Neha
On Tue, Mar 26, 2013 at 9:47 AM, Yonghui Zhao wrote:
> The Kafka server is started by bin/kafka-server-start.sh. No GC settings.
> On 2013-3-26 at
The Kafka server is started by bin/kafka-server-start.sh. No GC settings.
On 2013-3-26 at 11:40 PM, "Neha Narkhede" wrote:
> Did you have a GC pause around that time on the server? What are your
> server's current GC settings?
>
> Thanks,
> Neha
>
> On Mon, Mar 25, 2013 at 8:48 PM, Yonghui Zhao
> wrote:
> >
Did you have a GC pause around that time on the server? What are your
server's current GC settings?
Thanks,
Neha
On Mon, Mar 25, 2013 at 8:48 PM, Yonghui Zhao wrote:
> Thanks Neha. Btw, have you seen this exception? We didn't restart any
> service; it happened in the middle of the night.
>
> java.lang.Runtim
Thanks Neha. Btw, have you seen this exception? We didn't restart any
service; it happened in the middle of the night.
java.lang.RuntimeException: A broker is already registered on the path
/brokers/ids/0. This probably indicates that you either have configured a
brokerid that is already in use, or else you have
That really depends on your consumer application's memory allocation
patterns. If it is a thin wrapper over a Kafka consumer, I would imagine
you can get away with using CMS for the tenured generation and parallel
collector for the new generation with a small heap like 1gb or so.
Thanks,
Neha
On
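Spelled out as JVM flags, that suggestion would look roughly like this (a sketch with illustrative sizes, in the same style as the settings quoted elsewhere in this thread):

-server -Xms1g -Xmx1g -XX:NewSize=256m -XX:MaxNewSize=256m
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70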
Any suggestion on consumer side?
On 2013-3-25 at 9:49 PM, "Neha Narkhede" wrote:
> For Kafka 0.7 in production at LinkedIn, we use a heap of size 3G, new gen
> 256 MB, CMS collector with occupancy of 70%.
>
> Thanks,
> Neha
>
> On Sunday, March 24, 2013, Yonghui Zhao wrote:
>
> > Hi Jun,
> >
> > I used kaf
For Kafka 0.7 in production at LinkedIn, we use a heap of size 3G, new gen
256 MB, CMS collector with occupancy of 70%.
Thanks,
Neha
On Sunday, March 24, 2013, Yonghui Zhao wrote:
> Hi Jun,
>
> I used kafka-server-start.sh to start Kafka, there is only one JVM setting,
> "-Xmx512M"
>
> Do you hav
Hi Jun,
I used kafka-server-start.sh to start Kafka; there is only one JVM setting,
"-Xmx512M".
Do you have a recommended GC setting? Usually our servers have 32GB or 64GB
of RAM.
2013/3/22 Jun Rao
> A typical reason for many rebalances is consumer-side GC. If so, you
> will see logs in the co
Thanks Jun! Will tune our GC settings.
Sent from my iPad
On 2013-3-22 at 23:05, Jun Rao wrote:
> A typical reason for many rebalances is consumer-side GC. If so, you
> will see logs in the consumer saying something like "expired session" for ZK.
> Occasional rebalances are fine. Too many rebalances can slo
A typical reason for many rebalances is consumer-side GC. If so, you
will see logs in the consumer saying something like "expired session" for ZK.
Occasional rebalances are fine. Too many rebalances can slow down
consumption, and you will need to tune your GC settings.
Thanks,
Jun
On Thu, Mar 21
Hi Jun:
We use 1 consumer and 1 Kafka server, with 4 partitions of only 1 topic.
2013/3/22 Yonghui Zhao
> Yes, before consumer exception:
>
> 2013/03/21 12:07:17.909 INFO [ZookeeperConsumerConnector] []
> 0_lg-mc-db01.bj-1363784482043-f98c7868 *end rebalancing
> consumer*0_lg-mc-db01.bj-13637844820
Yes, before consumer exception:
2013/03/21 12:07:17.909 INFO [ZookeeperConsumerConnector] []
0_lg-mc-db01.bj-1363784482043-f98c7868 *end rebalancing
consumer*0_lg-mc-db01.bj-1363784482043-f98c7868 try #0
2013/03/21 12:07:17.911 INFO [ZookeeperConsumerConnector] []
0_lg-mc-db01.bj-1363784482043-f98
Do you see any rebalances in the consumer? Each rebalance will interrupt
existing fetcher threads first.
Thanks,
Jun
On Thu, Mar 21, 2013 at 9:40 PM, Yonghui Zhao wrote:
> The application won't shut down the consumer connector. The consumer is
> always alive.
>
> 2013/3/22 Jun Rao
>
> > If
The application won't shut down the consumer connector. The consumer is
always alive.
2013/3/22 Jun Rao
> If you use the high level consumer, normally ClosedByInterruptException
> happens because the application calls shutdown on the consumer connector.
> Is that the case?
>
> Thanks,
>
> Jun
If you use the high level consumer, normally ClosedByInterruptException
happens because the application calls shutdown on the consumer connector.
Is that the case?
Thanks,
Jun
On Thu, Mar 21, 2013 at 8:38 PM, Yonghui Zhao wrote:
> No, I use the Java consumer connector, and set a 10-second timeout.
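For reference, the call Jun is describing is the one-line shutdown on the connector (reusing the _consumerConnector field from the snippet below); the underlying NIO behavior is that interrupting a thread blocked on a socket channel closes the channel and throws ClosedByInterruptException:

_consumerConnector.shutdown(); // interrupts fetcher threads; their blocked reads surface as ClosedByInterruptException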
No, I use the Java consumer connector, and set a 10-second timeout.
ConsumerConfig consumerConfig = new ConsumerConfig(props);
_consumerConnector =
    Consumer.createJavaConsumerConnector(consumerConfig);
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(_topic, 1);
// generics restored; 0.8-style stream types assumed
Map<String, List<KafkaStream<byte[], byte[]>>> topicMessageStreams =
    _consumerConnector.createMessageStreams(topicCountMap);
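For completeness, here is a sketch of how such a stream is typically drained and where the 10-second timeout shows up (0.8-style API; with consumer.timeout.ms set, the iterator throws ConsumerTimeoutException instead of blocking forever):

// assumes the topicMessageStreams map built above and kafka.consumer.* imports
KafkaStream<byte[], byte[]> stream = topicMessageStreams.get(_topic).get(0);
ConsumerIterator<byte[], byte[]> it = stream.iterator();
try {
    while (it.hasNext()) {                        // blocks for up to consumer.timeout.ms
        MessageAndMetadata<byte[], byte[]> msg = it.next();
        // process msg.message()
    }
} catch (ConsumerTimeoutException e) {
    // no message arrived within the 10s timeout; not fatal
}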
So, it seems that your consumer thread was interrupted and therefore the
socket channel was closed. Are you using SimpleConsumer?
Thanks,
Jun
On Wed, Mar 20, 2013 at 9:25 PM, Yonghui Zhao wrote:
> Hi Jun,
>
> I didn't find any error in the producer log.
> I did another test: first I injected data
Hi Jun,
I didn't find any error in the producer log.
I did another test: first I injected data into the Kafka server, then stopped
the producer and started the consumer.
The exception still happened, so the exception is not related to the producer.
From the log below, it seems the consumer exception happened first.
Exc
"Connect reset by peer" means the other side of the socket has closed the
connection for some reason. Could you provide the error/exception in both
the producer and the broker when a produce request fails?
Thanks,
Jun
On Tue, Mar 19, 2013 at 1:34 AM, Yonghui Zhao wrote:
> Connection reset exc
Connection reset exception reproduced.
[2013-03-19 16:30:45,814] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)
[2013-03-19 16:30:55,253] ERROR Closing socket for /127.0.0.1 because of
error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
at sun.n
Thanks Jun.
Now I use one box to test Kafka; the Kafka server IP in ZK is 127.0.0.1, so
the network is not affected by external factors.
The connection reset has not been reproduced, but I still see broken pipe
exceptions and a few ZK exceptions.
[2013-03-19 15:23:28,660] INFO Closed socket connection for client /
127.
The error you saw on the broker is for consumer requests, not for the producer.
For the issues in the producer, are you using a VIP? Is there any firewall
between the producer and broker? The typical issues with "connection reset"
that we have seen are caused by the load balancer or the firewall killing idle
c