Socket timeouts while reading the producer response could indicate a
bottleneck in the server's request handling. It could be in the network
layer, or an I/O performance or configuration issue. It would help if you
create a JIRA and attach the part of your producer log that includes the
timeout errors. Also wi
Do you mind filing a bug and attaching the reproducible test case there?
Thanks,
Neha
On Wednesday, March 20, 2013, 王国栋 wrote:
> Hi Jun,
>
> We use one thread with one sync producer to send data to the broker
> (QPS:10k-15k, each log is about 1k bytes). The problem is reproduced.
>
> We have used Pr
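For reference, the producer-side timeout knobs discussed here map to config properties roughly like the following. The names and defaults below follow the 0.7-era sync producer configs and are an illustrative sketch, not an authoritative list; check the config reference for your actual version:

```properties
# Illustrative Kafka 0.7-era sync-producer settings (names/defaults may differ by version)
connect.timeout.ms=5000      # how long to wait when establishing the broker connection
socket.timeout.ms=30000      # how long a blocking read of the response may wait
reconnect.interval=30000     # tear down and re-open the connection after this many sends
buffer.size=102400           # socket send buffer, in bytes
```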
Hi Jun,
We use one thread with one sync producer to send data to the broker
(QPS:10k-15k, each log is about 1k bytes). The problem is reproduced.
We have used Producer and SyncProducer in our test. The same Exception
appears.
Thanks.
On Thu, Mar 21, 2013 at 12:19 PM, Jun Rao wrote:
> How many th
Hi Jun,
I didn't find any error in producer log.
I did another test: first I injected data into the Kafka server, then
stopped the producer and started the consumer.
The exception still happened, so the exception is not related to the producer.
From the log below, it seems the consumer exception happened first.
Exc
How many threads are you using?
Thanks,
Jun
On Wed, Mar 20, 2013 at 7:33 PM, Yang Zhou wrote:
> Sorry, I made a mistake, we use many threads producing at the same time.
>
>
> 2013/3/20 Jun Rao
>
> > How many producer instances do you have? Can you reproduce the problem
> with
> > a single produce
Our webpage source is at https://svn.apache.org/repos/asf/kafka/site . You
can file a JIRA and attach a patch.
Thanks,
Jun
On Wed, Mar 20, 2013 at 12:39 PM, Chris Curtin wrote:
> Okay, how do we do this logistically? I've taken the Producer code that I
> wrote for testing purposes and wrote a de
Sorry, I made a mistake, we use many threads producing at the same time.
2013/3/20 Jun Rao
> How many producer instances do you have? Can you reproduce the problem with
> a single producer?
>
> Thanks,
>
> Jun
>
> On Wed, Mar 20, 2013 at 12:29 AM, 王国栋 wrote:
>
> > Hi Jun,
> >
> > we do not use any
There was only one producer running in all our tests. Besides, we also tried
the low-level Java API, but the problem still shows up.
Thanks
2013/3/20 Jun Rao
> How many producer instances do you have? Can you reproduce the problem with
> a single producer?
>
> Thanks,
>
> Jun
>
> On Wed, Mar 20, 20
Okay, how do we do this logistically? I've taken the Producer code that I
wrote for testing purposes and wrote a description around it. How do I get
it to you guys?
SimpleConsumer is going to take a little longer since my test Consumers
are non-trivial and I'll need to simplify them.
Thanks,
Chr
Ok,
I programmatically configure my Kafka producers, but essentially configure
them by passing a set of config properties, as if specified in a .properties
file.
I'll think about trying this, seems like it might just work.
Jason
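A minimal sketch of what "programmatically configure, as if from a .properties file" can look like. The property names are illustrative 0.7-era producer settings, not a definitive list; the broker address and serializer are assumptions for the example:

```java
import java.util.Properties;

public class ProducerSettings {
    // Build the same key/value pairs a producer.properties file would hold.
    // Names follow the Kafka 0.7-era producer config and are illustrative;
    // check the config reference for your actual version.
    // These props would then be passed to new ProducerConfig(props).
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181");   // discover brokers via ZooKeeper
        props.put("producer.type", "sync");          // or "async" for background batching
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("producer.type"));
    }
}
```

The same `Properties` object can equally be loaded from a file with `Properties.load`, which is why the two configuration styles are interchangeable.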
On Wed, Mar 20, 2013 at 12:16 PM, Philip O'Toole wrote:
> On Wed,
On Wed, Mar 20, 2013 at 12:06 PM, Jason Rosenberg wrote:
> On Wed, Mar 20, 2013 at 12:00 PM, Philip O'Toole wrote:
>
>> > For
>> > producers, also, you can't really use a load-balancer to connect to
>> brokers
>> > (you can use zk, or you can use a broker list, in 0.7.2, and in 0.8, you
>> > can
On Wed, Mar 20, 2013 at 12:00 PM, Philip O'Toole wrote:
> > For
> > producers, also, you can't really use a load-balancer to connect to
> brokers
> > (you can use zk, or you can use a broker list, in 0.7.2, and in 0.8, you
> > can use an LB for the initial meta data connection, but then you still
> No worries Philip, I'll assume you misspoke at first when talking about
> a load-balancer between the consumers and brokers. Kafka, unfortunately,
> doesn't allow consumers to connect to kafka via a load balancer.
Ah yes, I misspoke. I meant an LB between Producers and Kafka Brokers.
> For
Fantastic!
Who is the target audience for the book: 0.7.2 users migrating to 0.8, or
potential users of 0.8? Let me suggest a second topic catering to the latter
audience: getting from a dev environment (as described in the quickstart
doc) to a production environment, with Zk/brokers/etc. deployed on
On Wed, Mar 20, 2013 at 11:10 AM, Philip O'Toole wrote:
> On Wed, Mar 20, 2013 at 10:55 AM, Jason Rosenberg
> wrote:
>
> > On Wed, Mar 20, 2013 at 9:06 AM, Philip O'Toole
> wrote:
> >
> >> On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg
> wrote:
> >> > I think might be cool, would be to have
On Wed, Mar 20, 2013 at 10:55 AM, Jason Rosenberg wrote:
> On Wed, Mar 20, 2013 at 9:06 AM, Philip O'Toole wrote:
>
>> On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg wrote:
>> > I think it might be cool to have a feature whereby you can tell a
>> > broker to stop accepting new data pr
We are seeing some odd socket timeouts from one of our producers. This
producer fans out data from one topic into dozens or hundreds of potential
output topics. We batch the sends to write 1,000 messages at a time.
The odd thing is that the timeouts are happening in the socket read, so I
assume
On Wed, Mar 20, 2013 at 9:06 AM, Philip O'Toole wrote:
> On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg wrote:
> > I think it might be cool to have a feature whereby you can tell a
> > broker to stop accepting new data produced to it, but still allow
> consumers
> > to consume from it.
The latest version of 0.8 can be found in the 0.8 branch, not trunk.
Thanks,
Jun
On Wed, Mar 20, 2013 at 7:47 AM, Jason Huang wrote:
> The 0.8 version I use was built from trunk last Dec. Since then, this
> error happened 3 times. Each time we had to remove all the ZK and
> Kafka log data and
On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg wrote:
> I think it might be cool to have a feature whereby you can tell a
> broker to stop accepting new data produced to it, but still allow consumers
> to consume from it.
>
> That way, you can roll out new brokers to a cluster, turn off
Makes sense. This is a non-issue in 0.8; see this -
https://issues.apache.org/jira/browse/KAFKA-155#comment-13607766
Thanks,
Neha
On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg wrote:
> I think it might be cool to have a feature whereby you can tell a
> broker to stop accepting new d
I think it might be cool to have a feature whereby you can tell a
broker to stop accepting new data produced to it, but still allow consumers
to consume from it.
That way, you can roll out new brokers to a cluster, turn off producing to
the old nodes, then wait for the log retention period
From the consumer logs, it seems senseidb is interrupting the simple
consumer thread. This causes the socket to close, which then shows up as a
broken pipe on the server. I don't know senseidb well enough to say whether
this thread interruption makes sense. But there are better ways to close
the consumer properly.
The 0.8 version I use was built from trunk last Dec. Since then, this
error happened 3 times. Each time we had to remove all the ZK and
Kafka log data and restart the services.
I will try newer versions with more recent patches and keep monitoring it.
thanks!
Jason
On Wed, Mar 20, 2013 at 10:39
Ok, so you are using the same broker id. What the error is saying is that
broker 1 doesn't seem to be up.
Not sure what revision of 0.8 you are using. Could you try the latest
revision in 0.8 and see if the problem still exists? You may have to wipe
out all ZK and Kafka data first since some ZK da
How many producer instances do you have? Can you reproduce the problem with
a single producer?
Thanks,
Jun
On Wed, Mar 20, 2013 at 12:29 AM, 王国栋 wrote:
> Hi Jun,
>
> we do not use any compression in our test.
>
> We deploy producer and broker in the same machine. The problem still
> exists. We
I restarted the ZooKeeper server first, then the broker. It's the same
instance of Kafka 0.8 and I am using the same config file. In
server.properties I have: brokerid=1
Is that sufficient to ensure the broker gets restarted with the same
broker id as before?
thanks,
Jason
On Wed, Mar 20, 2013 at 12
The ZooKeeper connection URL with a namespace can be
zkhost1:123,zkhost2:123/newnamespace
The wiki is up to date for Kafka 0.7.2. There is no officially supported
feature to do that sort of migration; I suggested one approach that I could
think of :-)
Thanks,
Neha
On Tuesday, March 19, 2013, Jason
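Spelled out as client config, the chroot'ed connection string above would look like the following. The property name is the 0.7-era one and is illustrative; note the namespace path applies once at the end of the whole host list (not per host) and must already exist in ZooKeeper:

```properties
# ZooKeeper connect string with a chroot namespace (0.7-era property name)
zk.connect=zkhost1:123,zkhost2:123/newnamespace
```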
Hi Neha,
Our test is reproducible. We use a log replay tool to send the same part of
the log to Kafka.
We will check the log file later and try to find some more hints.
So far, it seems that if the batch size in the sync producer is small, the
problem is hard to reproduce.
Thanks.
On Tue, Mar 19,
Hi Jun,
we do not use any compression in our test.
We deploy the producer and broker on the same machine. The problem still
exists. We use a sync producer, and send one message at a time (no batching now).
We find that when the QPS reaches more than 40k, the exception appears. So
I don't think it's the und
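For context, the knobs being varied across these tests (sync vs. async, compression, batching) map to producer properties roughly like the following. Names and values follow the 0.7-era configs and are illustrative, not authoritative:

```properties
# Illustrative 0.7-era producer settings for the knobs discussed in this thread
producer.type=sync            # sync: one request per send(); async: background batching
compression.codec=0           # 0 = none (as in this test), 1 = gzip
batch.size=200                # async mode only: messages per batch
queue.time=5000               # async mode only: max ms to buffer before sending
```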