Hi Guozhang,
I have just one producer client per producer machine.
The producer is in singleton scope.
Is there a way to close producer sockets by force, or to use a producer
socket pool?
--
Arya
On Thu, Apr 10, 2014 at 11:38 AM, Guozhang Wang wrote:
> Hello Arya,
>
> The broker
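For reference, in the 0.8 producer API the only supported way to force the
underlying sockets closed is Producer.close(); there is no public socket
pool. A minimal sketch of a per-JVM singleton producer (the broker address
and class name are placeholders, not from this thread):

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SingletonProducer {
    // One producer per JVM; the client manages its connections internally.
    private static final Producer<String, String> PRODUCER = create();

    private static Producer<String, String> create() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // placeholder
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        return new Producer<String, String>(new ProducerConfig(props));
    }

    public static void send(String topic, String message) {
        PRODUCER.send(new KeyedMessage<String, String>(topic, message));
    }

    public static void shutdown() {
        // close() is the only way to force the sockets to be released.
        PRODUCER.close();
    }
}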
A doubt regarding the same thing: if we are using the high-level
consumer and auto commit of offsets is set to false, and we are not
committing the offsets for some reason, how many messages can the consumer
still read?
I mean, if there are 100 messages on the kafka server, and I just
started th
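For context, a minimal sketch of the 0.8 high-level consumer with auto
commit disabled and an explicit commit. Until commitOffsets() is called the
consumed position lives only in memory, so the consumer keeps reading, but a
restart resumes from the last committed offset (ZooKeeper address, group id,
and topic are placeholders):

import java.util.Collections;
import java.util.List;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181"); // placeholder
        props.put("group.id", "group1");               // placeholder
        props.put("auto.commit.enable", "false");      // commit ourselves

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream for one topic; the iterator blocks until data arrives.
        List<KafkaStream<byte[], byte[]>> streams = connector
            .createMessageStreams(Collections.singletonMap("testtopic", 1))
            .get("testtopic");

        ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();
        while (it.hasNext()) {
            byte[] payload = it.next().message();
            // process(payload) ...
            // Without this call the position is only tracked in memory,
            // so a restart rereads everything since the last commit.
            connector.commitOffsets();
        }
    }
}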
I am using the latest version of kafka.
package kafka.test;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.javaapi.producer.Producer;
import k
Hi,
We have a 3-node kafka 0.8 setup with a ZooKeeper ensemble. We use the
high-level consumer with auto commit of offsets set to false. I am facing a
peculiar problem with kafka. When I send some 10-20 messages or so, the
consumer starts to consume the messages. But if I send only one message to
kafka, the
Could you re-format the code in your email? It is hard to read in my
browser.
Guozhang
On Thu, Apr 10, 2014 at 6:58 AM, Khalef Bessaih
wrote:
> I am using the latest version of kafka.
> package kafka.test;
> import kafka.consumer.Consumer;
> import kafka.consumer.ConsumerConfig;
> import kafka
Hi Arjun,
Did you manually create the topic or use auto.topic.creation?
Guozhang
On Thu, Apr 10, 2014 at 7:39 AM, Arjun wrote:
> Hi,
>
> We have a 3-node kafka 0.8 setup with a ZooKeeper ensemble. We use the
> high-level consumer with auto commit of offsets set to false. I am facing a
> peculiar problem wit
We will be using Kafka as the log message transport layer. Logs have a
specific format:
TIMESTAMP HOSTNAME ENVIRONMENT DATACENTER MESSAGE_TYPE APPLICATION_ID PAYLOAD
There are two types of messages, *HEARTBEAT* and *APPLICATION_LOG*.
So we have created two topics, *HEARTBEATS* AND
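A rough sketch of how a producer might assemble such a line and route it by
message type; the tab separator and the APPLICATION_LOGS topic name are
assumptions, since the message above is cut off:

import kafka.producer.KeyedMessage;

public class LogLineBuilder {
    // Layout: TIMESTAMP HOSTNAME ENVIRONMENT DATACENTER MESSAGE_TYPE
    // APPLICATION_ID PAYLOAD (the tab separator is an assumption).
    public static KeyedMessage<String, String> build(
            long timestamp, String hostname, String environment,
            String datacenter, String messageType, String applicationId,
            String payload) {
        String line = timestamp + "\t" + hostname + "\t" + environment + "\t"
                + datacenter + "\t" + messageType + "\t" + applicationId
                + "\t" + payload;
        // HEARTBEAT messages go to the HEARTBEATS topic; the topic name for
        // APPLICATION_LOG messages is assumed, as the original mail is
        // truncated.
        String topic = "HEARTBEAT".equals(messageType)
                ? "HEARTBEATS" : "APPLICATION_LOGS";
        return new KeyedMessage<String, String>(topic, line);
    }
}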
It's auto-created, but even after topic creation this is the scenario.
Arjun
On Thursday 10 April 2014 08:41 PM, Guozhang Wang wrote:
Hi Arjun,
Did you manually create the topic or use auto.topic.creation?
Guozhang
On Thu, Apr 10, 2014 at 7:39 AM, Arjun wrote:
Hi,
We have a 3-node kafka 0.8 s
It could be https://issues.apache.org/jira/browse/KAFKA-1006.
Guozhang
On Thu, Apr 10, 2014 at 8:50 AM, Arjun wrote:
> It's auto-created, but even after topic creation this is the scenario.
>
> Arjun
>
> On Thursday 10 April 2014 08:41 PM, Guozhang Wang wrote:
>
>> Hi Arjun,
>>
>> Did you manua
But we have auto.offset.reset set to smallest, not largest; does this issue
arise even then? If so, is there any workaround?
Thanks
Arjun Narasimha Kota
On Thursday 10 April 2014 09:39 PM, Guozhang Wang wrote:
It could be https://issues.apache.org/jira/browse/KAFKA-1006.
Guozhang
On Thu, Apr 10, 20
>>> I have set up kafka 0.8 in 3 servers. I have pushed some data into these
>>> servers. The number of partitions i use is 12, with a replication factor of
>>> 2.
Are you running multiple broker instances on a single server?
Or are your 12 partitions spread over multiple topics?
I thought you should not
Hi,
The API description at http://kafka.apache.org/documentation.html#api is
rather thin when you are used to the API docs of other Apache projects
such as Hadoop, Cassandra, and Tomcat.
Is there a comprehensive API description somewhere (like javadocs)?
Besides looking at the source code
Hi Arjun,
If you only send one message, does that message show up on the server? Does
your consumer use wildcard topics or specific topics?
Guozhang
On Thu, Apr 10, 2014 at 9:20 AM, Arjun wrote:
> But we have auto.offset.reset set to smallest, not largest; does this
> issue arise even then? If so, is t
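For reference, the 0.8 high-level consumer supports both subscription
styles; a brief sketch (the topic name and regex are placeholders, and the
connector setup is omitted):

import java.util.Collections;
import java.util.List;

import kafka.consumer.KafkaStream;
import kafka.consumer.Whitelist;
import kafka.javaapi.consumer.ConsumerConnector;

public class SubscriptionModes {
    // Specific topics: ask for named topics with a stream count for each.
    static List<KafkaStream<byte[], byte[]>> specific(ConsumerConnector c) {
        return c.createMessageStreams(Collections.singletonMap("testtopic", 1))
                .get("testtopic");
    }

    // Wildcard topics: subscribe to every topic matching a regex.
    static List<KafkaStream<byte[], byte[]>> wildcard(ConsumerConnector c) {
        return c.createMessageStreamsByFilter(new Whitelist("test.*"), 1);
    }
}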
Magnus, this worked in our scripts perfectly, thanks a bunch!!
--Ian
On Apr 9, 2014, at 8:39 PM, Magnus Edenhill wrote:
> Hey Ian,
>
> this is where a tool like kafkacat comes in handy, it will use a random
> partitioner by default (without the need for defining a key):
>
> tail -f /my/log |
When you see this happening (on broker 4 in this instance), can you
confirm the Kafka process handle limit?
cat /proc/<pid>/limits
On Thu, Apr 10, 2014 at 09:20:51AM +0530, Arya Ketan wrote:
> *Issue:* Kafka cluster goes to an unresponsive state after some time, with
> producers getting Socket time-o
You don't need to match the number of partitions. Your nine consumers
should distribute those 12 partitions amongst themselves. Are you by
any chance consuming across a high latency link? Can you see if there
are rebalance failures in your consumer logs?
On Thu, Apr 10, 2014 at 12:27:39PM +0530, A
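A sketch of what that looks like in code: if each of the 3 consumer
processes opens 3 streams for the topic, the group has 9 members for 12
partitions, so three of the streams simply own two partitions each (the
topic name and counts are illustrative):

import java.util.Collections;
import java.util.List;
import java.util.Map;

import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class StreamsPerProcess {
    // Run in each of the 3 consumer processes: 3 streams per process makes
    // 9 group members for 12 partitions, so three streams own two
    // partitions and the rest own one; no stream is left idle.
    static List<KafkaStream<byte[], byte[]>> open(ConsumerConnector connector) {
        Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector
            .createMessageStreams(Collections.singletonMap("testtopic", 3));
        return streams.get("testtopic");
    }
}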
I have 12 partitions on 3 different nodes.
On Apr 10, 2014 11:41 PM, "Maung Than" wrote:
>
> >>> I have set up kafka 0.8 in 3 servers. I have pushed some data into
> these servers. The number of partitions i use is 12, with a replication
> factor of 2.
>
> Are you running multiple broker instance
Yes the message shows up on the server.
On Apr 11, 2014 12:07 AM, "Guozhang Wang" wrote:
> Hi Arjun,
>
> If you only send one message, does that message show up on the server? Does
> your consumer use wildcard topics or specific topics?
>
> Guozhang
>
>
> On Thu, Apr 10, 2014 at 9:20 AM, Arjun wr
The consumer does use specific topics.
On Apr 11, 2014 6:23 AM, "Arjun Kota" wrote:
> Yes the message shows up on the server.
> On Apr 11, 2014 12:07 AM, "Guozhang Wang" wrote:
>
>> Hi Arjun,
>>
>> If you only send one message, does that message show up on the server?
>> Does
>> your consumer use
I see that the state-change logs have warning messages of this kind (Broker
7 is on the 0.8.1 API and this is a log snippet from that broker):
s associated leader epoch 11 is old. Current leader epoch is 11
(state.change.logger)
[2014-04-09 10:32:21,974] WARN Broker 7 ignoring LeaderAndIsr request fr
By the way, we may have found the issue.
Going through the thread dump, we found that 4-5 threads were blocked on
log4j callAppenders and 2-3 threads were in IN_NATIVE state while trying to
write logs to disk. The network threads were therefore blocked on the log4j
threads, thus hanging the kaf
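If synchronous appenders turn out to be the bottleneck, one common
mitigation (an assumption here, not something confirmed in this thread) is
to wrap the existing log4j appenders in an AsyncAppender, so callers enqueue
events instead of blocking on disk I/O:

import java.util.Enumeration;

import org.apache.log4j.Appender;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.Logger;

public class AsyncLogging {
    // Re-attach every root appender behind an AsyncAppender. Logging calls
    // then hand events to a background thread instead of writing inline.
    @SuppressWarnings("unchecked")
    public static void install() {
        Logger root = Logger.getRootLogger();
        AsyncAppender async = new AsyncAppender();
        async.setBufferSize(8192); // events buffered before callers block
        async.setBlocking(false);  // drop on overflow rather than stall

        Enumeration<Appender> appenders = root.getAllAppenders();
        while (appenders.hasMoreElements()) {
            async.addAppender(appenders.nextElement());
        }
        root.removeAllAppenders();
        root.addAppender(async);
    }
}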
This doesn't happen only when the topic is newly created; it happens
even if the topic has a lot of messages but all of them have been consumed
by the consumer. Now if you add just one message, the consumer will not
fetch it; but if in that scenario we add more than 10 messages, things work
fine. (10 is just an arbi
We don't have good java docs right now. We are rewriting both the producer
and the consumer and will have better javadoc then. For now, following the
examples in the documentation is probably your best option.
Thanks,
Jun
On Thu, Apr 10, 2014 at 11:43 AM, Manoj Khangaonkar
wrote:
> Hi,
>
> The
One should be able to upgrade from 0.8 to 0.8.1 one broker at a time
online. There are some corner cases that we are trying to patch in 0.8.1.1,
which will be released soon.
As for your issue, not sure what happened. Do you see any ZK session
expirations in the broker log?
Thanks,
Jun
On Thu,
Are you using the high-level consumer? How did you set fetch.wait.max.ms and
fetch.min.bytes?
Thanks,
Jun
On Thu, Apr 10, 2014 at 8:13 PM, Arjun wrote:
> This doesn't happen only when the topic is newly created; it happens even
> if the topic has a lot of messages but all of them have been consumed by
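For reference, the two settings interact: a fetch request returns as soon as
fetch.min.bytes of data is available, and after fetch.wait.max.ms at the
latest. A sketch of a consumer config with the defaults spelled out
(ZooKeeper address and group id are placeholders):

import java.util.Properties;

import kafka.consumer.ConsumerConfig;

public class FetchTuning {
    public static ConsumerConfig config() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181"); // placeholder
        props.put("group.id", "group1");               // placeholder
        // Return a fetch as soon as a single byte is available...
        props.put("fetch.min.bytes", "1");
        // ...but wait at most 100 ms when the log has nothing new.
        props.put("fetch.wait.max.ms", "100");
        return new ConsumerConfig(props);
    }
}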
Hi Manoj,
Thanks for the feedback. We agree that javadocs are a must. As Jun
mentioned, we are working on adding extensive javadocs to our new clients.
The new producer javadoc, once released in 0.8.2, will look like
http://empathybox.com/kafka-javadoc/
The new consumer javadoc, once released in 0
I hope this gives you a better idea.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1
--zkconnect zkhost:port --topic testtopic
Group    Topic      Pid  Offset  logSize  Lag  Owner
group1   testtopic  0    253
I see this in the consumer logs:
[kafka.consumer.ConsumerFetcherManager]
[ConsumerFetcherManager-1397188062631] Adding fetcher for partition
[taf.referral.emails.service,11], initOffset 250 to broker 1 with
fetcherId 0
but no data, and I get this warning:
[ConsumerFetcherThread-group1_ip-10-91-
Hi Arjun,
It seems to be the cause:
https://issues.apache.org/jira/browse/KAFKA-1016
Guozhang
On Thu, Apr 10, 2014 at 9:21 PM, Arjun wrote:
> I hope this gives you a better idea.
>
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1
> --zkconnect zkhost:port --
Hi,
I could not see any out-of-memory exceptions in the broker logs. One
thing I can see is that I may have configured the consumer poorly. If it's
not too much to ask, could you let me know the changes I have to make to
overcome this problem?
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:04 A
What I tried to say is that it may be caused by your
"fetch.wait.max.ms"="18"
being too large. Try a smaller value and see if that helps.
On Thu, Apr 10, 2014 at 9:44 PM, Arjun wrote:
> Hi,
>
> I could not see any out of memory exceptions in the broker logs. One thing
> i can see is i may have con
Hi,
From my understanding, fetch.wait.max.ms is the maximum time the
consumer waits if there are no messages on the broker. If there are
messages on the broker, it just gets all of them from the broker. Is
my understanding wrong?
thanks
Arjun Narasimha Kota
On Friday 11 April 2014
Yup, I will change the value and recheck. Thanks for the help.
thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:28 AM, Guozhang Wang wrote:
What I tried to say is that it may be caused by your
"fetch.wait.max.ms"="18"
being too large. Try a smaller value and see if that helps.
On Thu, Apr 10
I changed the time to 60 seconds, and even now I see the same result: the
consumer is not consuming the messages.
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:36 AM, Arjun wrote:
Yup, I will change the value and recheck. Thanks for the help.
thanks
Arjun Narasimha Kota
On Friday 11 April