Kafka Consumer does not receive any message after a while

2013-12-11 Thread shahab
fetching (consumer) part, right? best, /Shahab Kafka is run on one machine, no clusters, replication, etc., a very basic configuration. The consumer config is: "zookeeper.connect", myserver:2181); "group.id", group1); "zookeeper.session.timeout.ms",

why log files are never deleted?

2013-12-12 Thread shahab
Hi, I just wonder why the log files in {kafka_path}/log are not deleted automatically? Is there any way to purge those files? Also, is there any way to purge the Kafka queue (make it empty) without having to consume it or know the last fetched offset? best, /Shahab

Re: Kafka Consumer does not receive any message after a while

2013-12-12 Thread shahab
Thanks a lot, very good hints. I am trying to see what happened in my case. best, /Shahab On Wed, Dec 11, 2013 at 5:16 PM, Jun Rao wrote: > Have you looked at > > https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped%2Cwhy%3F > ? > > Thanks

Re: why log files are never deleted?

2013-12-13 Thread shahab
Thanks Jun, I already set the retention policy to 1 hour and the size to 10 M, but it didn't work; logs still pile up in the "logs/" folder. Maybe I am missing something. best, /Shahab On Thu, Dec 12, 2013 at 4:57 PM, Jun Rao wrote: > Log deletion is controlled by a retention po
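An editorial note on this thread, not from the original exchange: Kafka keeps two unrelated kinds of "logs". The topic data segments live under the directory set by `log.dirs` in `server.properties`, and retention applies only to those; the files under `{kafka_path}/logs` are the broker's log4j application logs, which retention settings never touch, so piled-up files there may be the wrong place to look. A minimal retention setup might look like the sketch below (values illustrative; verify the property names against your broker version's docs):

```properties
# server.properties — where topic data segments are stored (retention applies here)
log.dirs=/tmp/kafka-logs

# Delete segments older than 1 hour...
log.retention.hours=1
# ...or once a partition exceeds ~10 MB, whichever comes first (per partition)
log.retention.bytes=10485760
```

Retention is only enforced on closed segments, so on a low-volume topic nothing is deleted until the active segment rolls; lowering `log.segment.bytes` makes small topics actually hit the limits.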

How to read in batch using HighLevel Consumer?

2015-08-04 Thread shahab
. for example, read 100 items at once! Is this a correct observation, or am I missing something? best, /Shahab

Re: How to read in batch using HighLevel Consumer?

2015-08-04 Thread shahab
Thanks a lot Shaminder for clarification and thanks Raja for pointing me to the example. best, /shahab On Tue, Aug 4, 2015 at 6:06 PM, Rajasekar Elango wrote: > Here is an example on what sharninder suggested > > http://ingest.tips/2014/10/12/kafka-high-level-consumer-frequently-missi

Re: How to read in batch using HighLevel Consumer?

2015-08-05 Thread shahab
I just wonder if it is possible to read in batches using SimpleConsumer instead of the high-level consumer? Does the same principle apply to the low-level consumer (i.e. SimpleConsumer)? best, /Shahab On Tue, Aug 4, 2015 at 9:10 PM, Gwen Shapira wrote: > To add some internals, the high level consu
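For later readers on the newer Java consumer (0.9+), the batching this thread asks about is built in: `poll()` returns a whole `ConsumerRecords` batch, and `max.poll.records` (0.10+) caps its size. A minimal sketch; the broker address, group id, and topic name are assumptions, and the broker-dependent calls are left as comments so the fragment stands alone:

```java
import java.util.Properties;

// Consumer config for batch reads with the new Java consumer.
// "max.poll.records" caps how many records a single poll() may return.
public class BatchReadSketch {
    static Properties consumerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "group1");                  // assumed group
        props.put("max.poll.records", "100");             // at most 100 records per poll()
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerConfig().getProperty("max.poll.records"));
        // With kafka-clients on the classpath and a running broker, roughly:
        // KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig());
        // consumer.subscribe(Collections.singletonList("XYZ"));
        // ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
    }
}
```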

How to read messages from Kafka by specific time?

2015-08-05 Thread shahab
y one know how to do this? I do appreciate it. best, /Shahab

Re: How to read messages from Kafka by specific time?

2015-08-11 Thread shahab
Thanks Ewen for the clarification. I will test this. best, /Shahab On Mon, Aug 10, 2015 at 9:03 PM, Ewen Cheslack-Postava wrote: > You can use SimpleConsumer.getOffsetsBefore to get a list of offsets before > a Unix timestamp. However, this isn't per-message. The offsets returne
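The lookup semantics behind Ewen's answer (largest known offset whose timestamp is at or before the target) can be sketched in plain Java, independent of Kafka. The timestamp-to-offset index here is a stand-in for what `getOffsetsBefore` (or `offsetsForTimes` on 0.10.1+) consults; it is coarse, roughly per segment, which is why the result is not per-message:

```java
import java.util.Map;
import java.util.TreeMap;

// Pure sketch of "offset before a timestamp". The index maps a coarse
// (e.g. per-segment) timestamp to the first offset at/after it; the lookup
// returns the offset of the latest entry not after the target time.
public class OffsetByTimeSketch {
    static Long offsetBefore(TreeMap<Long, Long> index, long targetTimestampMs) {
        Map.Entry<Long, Long> e = index.floorEntry(targetTimestampMs);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        TreeMap<Long, Long> index = new TreeMap<>();
        index.put(1000L, 0L);    // segment starting at t=1000 begins at offset 0
        index.put(2000L, 500L);  // segment starting at t=2000 begins at offset 500
        System.out.println(offsetBefore(index, 2500L)); // 500
    }
}
```

Because the index is segment-granular, the returned offset can be much earlier than the first message after the timestamp, matching the "this isn't per-message" caveat in the thread.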

How to sync back a failed broker?

2015-08-11 Thread shahab
ro is back. It seems it is not in sync with the leader, and in fact it never got back in sync. Now the question is how to make the first broker in sync again, so that it appears in the "isr" list and also becomes leader for one of the partitions? best, /Shahab

broker.id does not work still showing 0,1 while it was set to 7,8

2015-08-11 Thread shahab
but it did not change. Does anyone know how to resolve this? best, /Shahab
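One plausible cause, an assumption not confirmed in the thread: each Kafka data directory carries a `meta.properties` file recording the `broker.id` it was first started with, so editing `server.properties` alone is not enough. Depending on the version, a mismatch either surfaces as the old id or as a startup error (InconsistentBrokerIdException in later releases). The two files to compare side by side:

```properties
# server.properties (the file you edited)
broker.id=7

# <log.dirs>/meta.properties (written by the broker on first startup; must agree)
version=0
broker.id=0
```

If the stored id is stale, fixing or removing `meta.properties` in the data directory (with the broker stopped) is the usual remedy; verify against your version's docs before deleting anything.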

Re: To Synch a failed broker?

2015-08-14 Thread shahab
Just to answer my own question: I found the source of the problem. It was because the brokers couldn't communicate with each other. By opening the TCP ports (in my case, the EC2 security policy) the problem was solved. On Wed, Aug 12, 2015 at 4:32 PM, shahab wrote: > Sorry for posting the ema

why does producer fail and does not try other brokers when one of brokers in the cluster fails?

2015-08-17 Thread shahab
Hi, I have a kafka cluster consisting of two servers. I created a topic XYZ with 3 partitions and a replication factor of 2. On the producer side, the producer is configured with a broker list containing both brokers, broker0 and broker1. Topic:XYZ PartitionCount:3 ReplicationFactor:2 Configs: Topic: Replica
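A note for later readers, not from the thread itself: with a replicated topic, the newer Java producer fails over by refreshing metadata from any live broker in `bootstrap.servers`, but only if its settings allow a retry — with `retries=0` (an old default) a send in flight to the dead leader simply fails. A hedged config sketch, with the host names assumed:

```java
import java.util.Properties;

// Producer settings that let a send survive a single broker failure:
// both brokers in the bootstrap list, acknowledgement from the in-sync
// replicas, and retries > 0 so the client re-sends after fetching fresh
// metadata for the newly elected leader.
public class FailoverProducerConfig {
    static Properties producerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker0:9092,broker1:9092"); // assumed hosts
        props.put("acks", "all");   // wait for all in-sync replicas
        props.put("retries", "3");  // retry after a leader change
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerConfig().getProperty("retries"));
    }
}
```

With replication factor 2 and the default `min.insync.replicas=1`, `acks=all` still succeeds while one broker is down, since the surviving replica alone satisfies the in-sync set.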

Example of "Offset Commit" using SimpleConsumer API?

2015-08-28 Thread shahab
Hi, I would appreciate it if someone could point me to a Java example showing how one can implement offset commit using the SimpleConsumer API? I have not found any! best, /Shahab
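The SimpleConsumer-era request classes are easy to misquote, so here is the one invariant that trips people up in either API, plus the modern-consumer commit left as a comment so the fragment stands alone: the committed offset is the *next* offset to read, i.e. last processed + 1. On the newer Java consumer (0.9+) the commit call itself is `commitSync(Map<TopicPartition, OffsetAndMetadata>)`.

```java
// Sketch of manual offset commits. Committing lastProcessed itself would
// re-deliver that record on restart; the convention is to commit the offset
// of the NEXT record to consume.
public class OffsetCommitSketch {
    static long nextOffsetToCommit(long lastProcessedOffset) {
        return lastProcessedOffset + 1; // commit points at the next record
    }

    public static void main(String[] args) {
        long lastProcessed = 41L;
        System.out.println(nextOffsetToCommit(lastProcessed)); // 42
        // With kafka-clients on the classpath, roughly:
        // Map<TopicPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
        //         new TopicPartition("XYZ", 0),                       // assumed topic
        //         new OffsetAndMetadata(nextOffsetToCommit(lastProcessed)));
        // consumer.commitSync(offsets);
    }
}
```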

How to monitor lag when "kafka" is used as offset.storage?

2015-09-02 Thread shahab
Hi, I wonder how we can monitor lag (the difference between the consumer offset and the log end offset) when "kafka" is set as offset.storage? Because "kafka-run-class.sh kafka.tools.ConsumerOffsetChecker ..." only works when zookeeper is used as the storage manager. best, /Shahab
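Whatever tool reads the offsets, the lag arithmetic itself is simple: per partition, lag = log end offset minus committed offset. The sketch below does that computation on plain maps; with the Java consumer one would fill them from `consumer.endOffsets(...)` and the committed-offset lookup, calls left out here so the fragment stands alone:

```java
import java.util.HashMap;
import java.util.Map;

// Per-partition lag: how far the committed position trails the log end.
// Keys are partition numbers; values are offsets.
public class LagSketch {
    static Map<Integer, Long> lagPerPartition(Map<Integer, Long> endOffsets,
                                              Map<Integer, Long> committed) {
        Map<Integer, Long> lag = new HashMap<>();
        for (Map.Entry<Integer, Long> e : endOffsets.entrySet()) {
            long c = committed.getOrDefault(e.getKey(), 0L); // no commit yet => from 0
            lag.put(e.getKey(), e.getValue() - c);
        }
        return lag;
    }

    public static void main(String[] args) {
        Map<Integer, Long> end = new HashMap<>();
        end.put(0, 150L);
        end.put(1, 80L);
        Map<Integer, Long> committed = new HashMap<>();
        committed.put(0, 120L);
        committed.put(1, 80L);
        System.out.println(lagPerPartition(end, committed)); // {0=30, 1=0}
    }
}
```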

Re: How to monitor lag when "kafka" is used as offset.storage?

2015-09-02 Thread shahab
Thanks Noah. I installed Burrow and played with it a little. It seems, as you pointed out, that I need to implement the alerting system myself. Do you know any other Kafka tools that can give alerts? best, /Shahab On Wed, Sep 2, 2015 at 1:44 PM, noah wrote: > We use Burrow <https://gith

Is there any way to find out whether "kafka" is used as offset storage or "zookeeper"

2015-09-03 Thread shahab
ther "kafka" is used as offset storage or "zookeeper" ? best, /Shahab

KafkaHighLevel consumer in java returns topics which are removed before?

2016-06-10 Thread shahab
StringDeserializer"); Map> topics = new KafkaConsumer<>(props ).listTopics(); System.out.println (topics); best, Shahab

KafkaStream: punctuate() never called even when data is received by process()

2016-11-23 Thread shahab
coming to the topology (as I have logged the incoming tuples in process() ), punctuate() is never executed. What am I missing? best, Shahab

Re: KafkaStream: punctuate() never called even when data is received by process()

2016-11-23 Thread shahab
ctuate is not triggered based on wall-clock time, but based on the > internally tracked "stream time" that is derived from the > TimestampExtractor. > Even if you use WallclockTimestampExtractor, "stream time" is only > advanced if there are input records. > > N
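The "stream time" behaviour described in the reply can be demonstrated without Kafka Streams at all. The pure sketch below tracks stream time as the maximum record timestamp seen so far and fires a punctuation each time it crosses a schedule boundary, which is why punctuate() never fires when no records arrive, and why a timestamp jump can fire several punctuations at once:

```java
import java.util.ArrayList;
import java.util.List;

// Pure model of Kafka Streams "stream time" punctuation: time advances only
// via record timestamps, never via the wall clock.
public class StreamTimeSketch {
    long streamTime = 0;                // max record timestamp seen so far
    long nextPunctuate;                 // next scheduled punctuation time
    final long interval;
    final List<Long> punctuations = new ArrayList<>();

    StreamTimeSketch(long intervalMs) {
        interval = intervalMs;
        nextPunctuate = intervalMs;
    }

    void process(long recordTimestamp) {
        streamTime = Math.max(streamTime, recordTimestamp);
        while (streamTime >= nextPunctuate) {   // advances only via records
            punctuations.add(nextPunctuate);
            nextPunctuate += interval;
        }
    }

    public static void main(String[] args) {
        StreamTimeSketch s = new StreamTimeSketch(1000);
        s.process(500);   // stream time 500 -> no punctuation yet
        s.process(2500);  // jumps past 1000 and 2000 -> two punctuations at once
        System.out.println(s.punctuations); // [1000, 2000]
    }
}
```

Later Streams releases (0.11+/1.0) added `PunctuationType.WALL_CLOCK_TIME` to `ProcessorContext.schedule(...)` precisely to opt out of this record-driven behaviour.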

internals.AbstractCoordinator {} - Marking the coordinator ip-xyz:9092 (id: 2144 rack: null) dead for group MyTopic

2016-12-09 Thread shahab
Does anyone know what the source of this issue is? I played with CONNECTIONS_MAX_IDLE_MS_CONFIG in the consumer-side kafka configuration and it didn't affect the results. best, Shahab Here are the related logs I found on the consumer side: 2016-12-08 20:41:12.559 INFO internals.AbstractCoordi

internals.AbstractCoordinator {} - Marking the coordinator ip-xyz:9092 (id: 2144 rack: null) dead for group

2016-12-12 Thread shahab
sumer side logs: Marking the coordinator ip-XYZ:9092 (id: 2147482644 rack: null) dead for group MyGroup Does anyone know what the source of this issue is? I played with CONNECTIONS_MAX_IDLE_MS_CONFIG in the consumer-side kafka configuration and it didn't affect the results. best, Shahab H
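One frequently reported cause of "marking the coordinator dead", offered here as an assumption to verify rather than a diagnosis of this thread: the broker silently closes idle connections after its own `connections.max.idle.ms` (default 10 minutes), so a client whose idle timeout is longer than the broker's finds the connection half-closed and declares the coordinator dead. The settings involved, with the usual defaults:

```properties
# Consumer side: keep the client's idle timeout BELOW the broker's so the
# client closes idle connections first
connections.max.idle.ms=540000       # client default (9 min)
session.timeout.ms=30000
heartbeat.interval.ms=10000

# Broker side (server.properties): default idle-connection reaper
connections.max.idle.ms=600000       # 10 min
```

Intermediate load balancers or NAT gateways with their own idle timeouts (common on EC2) can produce the same symptom even when both Kafka settings are sane.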

How to set Group Id in SimpleConsumer

2014-02-19 Thread shahab
. clientName) Maybe I did something wrong, but I ran two consumers with the same "clientName" and both consumers still received exactly the same amount of data, and the same data, from Kafka, while it is supposed that the data is divided between these two consumers (due to load balancing)! best, /Shahab

Re: How to set Group Id in SimpleConsumer

2014-02-20 Thread shahab
Thanks a lot Guozhang. Very helpful comment. best, /Shahab On Wed, Feb 19, 2014 at 5:46 PM, Guozhang Wang wrote: > Group management like load balancing only exists in high level consumers; > SimpleConsumer does not have the group id setting since it does not have > group managemen
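Guozhang's answer in code form for later readers: group management is what divides partitions among consumers, and SimpleConsumer has none of it, so two SimpleConsumers each read everything. The pure sketch below shows the kind of assignment a group coordinator produces (round-robin here, purely illustrative), which is what two group-managed consumers sharing a `group.id` would get:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Pure sketch of group-managed partition assignment: partitions are spread
// over the members of one group, so consumers share the data instead of
// each reading all of it (the SimpleConsumer behaviour seen in the thread).
public class AssignmentSketch {
    static Map<String, List<Integer>> assign(List<String> members, int partitions) {
        Map<String, List<Integer>> out = new LinkedHashMap<>();
        for (String m : members) out.put(m, new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            out.get(members.get(p % members.size())).add(p); // round-robin
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(assign(Arrays.asList("c1", "c2"), 3)); // {c1=[0, 2], c2=[1]}
    }
}
```

With the newer Java consumer (0.9+), setting the same `group.id` on both processes and calling `subscribe(...)` triggers exactly this kind of split automatically, with rebalancing when members join or leave.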