Log compaction not working as expected

2015-06-12 Thread Shayne S
Hi, I'm new to Kafka and having trouble with log compaction. I'm attempting to set up topics that will aggressively compact, but so far I'm having trouble getting complete compaction at all. The topic is configured like so: Topic:beer_archive PartitionCount:20 ReplicationFactor:1 Configs:min.cle
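A setup like the one described might look like the following sketch, using the 0.8.2-era `kafka-topics.sh` CLI. The topic name matches the thread; the specific config values are illustrative assumptions, since the original `Configs:` line is truncated.

```shell
# Illustrative sketch: create a topic configured for aggressive compaction.
# cleanup.policy=compact enables log compaction for the topic;
# min.cleanable.dirty.ratio=0.01 makes the cleaner kick in early;
# segment.ms=60000 rolls segments often (the cleaner never touches the
# active segment, so small/frequent segments matter here).
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic beer_archive --partitions 20 --replication-factor 1 \
  --config cleanup.policy=compact \
  --config min.cleanable.dirty.ratio=0.01 \
  --config segment.ms=60000
```

Note that, as the follow-up messages in this thread discover, compaction still skips the currently active segment regardless of these settings.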

Re: Log compaction not working as expected

2015-06-16 Thread Shayne S
Some further information, and is this a bug? I'm using 0.8.2.1. Log compaction will only occur on the non-active segments. Intentional or not, it seems that the last segment is always the active segment. In other words, an expired segment will not be cleaned until a new segment has been created

Re: Log compaction not working as expected

2015-06-16 Thread Shayne S
> Manikumar > On Tue, Jun 16, 2015 at 5:35 PM, Shayne S wrote: > > Some further information, and is this a bug? I'm using 0.8.2.1. > > Log compaction will only occur on the non-active segments. Intentional or not, it seems that the las

Re: Log compaction not working as expected

2015-06-17 Thread Shayne S
On Wed, Jun 17, 2015 at 5:58 AM, Jan Filipiak wrote: > Hi, > > you might want to have a look here: > http://kafka.apache.org/documentation.html#topic-config > _segment.ms_ and _segment.bytes_ should allow you to control the > time/size when segments are rolled. > > Best
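The advice above can be applied to an existing topic with a topic-level config override. A minimal sketch using 0.8.2-era syntax (topic name and values are illustrative, not from the thread):

```shell
# Sketch: override segment roll settings on an existing topic.
# segment.ms    - roll a new segment after this many milliseconds
# segment.bytes - roll a new segment once the current one reaches this size
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic beer_archive \
  --config segment.ms=3600000 \
  --config segment.bytes=104857600
```

Whichever limit is hit first triggers the roll; once rolled, the old segment becomes eligible for compaction.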

Re: Keeping Zookeeper and Kafka Server Up

2015-06-17 Thread Shayne S
kafka-server-start.sh has a -daemon option, but I don't think Zookeeper has it. On Tue, Jun 16, 2015 at 11:32 PM, Su She wrote: > It seems like nohup has solved this issue, even when the putty window > becomes inactive the processes are still running (I didn't need to > interact with them). I mig
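Combining the two approaches from this thread, a minimal sketch (paths are illustrative; this assumes the scripts bundled with a 0.8.2-era distribution):

```shell
# Kafka's start script supports a -daemon flag that detaches it.
bin/kafka-server-start.sh -daemon config/server.properties

# The bundled ZooKeeper script of that era had no -daemon flag,
# so background it with nohup so it survives the terminal closing.
nohup bin/zookeeper-server-start.sh config/zookeeper.properties \
  > zookeeper.out 2>&1 &
```

Redirecting stdout/stderr to a file keeps nohup from writing to the (possibly closed) terminal.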

Re: duplicate messages at consumer

2015-06-19 Thread Shayne S
Duplicate messages might be due to network issues, but it is worthwhile to dig deeper. It sounds like the problem happens when you have 3 partitions and 3 consumers. Based on my understanding (still learning), each consumer should have its own partition to consume. Can you verify this while your

Producer repeatedly locking up

2015-06-30 Thread Shayne S
This problem is intermittent, not sure what is causing it. Some days everything runs non-stop with no issues, some days I get the following. Setup: - Single broker - Running 0.8.2.1 When the problem occurs, anywhere from 5,000 to 30,000 messages may be processe

Re: Producer repeatedly locking up

2015-06-30 Thread Shayne S
happens? > > On Tue, Jun 30, 2015 at 10:14 AM, Shayne S wrote: > > This problem is intermittent, not sure what is causing it. Some days > > everything runs non-stop with no issues, some days I get the following. > > > > Setup: > > - Single broker > > - R

Re: Producer repeatedly locking up

2015-07-01 Thread Shayne S
The problem is gone, but I'm unsure of the root cause. The client library I use recently added support for the new producer. Switching to that seems to have sidestepped the problem. On Tue, Jun 30, 2015 at 12:53 PM, Shayne S wrote: > Thanks for responding Gwen. > > There is some

Re: Using Kafka as a persistent store

2015-07-10 Thread Shayne S
There are two ways you can configure your topics, log compaction and with no cleaning. The choice depends on your use case. Are the records uniquely identifiable and will they receive updates? Then log compaction is the way to go. If they are truly read only, you can go without log compaction. We
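The two configurations described can be sketched as topic-level settings (0.8.2-era CLI; topic names are hypothetical):

```shell
# Case 1: records are uniquely keyed and receive updates -> log compaction.
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic entities --partitions 1 --replication-factor 1 \
  --config cleanup.policy=compact

# Case 2: records are truly append-only -> disable retention limits
# so nothing is ever deleted.
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic events --partitions 1 --replication-factor 1 \
  --config retention.ms=-1 \
  --config retention.bytes=-1
```

As a later message in this thread notes, `retention.ms=-1` was only honored starting with 0.8.3 (KAFKA-1990), so on 0.8.2 the second approach needs care.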

Re: Kafka producer input file

2015-07-11 Thread Shayne S
The console producer will read from STDIN. Assuming you are using 0.8.2, you can pipe the file right in like this: kafka-console-producer.sh --broker-list localhost:9092 --topic my_topic --new-producer < my_file.txt On Sat, Jul 11, 2015 at 6:32 PM, tsoli...@gmail.com wrote: > Hello, I am trying

Re: stunning error - Request of length 1550939497 is not valid, it is larger than the maximum size of 104857600 bytes

2015-07-12 Thread Shayne S
Your payload is so small that I suspect it's an encoding issue. Is your producer set to expect a byte array and you're passing a string? Or vice versa? On Sat, Jul 11, 2015 at 11:08 PM, David Montgomery < davidmontgom...@gmail.com> wrote: > I can't send this simple payload using python. > >

Re: Using Kafka as a persistent store

2015-07-13 Thread Shayne S
wrote: > > If I recall correctly, setting log.retention.ms and log.retention.bytes to -1 disables both. > > Thanks! > > On Fri, Jul 10, 2015 at 1:55 PM, Daniel Schierbeck < daniel.schierb...@gmail.com> wrote:

Re: Using Kafka as a persistent store

2015-07-14 Thread Shayne S
Setting -1 for log.retention.ms should work only for 0.8.3 ( https://issues.apache.org/jira/browse/KAFKA-1990). > > 2015-07-13 17:08 GMT-03:00 Shayne S : > > Did this work for you? I set the topic settings to retention.ms=-1 and > > retention.bytes=-1 and it looks like it

Re: log.retention.hours not working?

2015-09-22 Thread Shayne S
One caveat. If you are relying on log.segment.ms to roll the current log segment, it will not roll until both the time elapses and something new arrives for the log. In other words, if your topic/log segment is idle, no rolling will happen. The theoretically ineligible log will still be the curre
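One illustrative workaround for the idle-segment caveat described above (topic name and broker address are assumptions, not from the thread):

```shell
# An idle active segment never rolls on its own, even after segment.ms
# elapses. Producing any message after the interval passes gives the
# broker a reason to roll the old segment, making it eligible for
# cleanup/compaction. A periodic "tick" message can serve this purpose.
echo "tick" | bin/kafka-console-producer.sh \
  --broker-list localhost:9092 --topic beer_archive
```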