I use exactly one iterator and of course I didn't close() it :-(
I switched to a try-with-resources statement and now my StateStore works like a charm.
Huge thanks to you Eno!
V.
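For reference, a minimal sketch of the fix described above (not Vincent's actual code; the store name, types and forwarding logic are made up for illustration). KeyValueIterator implements Closeable, so a try-with-resources block guarantees the iterator is closed:

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

// Hypothetical helper that scans a state store and forwards every entry.
// The try-with-resources block closes the RocksDB-backed iterator even if
// forwarding throws, which lets RocksDB remove obsolete .sst files.
final class StoreDrainer {
    static void drain(final KeyValueStore<String, Long> store,
                      final ProcessorContext context) {
        try (KeyValueIterator<String, Long> iter = store.all()) {
            while (iter.hasNext()) {
                final KeyValue<String, Long> entry = iter.next();
                context.forward(entry.key, entry.value);
            }
        } // iterator closed here, releasing its RocksDB resources
    }
}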
On Thu, May 18, 2017 at 10:31 AM, Eno Thereska wrote:
Hi Vincent,
Could you share your code, the part where you write to the state store and then
delete? I'm wondering if you have iterators in your code that need to be
closed (via close()).
Eno
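For context, a short sketch of the anti-pattern Eno is hinting at (hypothetical code, not Vincent's): an iterator obtained from a RocksDB-backed store that is never closed keeps the underlying RocksDB resources referenced, so obsolete .sst files cannot be removed, which matches the growing-file symptom described below:

import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

final class LeakyScan {
    // Counts the entries in a store but leaks the iterator.
    static long countEntries(final KeyValueStore<String, Long> store) {
        KeyValueIterator<String, Long> iter = store.all();
        long count = 0;
        while (iter.hasNext()) {
            iter.next();
            count++;
        }
        return count; // bug: iter.close() is never called
    }
}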
I just upgraded Kafka Streams to 0.10.2.1 and have the exact same symptom:
new SST files keep getting created and old ones are never deleted. Note
that when I cleanly exit my streams application, all disk space is almost
instantly reclaimed, and the total size of the database becomes about the
amou
0.10.2.1 is compatible with Kafka 0.10.1.
Eno
On 16 May 2017, at 20:45, Vincent Bernardi wrote:
The LOG files stay small. The SST files are growing in number, not in size:
old .sst files no longer seem to be written to, but they are never deleted,
and new ones appear regularly.
I can certainly try streams 0.10.2.1 if it's compatible with Kafka 0.10.1.
I have not checked the compatibility matrix yet.
Thanks. Which RocksDb files are growing indefinitely, the LOG or SST ones?
Also, any chance you could use the latest streams library, 0.10.2.1, to check if
the problem still exists?
Eno
On 16 May 2017, at 16:43, Vincent Bernardi wrote:
Just tried setting compaction threads to 5, but I have the exact same
problem: the rocksdb files get bigger and bigger, while my application
never stores more than 200k K/V pairs.
V.
On Tue, May 16, 2017 at 5:22 PM, Vincent Bernardi wrote:
Hi Eno,
Thanks for your answer. I tried sending a followup email when I realised I
forgot to tell you the version number but it must have fallen through.
I'm using 0.10.1.1 both for Kafka and for the streams library.
Currently my application works on 4 partitions and only uses about 100% of
one cor
I forgot to add that I'm using Kafka and Kafka Streams 0.10.1.1.
V.
Which version of Kafka are you using? It might be that RocksDb doesn't get
enough resources to compact the data fast enough. If that's the case you can
try increasing the number of background compaction threads for RocksDb through
the RocksDbConfigSetter class (see
http://docs.confluent.io/curr
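A minimal sketch of the kind of config setter Eno describes: the interface is org.apache.kafka.streams.state.RocksDBConfigSetter, registered via StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG; the class name CompactionConfigSetter and the thread count of 5 (the value Vincent mentions trying above) are just illustrative:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Raises the number of background compaction jobs RocksDB may run
// for each Kafka Streams state store.
public class CompactionConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        options.setMaxBackgroundCompactions(5);
    }
}

// Registered through the streams configuration, e.g.:
//   props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
//             CompactionConfigSetter.class);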
Hi,
I'm running an experimental Kafka Streams Processor which accumulates lots
of data in a StateStoreSupplier during transform() and forwards lots of
data during punctuate() (and deletes it from the StateStoreSupplier). I'm
currently using a persistent StateStore, meaning that Kafka Streams
provides
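For illustration, a minimal sketch of the kind of processor described above, using the 0.10.x Transformer API. The store name "buffer-store", the String key/value types, and the punctuate interval are made up, and the iterator handling already reflects the close() fix that eventually resolved the thread:

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

// Accumulates records in a persistent store during transform() and flushes
// them downstream (then deletes them) during punctuate().
public class BufferingTransformer
        implements Transformer<String, String, KeyValue<String, String>> {

    private ProcessorContext context;
    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(final ProcessorContext context) {
        this.context = context;
        this.store = (KeyValueStore<String, String>) context.getStateStore("buffer-store");
        context.schedule(60_000L); // punctuate roughly once a minute (stream time in 0.10.x)
    }

    @Override
    public KeyValue<String, String> transform(final String key, final String value) {
        store.put(key, value); // accumulate; nothing is forwarded here
        return null;
    }

    @Override
    public KeyValue<String, String> punctuate(final long timestamp) {
        final List<String> flushed = new ArrayList<>();
        // The iterator must be closed, otherwise RocksDB keeps old .sst files alive.
        try (KeyValueIterator<String, String> iter = store.all()) {
            while (iter.hasNext()) {
                final KeyValue<String, String> entry = iter.next();
                context.forward(entry.key, entry.value);
                flushed.add(entry.key);
            }
        }
        for (final String key : flushed) {
            store.delete(key);
        }
        return null;
    }

    @Override
    public void close() {
    }
}

The persistent store itself would be built with something like
Stores.create("buffer-store").withStringKeys().withStringValues().persistent().build()
and attached to the topology alongside the transformer.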