Pierre
Do you see keys persisting in RocksDB even after deleting them? Imagine k1
got deleted in the first execution of punctuate and then you still see it in
the second execution of punctuate. Do you see such behaviour? That could
explain why the RocksDB size keeps increasing.
Regards
Sab
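A minimal sketch of how this could be checked inside punctuate(), assuming a
String-keyed store; kvStore and deletedLastRun are hypothetical processor
fields, not part of Pierre's actual code, and the fragment omits imports:

private final Set<String> deletedLastRun = new HashSet<>();

@Override
public void punctuate(long timestamp) {
    // Did any key deleted in the previous punctuate() survive?
    for (String key : deletedLastRun) {
        if (kvStore.get(key) != null) {
            System.out.println("Key still present after delete: " + key);
        }
    }
    deletedLastRun.clear();

    // Drain the store, then delete only after the iterator is closed.
    List<String> processedKeys = new ArrayList<>();
    try (KeyValueIterator<String, byte[]> iterator = kvStore.all()) {
        iterator.forEachRemaining(kv -> processedKeys.add(kv.key));
    }
    for (String key : processedKeys) {
        kvStore.delete(key);
        deletedLastRun.add(key);
    }
}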
On 21 Feb
Hi Sachin,
So, I have reconfigured to use 6 consumers, each managing only one
partition. As you can see in the picture, the memory is still growing over
time, but very slowly. It seems the number of partitions has an impact on
how fast the memory increases.
For now, we will use only the in memory
Hi,
We have made some changes on our side to control RocksDB.
Firstly, we are assigning one partition per thread. We are also creating only
4 threads per VM (four cores).
This way, only 4 RocksDB state stores get created on one VM.
Then we are making sure that the VM's state store directory is within the VM
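For reference, a minimal sketch of those two settings in StreamsConfig; the
values are examples only, not Sachin's actual configuration:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
// Four stream threads per VM, so with four assigned partitions each
// thread ends up owning exactly one partition (and one state store).
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
// Keep the RocksDB state stores on a disk local to the VM.
props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");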
I have checked from my previous test, and in each partition, there are
between 2 and 4 .sst files. I am open to any test which could pinpoint what
I am missing :)
Off-topic question: what's the best solution to clean the RocksDB logs?
logrotate, or is there a configuration directly in RocksDB? Or ar
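One option, assuming Kafka Streams 0.10.1+ (which exposes the
rocksdb.config.setter hook) and that these options are available in the
bundled RocksDB version, is to cap the LOG files from inside RocksDB instead
of using logrotate; a sketch:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Limit RocksDB's own LOG files rather than rotating them externally.
// The numbers are examples only.
public class BoundedLogConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        options.setKeepLogFileNum(3);                 // keep at most 3 LOG files
        options.setMaxLogFileSize(10 * 1024 * 1024);  // roll at roughly 10 MB
    }
}

It would be registered with
props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedLogConfigSetter.class);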
I see. The serdes should be fine then.
Could you also check the .sst files on disk and see if their count keeps
increasing? If .sst files are not cleaned up in time and disk usage keeps
increasing, then it could mean that some iterators are still not closed and
hence pin SST files, preventing them from being deleted.
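A quick way to track that over time, assuming the default state directory
/tmp/kafka-streams (adjust the path to whatever state.dir is set to):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Counts the .sst files under the Kafka Streams state directory so the
// number can be compared between runs or sampled over several hours.
public class SstFileCount {
    public static void main(String[] args) throws IOException {
        Path stateDir = Paths.get("/tmp/kafka-streams"); // assumption: default state.dir
        try (Stream<Path> paths = Files.walk(stateDir)) {
            long sstCount = paths.filter(p -> p.toString().endsWith(".sst")).count();
            System.out.println(".sst files under " + stateDir + ": " + sstCount);
        }
    }
}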
For the K, we use a simple StringSerde; for the V, we use a custom Serde
which translates an Avro payload into a generic bean containing an
identifier, a version and an Avro record.
On Sun, Feb 12, 2017 at 10:39 PM, Guozhang Wang wrote:
> Pierre,
>
> Could you let me know what serdes you use
Pierre,
Could you let me know what serdes you use for the key-value pair of type ?
I looked at the 0.10.1.1 code and did not find any obvious memory leak from
Streams in its iterator implementation. One thing that I'm suspecting is
that when returning the key-value pair, we call the serdes to
Hi Sachin,
We have 6 consumers per node, each managing multiple partitions. We see the
memory growing at the start of the application.
To get the memory snapshot, download jemalloc here:
https://github.com/jemalloc/jemalloc/releases, compile and install it using
the standard commands with the option for
Hi,
We also seem to be facing a potential RocksDB issue when more than one
partition state store is created on a machine.
It looks like RocksDB is spending too much time in disk I/O.
Could you please tell us under what circumstances you get the issue and also
after how long of running the streams appli
Here is the gist with the two gifs:
https://gist.github.com/PierreCoquentin/d2df46e5e1c0d3506f6311b343e6f775
On Fri, Feb 10, 2017 at 7:45 AM, Guozhang Wang wrote:
> Pierre,
>
> The Apache mailing list has some restrictions on large attachments and I
> think that is why your gif files are not show
Pierre,
The Apache mailing list has some restrictions on large attachments and I
think that is why your gif files are not showing up. Could you try using a
gist link?
Guozhang
On Wed, Feb 8, 2017 at 9:49 AM, Pierre Coquentin wrote:
> Well, I am a little perplexed now... I have already recompiled
Well, I am a little perplexed now... I have already recompiled the branch
0.10.1 with RocksDB 4.11.2 and it doesn't seem any better.
So I have modified the launcher of our JVM to use jemalloc with profiling
enabled, and from the first results I have, it seems that the problem comes
from the method all(
Hello Pierre,
As Damian said, your code looks fine and I cannot think of a direct reason
for the RocksDB memory leak off the top of my head.
Could you build and try out the latest Kafka trunk (will be released as
0.10.2 in a few days), which contains a newer version of RocksDB, and see if
this issue stil
Looks fine
On Sat, 4 Feb 2017 at 19:27, Pierre Coquentin
wrote:
Oh ok, this is a snippet of the code we use:
List<KeyValue<K, V>> keyValues = new ArrayList<>();
try (KeyValueIterator<K, V> iterator = kvStore.all()) {
    iterator.forEachRemaining(keyValues::add);
}
Oh ok, this is a snippet of the code we use:
List<KeyValue<K, V>> keyValues = new ArrayList<>();
try (KeyValueIterator<K, V> iterator = kvStore.all()) {
    iterator.forEachRemaining(keyValues::add);
}
// process all entries at once
try {
Keeping the RocksDB iterator wouldn't cause a memory leak in the heap. That
is why I asked.
On Sat, 4 Feb 2017 at 16:36 Pierre Coquentin
wrote:
> The iterator is inside a try-with-resources. And if the memory leak were
> inside our code, we would see it using VisualVM or jmap, and that's not the
>
The iterator is inside a try-with-resources. And if the memory leak were
inside our code, we would see it using VisualVM or jmap, and that's not the
case. This is not a memory leak in the heap. That's why my guess goes
directly to RocksDB.
On Sat, Feb 4, 2017 at 5:31 PM, Damian Guy wrote:
> Hi Pie
Hi Pierre,
When you are iterating over the entries, do you close the iterator once you
are finished? If you don't, then that will cause a memory leak.
Thanks,
Damian
On Sat, 4 Feb 2017 at 16:18 Pierre Coquentin
wrote:
> Hi,
>
> We ran a few tests with Apache Kafka 0.10.1.1.
> We use a Topology w
Hi,
We ran a few tests with Apache Kafka 0.10.1.1.
We use a Topology with only one processor and a KVStore configured as
persistent, backed by RocksDB 4.9.0. Each event received is stored using
the method put(key, value), and in the punctuate method, we iterate over all
entries with all(), process
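To make the setup concrete, here is a rough sketch of such a processor
against the 0.10.1 Processor API; the store name, value type and punctuate
interval are assumptions, not Pierre's actual code:

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

// Single processor backed by a persistent (RocksDB) key-value store:
// process() buffers every event with put(), punctuate() drains the store.
public class StoreAndForwardProcessor implements Processor<String, byte[]> {

    private KeyValueStore<String, byte[]> kvStore;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        kvStore = (KeyValueStore<String, byte[]>) context.getStateStore("events");
        context.schedule(60000L); // trigger punctuate() roughly every minute
    }

    @Override
    public void process(String key, byte[] value) {
        kvStore.put(key, value);
    }

    @Override
    public void punctuate(long timestamp) {
        List<KeyValue<String, byte[]>> keyValues = new ArrayList<>();
        // try-with-resources closes the RocksDB iterator even if processing throws
        try (KeyValueIterator<String, byte[]> iterator = kvStore.all()) {
            iterator.forEachRemaining(keyValues::add);
        }
        // ... process all entries at once, then remove them from the store ...
    }

    @Override
    public void close() {
    }
}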