Are you using remote disks for RocksDB? (I guess that's EBS on AWS.) AFAIK
there are usually limitations with respect to the IOPS you can perform.
I would generally recommend measuring where the bottleneck is coming from.
It could be that your CPUs are at 100%; in that case, adding more machines /
cores will help.
Thanks Maciej, I think this has helped a bit. We are now at 2k-3k events
per second on a single node. Now I just wonder whether this isn't too slow
for a single node and such a simple query.
On Sat, Jul 10, 2021 at 9:28 AM Maciej Bryński wrote:
Could you please set 2 configuration options:
- state.backend.rocksdb.predefined-options = SPINNING_DISK_OPTIMIZED_HIGH_MEM
- state.backend.rocksdb.memory.partitioned-index-filters = true
Regards,
Maciek
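For reference, a minimal sketch of how the two options Maciek suggests would
look in flink-conf.yaml; the state.backend line is an assumption based on the
rest of the thread, not part of his message:

state.backend: rocksdb
state.backend.rocksdb.predefined-options: SPINNING_DISK_OPTIMIZED_HIGH_MEM
state.backend.rocksdb.memory.partitioned-index-filters: true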
On Sat, Jul 10, 2021 at 08:54 Adrian Bednarz wrote:
I didn’t tweak any RocksDB knobs. The only thing we did was to increase
managed memory to 12 GB, which, according to the documentation, is supposed
to help RocksDB. The rest stays at the defaults. Incremental checkpointing
was enabled as well, but disabling it made no difference in performance.
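A sketch of the setup Adrian describes, expressed as flink-conf.yaml entries;
the keys below are assumptions based on his description, not copied from his
actual configuration:

taskmanager.memory.managed.size: 12gb
state.backend.incremental: true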
Hi Adrian,
Could you share your state backend configuration?
Regards,
Maciek
On Fri, Jul 9, 2021 at 19:09 Adrian Bednarz wrote:
Hello,
We are experimenting with lookup joins in Flink 1.13.0. Unfortunately, we
unexpectedly hit significant performance degradation when changing the
state backend to RocksDB.
We performed tests with two tables: fact table TXN and dimension table
CUSTOMER with the following schemas:
TXN:
|--