Your understanding of the problem is correct -- the serialization cost is
the reason for the high CPU usage.
You can also try optimizing the serializers you are using (for example, by
choosing data types that are efficient to serialize). See also this blog post:
https://flink.apache.org/news/2020/04/15/f
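To make "data types that are efficient to serialize" concrete: Flink uses its fast PojoSerializer only for classes that follow its POJO rules (public class, public no-argument constructor, fields public or accessible via getters/setters); anything else falls back to the much slower generic Kryo path. A minimal sketch, with hypothetical class and field names:

```java
// Sketch of a type Flink can serialize with its PojoSerializer instead of
// falling back to generic Kryo serialization. Names here are illustrative.
public class Event {
    public long timestamp;   // primitives and Strings serialize compactly
    public String userId;
    public int action;

    public Event() {}        // public no-arg constructor: required by the POJO rules

    public Event(long timestamp, String userId, int action) {
        this.timestamp = timestamp;
        this.userId = userId;
        this.action = action;
    }
}
```

You can check what Flink inferred at job startup: the logs warn when a type falls back to Kryo ("... cannot be used as a POJO type ...").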
Yes, absolutely. Unless we need very large state, on the order of GBs,
RocksDB is not required. RocksDB is preferable only because the filesystem
backend handles large state poorly. In other words, the filesystem backend
performs much better than RocksDB up to GBs of state; beyond that, it
degrades compared to RocksDB. It's not that
Hi Li Jim,
The filesystem backend performs much better than RocksDB (by multiple
times), but it is only suitable for small state. RocksDB consumes more CPU
on background tasks, cache management, serialization/deserialization, and
compression/decompression. In most cases, the performance of RocksDB will
mee
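For reference, switching between the two backends is a one-line configuration change. A sketch for Flink 1.13 (the checkpoint path is a placeholder):

```yaml
# flink-conf.yaml -- state backend selection (Flink 1.13+ option names)

# Small state (up to a few GBs): keep state on the JVM heap.
state.backend: hashmap

# Large state: use RocksDB instead (uncomment to switch).
# state.backend: rocksdb

# Durable checkpoint location (placeholder path).
state.checkpoints.dir: hdfs:///flink/checkpoints
```

The same choice can also be made per job in code via `StreamExecutionEnvironment#setStateBackend`.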
Hello everyone,
I am using the Flink 1.13.1 CEP library and doing some pressure testing.
My message rate is about 16,000 records per second.
I find that it can't process more than 16,000 records per second because the CPU
usage reaches 100% (or rather 800%, since I allocated 8 vcores to the TaskManager).
I tried sw