Hi, Fabian
Thanks for your reply, it helps.
In 1.13, [state.backend.rocksdb.log.dir] has been removed, so I use
[state.backend.rocksdb.localdir] instead.
It works fine.
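For anyone who needs the native RocksDB LOG itself on 1.13, a custom
RocksDBOptionsFactory can re-enable it. Below is a minimal, untested sketch;
the log directory and the rotation settings are just example values, not
anything from this thread:

import java.util.Collection;

import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.InfoLogLevel;

public class EnableRocksDbLogging {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend();
        backend.setRocksDBOptions(new RocksDBOptionsFactory() {
            @Override
            public DBOptions createDBOptions(
                    DBOptions currentOptions,
                    Collection<AutoCloseable> handlesToClose) {
                // Re-enable the info log that Flink disables by default
                // and write it to a directory we control.
                return currentOptions
                        .setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
                        .setDbLogDir("/tmp/rocksdb-logs")     // example path
                        .setKeepLogFileNum(5)                 // example: keep 5 files
                        .setMaxLogFileSize(64 * 1024 * 1024); // example: 64 MB per file
            }

            @Override
            public ColumnFamilyOptions createColumnOptions(
                    ColumnFamilyOptions currentOptions,
                    Collection<AutoCloseable> handlesToClose) {
                return currentOptions; // nothing to change for logging
            }
        });

        env.setStateBackend(backend);
        // ... build the job and call env.execute(...) as usual
    }
}

Without a custom dbLogDir, the LOG file should end up inside the RocksDB
working directory under [state.backend.rocksdb.localdir], which matches what
I found above.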
On 2021/08/11 19:07:28, Fabian Paul wrote:
> Hi Li,
>
> Flink has disabled the RocksDB logs because of sizing problems, but you can have
> a
Hi, everyone,
I have a problem with checking RocksDB's log.
I set "state.backend.rocksdb.log.level" to INFO_LEVEL,
but I can't find the RocksDB log anywhere.
Where can I set the log dir, or where should I look by default?
Thanks for any replies.
Hello everyone,
I am using the Flink 1.13.1 CEP library and doing some pressure tests.
My message rate is about 16000 records per second.
I find that it can't process more than 16000 records per second because the CPU
cost is up to 100% (say 800%, because I allocated 8 vcores to a taskmanager).
I tried sw
> ck it and opened a PR (which
> needs test coverage before it can be merged) with fixes for those.
>
> Best,
>
> Dawid
>
> [1] https://issues.apache.org/jira/browse/FLINK-23314
>
> On 06/07/2021 09:11, Li Jim wrote:
> > Hi, Mohit,
> >
> > Have y
> more detail in
> this thread,
> http://deprecated-apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-cep-checkpoint-size-td44141.html#a44168
> . Hope that helps
>
> On Tue, Jul 6, 2021 at 12:44 AM Li Jim wrote:
> >
> > I am using Flink CEP to
I am using Flink CEP to do some performance tests.
Flink version 1.13.1.
Below is the SQL:
INSERT INTO to_kafka
SELECT bizName, wdName, wdValue, zbValue, flowId FROM kafka_source
MATCH_RECOGNIZE
(
PARTITION BY flow_id
ORDER BY proctime
MEASURES A.biz_name as bizName, A.wd_name as w
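(The statement above is cut off mid-MEASURES. For reference, a complete
statement of this shape can be submitted from the Java Table API roughly as
below; the MEASURES, PATTERN, and DEFINE clauses, the wd_value/zb_value
columns, and the proctime attribute are hypothetical placeholders, not the
original query.)

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class CepMatchRecognizeJob {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // kafka_source and to_kafka are assumed to be registered already,
        // e.g. via CREATE TABLE ... WITH ('connector' = 'kafka', ...),
        // with proctime declared as a processing-time attribute.
        tEnv.executeSql( // submits the streaming INSERT job
            "INSERT INTO to_kafka "
          + "SELECT bizName, wdName, wdValue, zbValue, flowId FROM kafka_source "
          + "MATCH_RECOGNIZE ( "
          + "  PARTITION BY flow_id "
          + "  ORDER BY proctime "
          + "  MEASURES A.biz_name AS bizName, A.wd_name AS wdName, "
          + "           A.wd_value AS wdValue, A.zb_value AS zbValue, "
          + "           A.flow_id AS flowId "
          + "  ONE ROW PER MATCH "
          + "  AFTER MATCH SKIP PAST LAST ROW "
          + "  PATTERN (A) "
          + "  DEFINE A AS A.zb_value > 0 "
          + ")");
    }
}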
Hi, Mohit,
Have you figured out any solutions to this problem?
I am now facing exactly the same problem.
I was using Flink 1.12.0, and I also upgraded to 1.13.1, but the
checkpoint size is still growing.
On 2021/06/02 15:45:59, "Singh, Mohit" wrote:
> Hi,
>
> I am facing an i