Hi, Community:
I have encountered a problem when deploying the reactive Flink scheduler
on Kubernetes with Flink Kubernetes Operator 1.6.0. The manifest and exception
stack trace are listed below.
Any clues would be appreciated.
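For reference, reactive mode requires the operator's standalone deployment mode. A minimal `FlinkDeployment` sketch is shown below; the name, image, jar path, and resource values are placeholders, not the poster's actual manifest:

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: reactive-example            # placeholder name
spec:
  image: flink:1.17                 # placeholder image
  flinkVersion: v1_17
  mode: standalone                  # reactive mode requires standalone mode
  flinkConfiguration:
    scheduler-mode: reactive        # enable the reactive scheduler
    taskmanager.numberOfTaskSlots: "2"
  jobManager:
    resource:
      memory: 2048m
      cpu: 1
  taskManager:
    replicas: 2                     # replicas is only valid in standalone mode
    resource:
      memory: 2048m
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless
```

With this setup, scaling the TaskManager replicas (for example via an HPA) causes the reactive scheduler to adjust job parallelism automatically.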
#
After taking a closer look at the logs, I found out it was a
`java.lang.OutOfMemoryError: Java heap space` error, which confirms what I
thought: the serialized object is too big. The solution is to increase the
JVM heap; the memory configuration is documented here:
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deploy
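As an illustration of that fix, the task heap can be raised through the Flink memory options in `flink-conf.yaml`. The sizes below are example values, not recommendations; tune them to the workload and container limits:

```yaml
# flink-conf.yaml -- illustrative values only.
# Total memory of the TaskManager process (what the container limit
# should be based on).
taskmanager.memory.process.size: 4096m
# Explicit upper bound for the task heap, where user records
# (such as a large 2D array) live when deserialized.
taskmanager.memory.task.heap.size: 2048m
```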
Hello,
I was wondering whether Flink has a size limit when serializing data. I have an
object that stores a big 2D array, and when I try to hand it over to the next
operator, I get the following error:
```
2024-07-10 10:14:51,983 ERROR
org.apache.flink.runtime.util.ClusterUncaughtExceptionHandler [] - WAR
```
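A rough back-of-the-envelope helps explain why a big 2D array blows the heap during serialization. The sketch below only counts the raw `double` payload and ignores JVM-dependent per-row object overhead; the array dimensions are made up for illustration:

```java
public class ArrayFootprint {
    // Rough heap estimate for a double[rows][cols]:
    // rows * cols * 8 bytes of payload. Per-row array headers
    // (~16-24 bytes each, JVM-dependent) are ignored here.
    static long payloadBytes(long rows, long cols) {
        return rows * cols * 8L;
    }

    public static void main(String[] args) {
        long bytes = payloadBytes(20_000, 20_000);
        // 20,000 x 20,000 doubles is ~3.2 GB of payload alone --
        // and serialization typically needs an extra copy of the
        // record, so a default-sized task heap is quickly exhausted.
        System.out.println(bytes);
    }
}
```

This is why the usual remedies are either a larger task heap or restructuring the record (for example, emitting the array row by row) so no single record is that large.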
Hello, Community:
I am puzzled by what Priority means for a Flink Buffer. It is explained with an
example (as follows) in Buffer.java, but I still don't understand what exactly
"it skipped buffers" means. Could anyone give me an intuitive explanation?
Thanks.
```
/** Same as EVENT_BUFFER, but the event has been
```
Sorry, I mean "could not".
--
Best!
Xuyang
On 2024-07-10 15:21:48, "Xuyang" wrote:
Hi, which Flink version do you use? I could reproduce this bug in master. My
test SQL is below:
```
CREATE TABLE UNITS_DATA(
proctime AS PROCTIME()
, `IDENT` INT
```
Hi, which Flink version do you use? I could reproduce this bug in master. My
test SQL is below:
```
CREATE TABLE UNITS_DATA(
proctime AS PROCTIME()
, `IDENT` INT
, `STEPS_ID` INT
, `ORDERS_ID` INT
) WITH (
'connector' = 'datagen',