Currently my Flink application has a state size of 160GB (around 50 operators), where a few operators' state is much larger than the rest. I am planning to use the State Processor API to bootstrap, let's say, one particular state: operator id o1 with a ValueState s1 holding an ID.
These are the steps I have planned:
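A bootstrap of a single operator's keyed ValueState with the State Processor API (available from Flink 1.9, in flink-state-processor-api) might look roughly like the sketch below; the operator uid "o1" and state name "s1" come from the description above, while the key/value types, the example source, the max parallelism, and the output path are placeholders:

```scala
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.api.java.{DataSet, ExecutionEnvironment}
import org.apache.flink.api.java.functions.KeySelector
import org.apache.flink.configuration.Configuration
import org.apache.flink.runtime.state.memory.MemoryStateBackend
import org.apache.flink.state.api.{OperatorTransformation, Savepoint}
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction

// Writes one value into ValueState "s1" for each key it sees.
class S1Bootstrap extends KeyedStateBootstrapFunction[String, String] {
  @transient private var s1: ValueState[String] = _

  override def open(parameters: Configuration): Unit =
    s1 = getRuntimeContext.getState(
      new ValueStateDescriptor[String]("s1", classOf[String]))

  override def processElement(
      id: String,
      ctx: KeyedStateBootstrapFunction[String, String]#Context): Unit =
    s1.update(id) // the bootstrapped value for this key
}

object BootstrapO1 {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Placeholder source: replace with the real IDs to bootstrap.
    val ids: DataSet[String] = env.fromElements("id-1", "id-2")

    val o1State = OperatorTransformation
      .bootstrapWith(ids)
      .keyBy(new KeySelector[String, String] {
        override def getKey(v: String): String = v
      })
      .transform(new S1Bootstrap)

    Savepoint
      .create(new MemoryStateBackend(), 128) // max parallelism of the job
      .withOperator("o1", o1State)           // must match the operator uid
      .write("file:///tmp/new-savepoint")
    env.execute("bootstrap-o1")
  }
}
```

The resulting savepoint can then be used with `flink run -s` so the streaming job starts with s1 pre-populated.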
I have a Flink job running on version 1.8.2 with a parallelism of 12. I took a savepoint of the application to disk, and it is approximately 70GB. Now, when I run the application from this particular savepoint, checkpoints keep failing and the app cannot restart. I am getting the following error:
Hi All,
I have multiple streams in my Flink job which contain different kinds of state, such as ValueState or MapState. But now I need to remove one stream with a specific (UID, NAME) from the job. If I remove it, I face an issue during restoration stating that the operator does not exist.
I was using BucketingSink as the sink.
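For what it's worth, Flink's CLI has a flag for exactly this restoration failure: `-n` / `--allowNonRestoredState` tells the restore to skip savepoint state that no longer maps to any operator in the new job graph. The savepoint path and jar name below are placeholders:

```shell
# Resume from the savepoint, skipping state whose operator was removed
# from the job graph (-n is short for --allowNonRestoredState).
flink run -s s3://my-bucket/savepoints/savepoint-abc123 -n my-job.jar
```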
Hi,
Presently I have a Flink application running on version 1.8.2. I have taken a savepoint of the running app, which is stored in S3. Now I have changed my Flink version to 1.10.1, and when I run the new application on Flink 1.10.1 from the savepoint taken on Flink 1.8.2, it is throwing an error.
I have been trying to alter the current state case class (Scala), which has 250 variables. When I add 10 more variables to the class and then run my Flink application from the savepoint taken before (some of the variables are objects which are also maintained as state), it fails to migrate the state.
Yes, I have tried declaring the new fields as an Option, and the case class has a default constructor (this()), but it is still unable to migrate.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
It is throwing the error below. The class I am adding variables to has other variables that are objects of classes which are also in state.

Caused by: org.apache.flink.util.StateMigrationException: The new state typeSerializer for operator state must not be incompatible.
	at org.apache.flink.runtime.st
I am not using any custom serialization, but the POJO is a composite type: the POJO I am trying to modify has variables which are other POJOs defined by me. If there is any example of writing a TypeSerializer for this kind of case, please share it.
Team,
Presently I have added Elasticsearch as a sink to a stream and am inserting the JSON data. The problem is that when I restore the application after a crash, it reprocesses the data in between (meanwhile a backend application updates the document in ES), and Flink reinserts the document in ES, overwriting the backend's updates.
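One common way to make such a sink idempotent is to derive the Elasticsearch `_id` deterministically from the event's key fields, so a replay after restore rewrites the same document instead of creating a duplicate; combined with external versioning (e.g. the event timestamp sent as `version` with `version_type=external`), a replayed old write also cannot clobber a newer backend update. A minimal sketch of the deterministic id, with the key fields (`orderId`, `eventType`) assumed for illustration:

```scala
import java.nio.charset.StandardCharsets
import java.security.MessageDigest

// Deterministic document id: the same logical event always maps to the
// same ES _id, so a replay after restore overwrites its own earlier copy
// instead of inserting a new document.
object EsDocId {
  def docId(orderId: String, eventType: String): String = {
    val digest = MessageDigest.getInstance("SHA-256")
    val bytes =
      digest.digest(s"$orderId|$eventType".getBytes(StandardCharsets.UTF_8))
    bytes.map("%02x".format(_)).mkString // 64-char hex string
  }
}
```

The id would then be set on each IndexRequest built in the ElasticsearchSinkFunction, with the backend application writing through the same id scheme.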
The Flink app is crashing due to a "too many open files" issue. Currently the app has 300 operators and a 60GB state size. Suddenly the app is opening around 35k files, up from 20k a few weeks ago, and hence it is crashing. I have updated the machine limit as well as the YARN limit to 60k, hoping it will not crash.
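If the RocksDB state backend is in use, most of those descriptors are likely SST files: each stateful operator subtask runs its own RocksDB instance, so the open-file count grows with state size and compaction activity. Besides raising the OS and YARN limits, Flink (1.8+) exposes RocksDB's max_open_files cap as a config option; a flink-conf.yaml fragment, where the value 5000 is only an illustration:

```yaml
# Cap the number of files each RocksDB instance keeps open at once
# (the default -1 means unlimited).
state.backend.rocksdb.files.open: 5000
```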
I have some case classes which have primitives as well as nested class objects; hence, if I add any more variables to a class, the savepoint does not restore. I read that if I register a Kryo serializer on those classes using Google Protobuf, I will be able to serialize them from state. Can anyone please share an example?
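The recipe from Flink's serialization documentation is to add the `com.twitter:chill-protobuf` dependency and register its protobuf serializer with Kryo for each generated message class; `MyProtoMessage` below is a placeholder for a protobuf-generated type:

```scala
import com.twitter.chill.protobuf.ProtobufSerializer
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Make Kryo (de)serialize this protobuf-generated class via chill-protobuf
// instead of its generic field-by-field strategy.
env.getConfig.registerTypeWithKryoSerializer(
  classOf[MyProtoMessage], classOf[ProtobufSerializer])
```

Note this registration must be in place both when the state is written and when it is restored.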
Hi Team,
Earlier we developed on Flink 1.6.2, so there are lots of case classes which have Maps and nested case classes within them, for example:

case class MyCaseClass(var a: Boolean,
                       var b: Boolean,
                       var c: Boolean,
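Note that Flink's serializer for Scala case classes does not support state schema evolution; as of Flink 1.8, only the POJO and Avro serializers do. So one workaround for state that needs to evolve like the above is to model it as a POJO-style class instead. A sketch, with the field names assumed: `@BeanProperty` generates the getters/setters Flink's POJO rules require, and the public no-arg constructor is mandatory.

```scala
import scala.beans.BeanProperty

// POJO-style state class: public no-arg constructor plus fields reachable
// through generated getters/setters, so Flink's PojoSerializer (which
// supports adding/removing fields) can be used instead of the
// case-class serializer.
class MyState(@BeanProperty var a: Boolean,
              @BeanProperty var b: Boolean,
              @BeanProperty var c: Boolean) {
  def this() = this(false, false, false) // required by the POJO rules
}
```

New fields added later simply take their no-arg-constructor defaults when restoring from an older savepoint.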