how to move forward?
Med venlig hilsen / Best regards
Lasse Nedergaard
Hi
We have been playing around with Flink CDC and a postgresql in HA mode.
Has anyone any experience with this and got it to work?
As we see it the current version of Flink CDC can’t handle the failover and
only supports a single instance.
Med venlig hilsen / Best regards
Lasse Nedergaard
What is the recommended way of handling this problem?
Med venlig hilsen / Best regards
Lasse Nedergaard
Med venlig hilsen / Best regards
Lasse Nedergaard
time calculations when comparing timestamps from different sources.
On Mon, May 6, 2024 at 5:40 AM Lasse Nedergaard <lassenedergaardfl...@gmail.com> wrote:
Hi. In Flink jobs running 1.18 I see the error below sometimes. I can see the
same problem has been reported and fixed in 1.14. Anyone have an
)
[flink-dist-1.18.0.jar:1.18.0]
at java.lang.Thread.run(Unknown Source) [?:?]
Med venlig hilsen / Best regards
Lasse Nedergaard
-release-1.19/docs/dev/datastream/testing/#unit-testing-stateful-or-timely-udfs--custom-operators?
On Tue, Apr 9, 2024 at 7:09 AM Lasse Nedergaard <lassenedergaardfl...@gmail.com> wrote:
Hi.
In my integration test, running on 1.19, with a mini cluster I mock all my sources with DataGeneratorSour
.
so I’m looking for a solution where I can advance the processing time so that
the onTimer method gets called.
Anyone who knows how to do that, or a nice workaround?
Med venlig hilsen / Best regards
Lasse Nedergaard
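For reference, Flink's operator test harnesses address exactly this question: if I read the testing utilities correctly, `OneInputStreamOperatorTestHarness#setProcessingTime(...)` advances a mocked clock so processing-time timers fire deterministically, without a mini cluster. The underlying idea can be sketched with plain JDK classes; all names below are made up for illustration and are not Flink APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Minimal sketch of the idea behind a test-harness setProcessingTime():
// timers wait in a queue and fire when the mocked clock is advanced,
// instead of waiting for wall-clock time.
public class ManualTimerClock {
    private final PriorityQueue<Long> timers = new PriorityQueue<>();
    private final List<Long> fired = new ArrayList<>();
    private long now = 0;

    public void registerTimer(long timestamp) {
        timers.add(timestamp);
    }

    // Advance the mocked processing time and fire every timer that is due.
    public void setProcessingTime(long newTime) {
        now = newTime;
        while (!timers.isEmpty() && timers.peek() <= now) {
            fired.add(timers.poll()); // a real harness would invoke onTimer() here
        }
    }

    public List<Long> firedTimers() {
        return fired;
    }

    public static void main(String[] args) {
        ManualTimerClock clock = new ManualTimerClock();
        clock.registerTimer(5);
        clock.registerTimer(10);
        clock.setProcessingTime(7); // fires only the timer registered at t=5
        System.out.println(clock.firedTimers());
    }
}
```

A test built on this pattern never sleeps; it simply moves the clock past the timer's timestamp and asserts on the callback's effect.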
postfix that are applied to
all files.
All ideas appreciated.
Med venlig hilsen / Best regards
Lasse Nedergaard
, Lasse Nedergaard wrote:
Hi. I have a case where I would like to collect objects from a CompletableFuture
in a flatMap function. I run into a problem where I get an exception regarding a
buffer pool that doesn’t exist when I collect the objects after some time. I can
see that if I, for testing, block the function (creating a for loop with a thread
sleep or wait for the future) it works.
Can anyone explain what’s going on behind the scenes and, if possible, any hints
for a working solution.
Thanks in advance
Med venlig hilsen / Best regards
Lasse Nedergaard
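A likely reading of the error above is that the future's callback thread emits records after the task thread has moved on and its buffer pool has been released; blocking in the flatMap "works" because the emit then happens on the task thread itself. That safe-but-blocking variant can be shown with plain JDK futures (class and method names here are illustrative, not Flink APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch: why blocking on the future "works" inside a flatMap. Joining on the
// caller thread before collecting keeps all emission on that thread; emitting
// from the future's callback thread is what a flatMap must not do, because the
// task's buffer pool may already be gone by the time the callback runs.
public class FutureCollectSketch {
    // Simulates the "works for testing" variant: wait for each future,
    // then emit the result on the calling thread.
    public static List<String> collectBlocking(List<String> inputs) {
        List<String> out = new ArrayList<>();
        for (String in : inputs) {
            CompletableFuture<String> future =
                CompletableFuture.supplyAsync(() -> "enriched-" + in);
            out.add(future.join()); // caller thread waits, then "collects"
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(collectBlocking(List.of("a", "b")));
    }
}
```

For production use, Flink's async I/O (`AsyncDataStream` with an `AsyncFunction`) is the supported way to combine futures with a stream, since the framework owns the hand-off of results back to the task.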
?
Thanks in advance
Best regards
Lasse Nedergaard
our
fat jar in application mode.
I can’t find any information on how to do the configuration and what the
configuration should be.
If anyone can help me here it would be much appreciated.
Med venlig hilsen / Best regards
Lasse Nedergaard
Hi
From the documentation I can see there isn’t any ES support in Flink 1.18 right
now and FLINK-26088 (ES 8 support) is still open.
Does anyone have an idea when ES connector support will be available in 1.18?
Please let me know.
Med venlig hilsen / Best regards
Lasse Nedergaard
it but I find it pretty hard to get
all the concepts to work together and I haven’t been able to find a “simple”
app example.
Is there anyone out there who knows a good example or can give a simple
explanation of best practices for implementing your own source?
Med venlig hilsen / Best regards
Lasse
I have found a ticket 20637 and as I understand it this is the problem I have.
No activity since September 2021 on this major issue.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 8 Mar 2023 at 09:11, Shammon FY wrote:
>
>
>
I think you can first check whether there is a file
`META-INF/services/org.apache.flink.table.factories.Factory` in your uber jar
and there's `org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory`
in the file. Flink would like to create the table factory from that file. And
then you can check whether your uber ja
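The check suggested in the reply above can be automated with the JDK's zip support; the jar path in `main` is a hypothetical example, while the service-file entry and factory class name are taken from the reply:

```java
import java.io.File;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Opens the uber jar and verifies that the table-factory service file
// exists and lists the expected factory class.
public class ServiceFileCheck {
    static final String SERVICE_ENTRY =
        "META-INF/services/org.apache.flink.table.factories.Factory";

    public static boolean containsFactory(String jarPath, String factoryClass)
            throws IOException {
        try (ZipFile jar = new ZipFile(jarPath)) {
            ZipEntry entry = jar.getEntry(SERVICE_ENTRY);
            if (entry == null) {
                return false; // service file missing: shading likely dropped it
            }
            String content = new String(jar.getInputStream(entry).readAllBytes());
            return content.contains(factoryClass);
        }
    }

    public static void main(String[] args) throws IOException {
        String jarPath = args.length > 0 ? args[0] : "target/app-uber.jar"; // hypothetical path
        if (!new File(jarPath).exists()) {
            System.out.println("jar not found: " + jarPath);
            return;
        }
        System.out.println(containsFactory(jarPath,
            "org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory"));
    }
}
```

If the service file turns out to be missing or incomplete, the usual fix when building with the Maven Shade plugin is the `ServicesResourceTransformer`, which merges `META-INF/services` files from all dependencies instead of letting one overwrite another.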
upsert-kafka
as it does on my local machine. 🤔
Anyone have a clue where I should look for the problem?
Med venlig hilsen / Best regards
Lasse Nedergaard
Mar 2, 2023 at 6:02 PM Lasse Nedergaard <lassenedergaardfl...@gmail.com> wrote:
Hi
I’m working with a simple pipeline that reads from two Kafka topics, one
standard and one compacted, in SQL. Then I create a temporary view doing a join
of the two tables, also in SQL. On top of the view I create two
/ Best regards
Lasse Nedergaard
can keep up with the load
and if not I have to increase the batch size.
Once again thanks for your input
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 7 Jul 2021 at 10:33, Arvid Heise wrote:
>
>
> Hi Lasse,
>
> That's a tough question. The real Kappa w
and I’m open to good ideas and
suggestions on how to handle this challenge.
Thanks in advance
Med venlig hilsen / Best regards
Lasse Nedergaard
to build my own
version of the processor API and include the missing part.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 10 Mar 2021 at 17:33, sri hari kali charan Tummala wrote:
>
>
> Flink,
>
> I am able to access Kinesis from Intellij but not S3 I
Thanks for your feedback.
I’ll go with specific Kryo serialisation as it makes the code easier to use, and
if I encounter performance problems I can change the data format later.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 24 Feb 2021 at 17:44, Maciej Obuchowski wrote:
>
>
/ Best regards
Lasse Nedergaard
wait until they
were removed and start again until we fixed the underlying problem.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 3 Feb 2021 at 02:54, Xintong Song wrote:
>
>
>> How is the memory measured?
> I meant which flink or k8s metric is collected?
state with a map?
Med venlig hilsen / Best regards
Lasse Nedergaard
Hi
At Trackunit we have been using Mesos for a long time but have now moved to k8s.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 23 Oct 2020 at 17:01, Robert Metzger wrote:
>
>
> Hey Piyush,
> thanks a lot for raising this concern. I believe we should keep Me
stions are:
1. What is the most optimal way to transport byte arrays in Avro in Flink?
2. Does Flink use the Avro serializer for our Avro objects when they contain
ByteBuffer?
Thanks
Lasse Nedergaard
Hi.
If you can cache the data in state, that’s the preferred way. Then you read all
your values from a store, do a keyBy, and store them in state in a
CoProcessFunction. If you need to do a lookup for every row you have to use the
AsyncIO function.
Med venlig hilsen / Best regards
Lasse Nedergaard
on the timer service, and the timer service doesn’t have functionality
to list registered timers.
Any idea how to read the currently registered timers?
Med venlig hilsen / Best regards
Lasse Nedergaard
ok. We do tests where we increase parallelism, and that reduces the state size
for each task manager, and then they run OK.
4.
We don’t use window functions and the jobs use standard value, list and map
state.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 30 Apr 2020 at 08
We are using Flink 1.10 running on Mesos.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 30 Apr 2020 at 04:53, Yun Tang wrote:
>
>
> Hi Lasse
>
> Which version of Flink did you use? Before Flink-1.10, there might exist
> memory problem when RocksDB executes
state it can handle.
So does anyone have any recommendations/best practices for this, and can
someone explain why a savepoint requires memory?
Thanks in advance
Lasse Nedergaard
Hi Yun
Thanks for looking into it and forwarded it to the right place.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 22 Apr 2020 at 11:06, Yun Tang wrote:
>
>
> Hi Lasse
>
> After debug locally, this should be a bug in Flink (even the latest version).
Hi
Thanks for the reply. We will try it out and let everybody know.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 20 Apr 2020 at 08:26, Xintong Song wrote:
>
>
> Hi Lasse,
>
> From what I understand, your problem is that JVM tries to fork some native
> pr
?
Thanks in advance
Lasse Nedergaard
WARN o.a.flink.runtime.state.filesystem.FsCheckpointStreamFactory - Could
not close the state stream for
s3://flinkstate/dcos-prod/checkpoints/fc9318cc236d09f0bfd994f138896d6c/chk-3509/cf0714dc-ad7c-4946-b44c-96d4a131a4fa.
java.io.IOException: Cannot allocate
be corrupted and causing the exceptions.
We work on preparing a simple project on github to reproduce the problem so the underlying problem can be solved.
Anyone else have seen these kind of problems?
Med venlig hilsen / Best regards
Lasse Nedergaard
Lasse Nedergaard
Any good suggestions?
Lasse
On Tue, 11 Feb 2020 at 08:48, Lasse Nedergaard <lassenederga...@gmail.com> wrote:
> Hi.
>
> We would like to do some batch analytics on our data set stored in
> Cassandra and are looking for an efficient way to load data from a single
> t
reading from the underlying SS tables to
load in parallel.
Do we have something similar in Flink, or what is the most efficient way
to load all, or many random, rows from a single Cassandra table into Flink?
Any suggestions and/or recommendations are highly appreciated.
Thanks in advance
Lasse
Hi.
We have the same challenges. I asked at Flink Forward and it’s a known problem.
We input in UTC but Flink outputs in local machine time. We have created a
function that converts it back to UTC before collecting downstream.
Med venlig hilsen / Best regards
Lasse Nedergaard
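The workaround described above, reinterpreting the machine-local timestamp in the machine's zone and converting it back to UTC before emitting, can be sketched with `java.time`. The method name and the fixed example zone are assumptions for illustration:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

// Flink handed us a wall-clock timestamp rendered in the machine's local
// zone; we reinterpret it in that zone and convert back to UTC epoch millis
// before collecting downstream.
public class LocalToUtc {
    // 'local' is a wall-clock time that is really in 'machineZone'.
    public static long toUtcEpochMillis(LocalDateTime local, ZoneId machineZone) {
        return local.atZone(machineZone).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        LocalDateTime local = LocalDateTime.of(2019, 11, 1, 12, 0);
        // Europe/Copenhagen is UTC+1 on this date, so 12:00 local is 11:00 UTC.
        long utcMillis = toUtcEpochMillis(local, ZoneId.of("Europe/Copenhagen"));
        System.out.println(utcMillis == LocalDateTime.of(2019, 11, 1, 11, 0)
            .toInstant(ZoneOffset.UTC).toEpochMilli()); // prints true
    }
}
```

Using a named `ZoneId` rather than a fixed offset matters here: it lets the conversion stay correct across daylight-saving transitions.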
> De
is to skip watermarks and create a keyed process function and
handle the time for each key myself.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 23 Sep 2019 at 06:16, 廖嘉逸 wrote:
>
> Hi all,
> Currently Watermark can only be supported on task’s level(or partit
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 12 Sep 2019 at 17:45, Elias Levy wrote:
>
> Just for a Kafka source:
>
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#the-deserializationschema
>
> There is also a version of this sche
Hi.
Does Flink have out-of-the-box support for Kafka Schema Registry for both
sources and sinks?
If not, does anyone know about an implementation we can build on, so we can help
make it generally available in a future release?
Med venlig hilsen / Best regards
Lasse Nedergaard
Hi.
I have encountered the same problem: when you input epoch time to a window table
function and then use window.start and window.end, the output isn’t in epoch but
in local time, and I traced the problem to the same internal function as
you.
Med venlig hilsen / Best regards
Lasse
CheckpointedFunction interface and this sink isn’t used
in all our jobs.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 3 Jun 2019 at 12:50, Tzu-Li (Gordon) Tai wrote:
>
> Hi Lasse,
>
> This is indeed a bit odd. I'll need to reproduce this locally before I can
> figure out t
state.
I don’t know why this is needed in 1.8.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 28 May 2019 at 10:06, Tzu-Li (Gordon) Tai wrote:
>
> Hi Lasse,
>
> Did you move the class to a different namespace / package or changed to be a
> nested class, across the Flin
point data in S3, so is there a way to use this savepoint
together with a test case so we can debug it locally? Or start a Flink mini
cluster with this savepoint?
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 28 May 2019 at 10:06, Tzu-Li (Gordon) Tai wrote:
>
> Hi Lasse,
>
&g
way to use Avro instead of POJO, but are not
there yet.
If anyone has a clue what the root cause could be, and how to resolve it, it
would be appreciated.
Thanks in advance
Lasse Nedergaard
java.lang.Exception: Exception while creating StreamOperatorStateCont
aggregate with IdleStateRetentionTime so state is removed when my
devices are up to date again. I then merge the two outputs and continue.
By doing this I handle 99% as standard and only keep state for the late
data.
Makes sense? And would it work?
Med venlig hilsen / Best regards
Lasse Nedergaard
Has anyone considered this feature?
Best
Lasse Nedergaard
Hi Hequn
Thanks for the details. I will give it a try.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 17 Apr 2019 at 04:09, Hequn Cheng wrote:
>
> Hi Lasse,
>
> > some devices can deliver data days back in time and I would like to have
> > the res
Hi
Thanks for the fast reply. Unfortunately it’s not an option, as some devices can
deliver data days back in time and I would like to have the results as fast as
possible.
I have to convert my implementation to use the streaming API instead.
Med venlig hilsen / Best regards
Lasse Nedergaard
hat we don't do our calculation wrong.
I would like to know if there is any option in the Table API to get
access to late data, or if my only option is to use the Streaming API?
Thanks in advance
Lasse Nedergaard
Hi William
No, iterations aren’t the solution, as you can (will) end up in a deadlock. We
concluded that storing the results from the external lookup in Kafka and using
these data as input to the cache was the only way.
Med venlig hilsen / Best regards
Lasse Nedergaard
> Den 5. feb. 2019 kl. 10
Hi William
We have created a solution that does it. Please take a look at my presentation
from Flink forward.
https://www.slideshare.net/mobile/FlinkForward/flink-forward-berlin-2018-lasse-nedergaard-our-successful-journey-with-flink
Hopefully you can get inspired.
Med venlig hilsen / Best
your data input until your database lookup is done the first time is not
simple, but a simple solution could be to implement a delay operation or keep
the data in your process function until data arrives from your database stream.
Med venlig hilsen / Best regards
Lasse Nedergaard
> Den 28. sep. 2018
Hi.
From my presentation at Flink Forward you can validate this:
•We used EMR on Amazon’s Linux AMI
•We didn't change the default blob server location (/tmp)
•By default a cron job cleans up in /tmp
•Solution: change the blob server location with blob.storage.directory
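The fix in the last bullet is a one-line configuration change. A minimal sketch, assuming the default `/tmp` cleaner is the culprit; the target path below is only an example:

```yaml
# flink-conf.yaml — move the blob server directory off /tmp so the OS tmp
# cleaner (the cron job mentioned above) cannot delete blobs under a running job.
blob.storage.directory: /var/flink/blobs
```

Any directory outside `/tmp` that the Flink processes can write to will do.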
On Tue, 11 Sep 2018 at 09:14
Please try to use the FsStateBackend as a test to see if the problems disappear.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 3 Sep 2018 at 11:46, 祁明良 wrote:
>
> Hi Lasse,
>
> Is there JIRA ticket I can follow?
>
> Best,
> Mingliang
>
>>
/ Best regards
Lasse Nedergaard
> On 3 Sep 2018 at 10:08, 祁明良 wrote:
>
> Hi All,
>
> We are running flink(version 1.5.2) on k8s with rocksdb backend.
> Each time when the job is cancelled and restarted, we face OOMKilled problem
> from the container.
> In our cas
TaskManagers (-n).
So try to look up the configuration for your system.
The next step is to investigate why the task manager is killed.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 20 Aug 2018 at 16:34, Dominik Wosiński wrote:
>
> Hey,
> Can You please provide a little more infor
>
> I remember a previous request for this feature, but I don't think a JIRA
> was created for it.
> Might be a good time to create one now.
>
> Could you open one and specify your requirements?
>
>
> On 11.07.2018 06:33, Lasse Nedergaard wrote:
>
> Hi Tang
>
Hi Tang
Thanks for the link. Yes, you are right and it works fine. But when I use the
REST API for getting running jobs I can’t find any reference back to the jar
used to start the job.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 11 Jul 2018 at 05:22, Tang Cl
?
Needs to work with Flink 1.5.0 or 1.4.2.
Any help appreciated.
Thanks in advance
Lasse Nedergaard
in the near future.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 14 Jun 2018 at 20:24, Alexey Tsitkin wrote:
>
> Hi,
> I'm trying to run a simple program which consumes from one kinesis stream,
> does a simple transformation, and produces to another stream
Hi.
We sometimes see jobs fail with a blob store exception, like the one below.
Does anyone have an idea why we get them, and how to avoid them?
In this case the job had run without any problems for a week and then we
got the error. Only this job is affected right now; all others are running as
expected and
Hi
If you use the RocksDB state backend, RocksDB uses memory outside the process to
my understanding.
We have the same problem, and I guess it started when we introduced jobs with
larger state and moved all jobs over to RocksDB.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 4 Jun 2018
I could, but the external REST call is done with the async operator and I want
to reduce the number of objects going to async, and it would require that I
store the state in the async operator too.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 3 May 2018 at 13:09, Aljoscha Kret
them into the Flink job that way but if I could do it inside Flink it
would be easier
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 3 May 2018 at 12:09, Aljoscha Krettek wrote:
>
> Hi,
>
> Why do you want to do the enrichment downstream and send the data back up?
Hi.
Because the data that I will cache comes from a downstream operator, and
iterations were the only way I know to loop data back to a previous operator.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 2 May 2018 at 15:35, Piotr Nowojski wrote:
>
> Hi,
>
> Why
Hi.
I have a case where I have an input stream that I want to enrich with
external data. I want to cache some of the external lookup data to improve
the overall performance.
To update my cache (a CoProcessFunction) I would use iteration to send the
external enriched information back to the cache a
Hi
I encounter the same problem for the Kinesis producer and will try to play
around with the config settings and look into the code base. If I figure it out
I’ll let you know, and please do the same if you figure it out before me.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 27.
Hi.
What kind of problems, and what configuration should we be aware of?
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 23 Apr 2018 at 13:44, Jörn Franke wrote:
>
> I would disable it if possible and use the Flink parallism. The threading
> might work but can create
This time attached.
2018-04-10 10:41 GMT+02:00 Ted Yu :
> Can you use third party site for the graph ?
>
> I cannot view it.
>
> Thanks
>
> ---- Original message ----
> From: Lasse Nedergaard
> Date: 4/10/18 12:25 AM (GMT-08:00)
> To: Ken Krugler
Checkpointing(config.checkPointInterval,
> CheckpointingMode.AT_LEAST_ONCE);
> env.setStateBackend(new RocksDBStateBackend(config.checkpointDataUri));
>
> Where checkpointDataUri point to S3
>
> Lasse Nedergaard
>
> 2018-04-09 16:52:01,239 INFO org.apache.flink.yarn.
> Y
2018-04-09 16:52:01,239 INFO org.apache.flink.yarn.YarnFlinkResourceManager
- Diagnostics for container
container_1522921976871_0001_01_79 in state COMPLETE
appreciated.
Med venlig hilsen / Best regards
Lasse Nedergaard
> Den 9. apr. 2018 kl. 21.48 skrev Chesnay Schepler :
>
> We will need more information to offer any solution. The exception simply
> means that a TaskManager shut down, for which there are a myriad of possible
> explanatio
configure Flink so it will spill to disk instead of
OOM. I would prefer a slow system over a dead system.
Please let me know if you need additional information or if it doesn’t make any
sense.
Lasse Nedergaard
2018-03-26 12:29 GMT+02:00 Timo Walther :
> Hi Lasse,
>
> in order to
the input to avoid OOM
Med venlig hilsen / Best regards
Lasse Nedergaard
Hi.
Go to Catalog, search for Flink and click Deploy.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 18 Mar 2018 at 16:18, miki haiat wrote:
>
>
> Hi ,
>
> I’m trying to run Flink on Mesos. I’ve installed Mesos and Marathon
> successfully but I’m unable to
And we see the same too.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 8 Feb 2018 at 11:58, Stavros Kontopoulos wrote:
>
> We see the same issue here (2):
> 2018-02-08 10:55:11,447 ERROR
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler - C
Great news 👍
Does it also cover protection against losing all your task managers?
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 25 Jan 2018 at 12:00, Chesnay Schepler wrote:
>
> In 1.4, jobs submitted through the WebUI that never actually execute a job
> will cau
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 24 Jan 2018 at 18:51, Vishal Santoshi wrote:
>
> Yep, got it. It seems that the response when in an error state is reported
> by JM is not being rendered.
>
> We can see the response in the UI debugger but no
Hi.
Did you find a reason for the detaching?
I sometimes see the same on our system running Flink 1.4 on DC/OS. I have
enabled taskmanager.debug.memory.startLogThread for debugging.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 20 Jan 2018 at 12:57, Kien Truong wrote:
>
are gone and sometimes we lose all the
task managers. I don’t think it is the intended behaviour.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 7 Dec 2017 at 13:00, Chesnay Schepler wrote:
>
> In retrospect I'm quite frustrated we didn't get around to implement
Hi.
It is not possible through REST in Flink 1.3.2; I’m waiting for the feature. The
only option is to use ./flink savepoint for now.
Med venlig hilsen / Best regards
Lasse Nedergaard
> On 6 Dec 2017 at 21:52, Vishal Santoshi wrote:
>
> I was more interested in savepoint