Hi Arvid,
Thanks for the suggestion. Sorry to ask a trivial question 😂. How do I
backport the klarna connector to Java 8?
Thanks,
Jing
On Sat, Oct 30, 2021 at 5:58 AM Arvid Heise wrote:
> I have also found [1]. You could also fork the klarna connector and
> backport it to Java 8.
>
> [1] https://flink-packages.org/packages/streaming-flink-dynamodb-connector
Thanks, Jeff! This is very insightful.
On Sat, Oct 30, 2021 at 12:15 AM Jeff Zhang wrote:
> The reason is that flink-scala-shell uses scala-2.11, which targets jvm-1.6
> by default; that's why it cannot use any library that depends on jvm-1.8.
>
> You can use Zeppelin instead which
Hi,
We have the following Flink job that processes records from Kafka based on
rules that we load from S3 files into broadcast state.
Earlier we were able to spin up a job with any task parallelism
without any issues.
Recently we made changes to the broadcast state structure and it is work
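(For readers looking at this pattern: below is a minimal sketch of the broadcast-state wiring described above, assuming String records and String rules. The class name, state descriptor, and the contains-based rule matching are made up for illustration; this is not the poster's actual job.)

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class RuleMatchingSketch {

    // Broadcast state holding ruleId -> rule payload.
    static final MapStateDescriptor<String, String> RULES_DESCRIPTOR =
            new MapStateDescriptor<>(
                    "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

    public static DataStream<String> wire(DataStream<String> kafkaRecords, DataStream<String> s3Rules) {
        // Broadcast the rule stream so every parallel task sees all rules.
        BroadcastStream<String> broadcastRules = s3Rules.broadcast(RULES_DESCRIPTOR);

        return kafkaRecords
                .connect(broadcastRules)
                .process(new BroadcastProcessFunction<String, String, String>() {

                    @Override
                    public void processElement(String record, ReadOnlyContext ctx, Collector<String> out)
                            throws Exception {
                        // Data side: read-only access to the broadcast rules.
                        for (java.util.Map.Entry<String, String> rule :
                                ctx.getBroadcastState(RULES_DESCRIPTOR).immutableEntries()) {
                            if (record.contains(rule.getValue())) {
                                out.collect(rule.getKey() + ":" + record);
                            }
                        }
                    }

                    @Override
                    public void processBroadcastElement(String rule, Context ctx, Collector<String> out)
                            throws Exception {
                        // Broadcast side: rule updates (e.g. parsed from S3 files) go into broadcast state.
                        ctx.getBroadcastState(RULES_DESCRIPTOR).put(String.valueOf(rule.hashCode()), rule);
                    }
                });
    }
}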
Hi Gil,
you should not need to call FileSystem.initialize. The only entry point
where it's currently necessary is the LocalExecutionEnvironment [1] but
that's a bug. As it's done now, you are actually circumventing the plugin
manager, so I'm not surprised that it's not working.
[1] https://issues.apach
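(If FileSystem.initialize really has to be called manually, e.g. from a local test entry point, a sketch of doing so without bypassing the plugin mechanism could look like the following; the class and method names are made up, the Flink calls are the ones from flink-core.)

import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.plugin.PluginManager;
import org.apache.flink.core.plugin.PluginUtils;

public class FileSystemBootstrapSketch {
    public static void initializeWithPlugins(Configuration config) {
        // Discover filesystem plugins from the plugins/ directory instead of passing null,
        // so plugin-based filesystems (e.g. s3) are not circumvented.
        PluginManager pluginManager = PluginUtils.createPluginManagerFromRootFolder(config);
        FileSystem.initialize(config, pluginManager);
    }
}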
I have also found [1]. You could also fork the klarna connector and
backport it to Java 8.
[1] https://flink-packages.org/packages/streaming-flink-dynamodb-connector
On Thu, Oct 28, 2021 at 10:04 PM Martijn Visser wrote:
> Hi,
>
> I am not aware of any at the moment. There is an open Flink tick
This seems to be a valid concern, but I'm not deep enough into the code to say
clearly that this is indeed a bug. @renqschn could you please
double-check?
On Thu, Oct 28, 2021 at 8:39 PM Mason Chen wrote:
> Hi all,
>
> I noticed that the KafkaSourceReader did not have a pointer to the
> KafkaSubscriber, so I
Hi Patrick,
do you even have so much backpressure that unaligned checkpoints are
necessary? You seem to have only one network exchange where an unaligned
checkpoint would help.
The Flink 1.11 implementation of unaligned checkpoints was still
experimental and might cause unexpected side effects. Afaik, w
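(For reference, unaligned checkpoints are toggled on the checkpoint config; the snippet below is only an illustrative sketch with a placeholder interval, and whether it pays off depends on whether the exchange is actually backpressured.)

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void configure(StreamExecutionEnvironment env) {
        // Placeholder interval: checkpoint every 60s.
        env.enableCheckpointing(60_000);
        // Barriers may overtake buffered in-flight records; mainly useful under backpressure.
        env.getCheckpointConfig().enableUnalignedCheckpoints();
    }
}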
If you are using JDBCInputFormat, then the respective query is executed
only once. The task will be closed and with it any downstream tasks that do
not have multiple inputs.
However, JDBCInputFormat currently cannot be used with HybridSource. Only
sources that use the new unified Source interface c
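(To illustrate that last point, here is a rough sketch of a HybridSource composed of two sources that do implement the unified Source interface: a bounded FileSource followed by an unbounded KafkaSource. The path, topic, and bootstrap servers are placeholders.)

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineFormat;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.core.fs.Path;

public class HybridSourceSketch {
    public static HybridSource<String> build() {
        // Bounded history read first; FileSource implements the unified Source interface.
        FileSource<String> fileSource =
                FileSource.forRecordStreamFormat(new TextLineFormat(), new Path("s3://my-bucket/history/"))
                        .build();

        // Unbounded tail afterwards; KafkaSource also implements the unified interface.
        KafkaSource<String> kafkaSource =
                KafkaSource.<String>builder()
                        .setBootstrapServers("kafka:9092")
                        .setTopics("events")
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        .build();

        // HybridSource switches from the file source to the Kafka source once the
        // bounded part is exhausted.
        return HybridSource.builder(fileSource)
                .addSource(kafkaSource)
                .build();
    }
}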
Hi Mason,
thanks for creating that.
We are happy to take contributions (I flagged it as a starter task).
On Wed, Oct 27, 2021 at 2:36 AM Mason Chen wrote:
> Hi all,
>
> I have a similar requirement to Preston. I created
> https://issues.apache.org/jira/browse/FLINK-24660 to track this effort.
The reason is that flink-scala-shell uses scala-2.11, which targets jvm-1.6
by default; that's why it cannot use any library that depends on jvm-1.8.
You can use Zeppelin instead, which supports a scala-shell on scala-2.12; I
have verified that it works in Zeppelin when you use scala-2.12.