Unsubscribe
Hi,
we need to use OAuth2 (Client Credentials Flow) in Flink to authenticate and
authorise against different services, initially Kafka and OpenSearch. We have
it working with Kafka; however, it doesn't seem to be possible with the
OpenSearch Flink Connector
(https://github.com/apache/flink
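For what it's worth, the Kafka side can be wired up roughly as in the sketch
below, assuming the broker accepts SASL/OAUTHBEARER with the client-credentials
grant. The bootstrap servers, topic, group id, token endpoint, client id/secret
and scope are placeholders, and the package of the login callback handler
differs between Kafka client versions:
```
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class OAuthKafkaSourceSketch {

    public static KafkaSource<String> buildSource() {
        return KafkaSource.<String>builder()
                .setBootstrapServers("broker:9093")                 // placeholder
                .setTopics("input-topic")                           // placeholder
                .setGroupId("my-group")                             // placeholder
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                // SASL/OAUTHBEARER with the client-credentials grant
                .setProperty("security.protocol", "SASL_SSL")
                .setProperty("sasl.mechanism", "OAUTHBEARER")
                // older Kafka clients ship this handler under ...oauthbearer.secured.*
                .setProperty("sasl.login.callback.handler.class",
                        "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler")
                .setProperty("sasl.oauthbearer.token.endpoint.url",
                        "https://idp.example.com/oauth2/token")     // placeholder
                .setProperty("sasl.jaas.config",
                        "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required"
                                + " clientId=\"my-client\""         // placeholder
                                + " clientSecret=\"my-secret\""     // placeholder
                                + " scope=\"kafka\";")              // placeholder
                .build();
    }
}
```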
Hi Martijn,
Thanks for your advice. However, even after installing Hadoop and setting
HADOOP_CLASSPATH, the filesystem still doesn't work and fails with the same
ClassNotFoundException.
I think the reported class is loaded, and it comes from the installed Hadoop
distribution:
[Loaded org.apache.hadoop.
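A small diagnostic along these lines may help narrow it down: the "[Loaded ...]"
log line only shows that some classloader loaded the class, not that the
classloader throwing the ClassNotFoundException can actually see it. This is
just a sketch that probes the current context classloader:
```
public class ClassLoaderProbe {

    public static void main(String[] args) {
        String name = "org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback";
        ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();
        try {
            // 'false' = resolve the class only, do not run its static initializers
            Class<?> clazz = Class.forName(name, false, contextLoader);
            System.out.println("Resolved by: " + clazz.getClassLoader());
            System.out.println("Code source: "
                    + clazz.getProtectionDomain().getCodeSource());
        } catch (ClassNotFoundException e) {
            System.out.println("Not visible to " + contextLoader + ": " + e);
        }
    }
}
```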
Hi, Si-li,
I think this may not be the root cause. You should check whether there are
more exceptions in the JM and TM logs.
Best,
Hang
On Tue, Apr 18, 2023 at 9:02 AM Shammon FY wrote:
> Hi Si-li
>
> Could you give some more detailed exceptions? Or you can check the metrics
> of your job such as memory us
I am currently using Stateful Functions in my application.
I use Apache Flink for stream processing, and StateFun as a hand-off point for
the rest of the application.
It serves well as a bridge between a Flink Streaming job and micro-services.
I would be disappointed if StateFun was sunsetted.
Are there any next steps here?
On Mon, Apr 3, 2023, 12:46 PM Galen Warren wrote:
> Thanks for bringing this up.
>
> I'm currently using Statefun, and I've made a few small code contributions
> over time. All of my PRs have been merged into master and most have been
> released, but a few haven't
Hi Salva
I think you can check whether `foo.bar.Job` is in the classpath of your
job. The “java.lang.NoClassDefFoundError: Could not initialize class
foo.bar.Job” error usually occurs when this class is not on the classpath
Best,
Shammon FY
On Mon, Apr 17, 2023 at 5:48 PM Salva Alcántara
wrote:
>
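If it helps, a quick probe like the sketch below (standalone here, but the
same few lines can be dropped into the job's main()) shows whether the class
resolves at all, without running its static initializers:
```
public class ClasspathCheck {
    public static void main(String[] args) {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        try {
            // 'false' = resolve the class only, do not run its static initializers
            Class.forName("foo.bar.Job", false, loader);
            System.out.println("foo.bar.Job is visible to " + loader);
        } catch (ClassNotFoundException e) {
            System.out.println("foo.bar.Job is NOT visible to " + loader);
        }
    }
}
```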
Hi Si-li
Could you give some more detailed exceptions? Or you can check the metrics
of your job such as memory usage.
Best,
Shammon FY
On Fri, Apr 14, 2023 at 5:14 PM Si-li Liu wrote:
> My job reads data from MySQL and writes to Doris. It crashes 20 minutes to
> 1 hour after it starts.
>
> org
Hi,
Only the S3 Presto and S3 Hadoop filesystem plugins don't rely on Hadoop
dependencies; all other filesystems do. See
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/configuration/advanced/#hadoop-dependencies
for how to make them available.
Best regards,
Martijn
On Mon, Apr 17
Martijn,
I think we mean the same thing but are using different words. Yes, there is a
binary-incompatible change in Scala 2.12. This is also a significant road bump
in deciding whether to upgrade Flink's Scala version. But there are
other issues identified by multiple people in that Jira ticket. I co
Hi community,
I was testing Flink 1.17 on Kubernetes and ran into a strange class loading
problem. In short, the logs
show that org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback was
loaded; however, the program throws a ClassNotFoundException anyway.
The exception was thrown by Aliyun
I am deploying some Flink jobs which require access to some services under
a service mesh implemented via Linkerd. From time to time, I'm running into
this error:
```
java.lang.NoClassDefFoundError: Could not initialize class foo.bar.Job
```
It's weird because I have some jobs running well but ot
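Not necessarily what is happening in your jobs, but for completeness: "Could
not initialize class" is the message the JVM uses when the class was found but
its static initialization already failed once, so it can show up even when the
class is on the classpath. A hypothetical stand-in (Job and resolveServiceUrl
below are made up purely for illustration) that reproduces the message:
```
// Hypothetical classes, only to illustrate the failure mode; not the real foo.bar.Job.
class Job {
    // A static initializer that throws fails the first use of the class with
    // ExceptionInInitializerError. Every later use in the same JVM then fails with
    // "java.lang.NoClassDefFoundError: Could not initialize class Job".
    static final String SERVICE_URL = resolveServiceUrl();

    private static String resolveServiceUrl() {
        // stands in for work done at class-init time that can fail intermittently,
        // e.g. before a mesh sidecar is ready
        throw new IllegalStateException("service not reachable yet");
    }
}

public class Demo {
    public static void main(String[] args) {
        try {
            System.out.println(Job.SERVICE_URL); // first touch: ExceptionInInitializerError
        } catch (Throwable t) {
            System.out.println("first use: " + t);
        }
        System.out.println(Job.SERVICE_URL);     // later touch: NoClassDefFoundError:
                                                 // Could not initialize class Job
    }
}
```
If that is the cause, the interesting stack trace is the original
ExceptionInInitializerError from the very first use, earlier in the logs.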
Hi Alexey,
I would argue that it's not a problem with Flink's source code; the problem
is that Scala introduced a binary-incompatible change in Scala 2.12.8. If
Flink wanted to allow an upgrade, it would mean breaking snapshot
compatibility. That's why Flink is still bound to be used with Scala
2
Hi Martijn,
Thanks for your reply and attention.
1. As I read Nick's report here
https://issues.apache.org/jira/browse/FLINK-13414?focusedCommentId=17257763&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17257763
Scala maintainers were blocked by Flink's source co
> they require additional hop to serialize Scala objects
This doesn't necessarily mean that we need a Scala API, because beefed-up
type extraction could also solve this.
> This single committer is now with us and ready to maintain it in open
source. The best situation to be :-)
Have you c
Hi Prateek,
You will need to stop and restart your jobs with the new connector
configuration.
Best regards,
Martijn
On Thu, Apr 13, 2023 at 10:10 AM Prateek Kohli
wrote:
> Hi,
>
> I am using Flink Kafka connectors to communicate with Kafka broker over
> mutual TLS.
> Is there any way or recom
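For reference, the mutual TLS settings are usually passed straight through to
the Kafka client via the connector properties, roughly as in the sketch below
(topic, group id, store locations and passwords are placeholders); as noted
above, picking up new values means stopping and restarting the job:
```
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class MutualTlsKafkaSourceSketch {

    public static KafkaSource<String> buildSource() {
        return KafkaSource.<String>builder()
                .setBootstrapServers("broker:9093")                                      // placeholder
                .setTopics("events")                                                     // placeholder
                .setGroupId("tls-consumer")                                              // placeholder
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                // mutual TLS: the client presents a keystore and verifies the broker
                // against a truststore
                .setProperty("security.protocol", "SSL")
                .setProperty("ssl.truststore.location", "/etc/ssl/kafka.truststore.jks") // placeholder
                .setProperty("ssl.truststore.password", "changeit")                      // placeholder
                .setProperty("ssl.keystore.location", "/etc/ssl/kafka.keystore.jks")     // placeholder
                .setProperty("ssl.keystore.password", "changeit")                        // placeholder
                .setProperty("ssl.key.password", "changeit")                             // placeholder
                .build();
    }
}
```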
Hi Günter, David,
Let me reply to you both in one email. First of all, thank you for
engaging.
Günter:
- I fully agree that losing the Scala API as officially supported in Flink
would be very unfortunate. The future of Scala is interesting and will bring
more benefits to Flink users.
Just to remind ever
Hi Alexey,
> Taking into account my Scala experience for the last 8 years, I predict
these wrappers will eventually be abandoned, unless such a Scala library is
a part of some bigger community like ASF.
For the past couple of years, there have been no maintainers for Scala in
the Flink community.