I cherry-picked your commits on top of the main branch and built a local
version. I can confirm that this issue is solved with this fix.
From: Arvid Heise
Date: Tuesday, 20 May 2025 at 11:17
To: Teunissen, F.G.J. (Fred)
Cc: dev@flink.apache.org
Subject: Re: Issue with Duplicate
ckchannel(BackchannelImpl.java:96)
> ~[flink-sql-connector-kafka-4.0.0-2.0.jar:4.0.0-2.0]
> at
> org.apache.flink.connector.kafka.sink.internal.BackchannelFactory.getBackchannel(BackchannelFactory.java:110)
> ~[flink-sql-connector-kafka-4.0.0-2.0.jar:4.0.0-2.0]
> ... 18 more
>
>
and use that in
> our application. If you can add a fix we can pick this up (and test this
> already 😊 ).
>
> Kind regards,
> Fred
>
> *From: *Arvid Heise
> *Date: *Tuesday, 20 May 2025 at 09:12
> *To: *Teunissen, F.G.J. (Fred)
> *Cc: *dev@flink.apache.org
> *Subject: *R
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi Fred,
ah yes, I think I understand the issue. The KafkaSink always creates a
KafkaCommitter even if you are not using EXACTLY_ONCE. It's an unfortunate
limitation of our Sink design.
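To illustrate the collision (a hypothetical sketch, not the actual flink-connector-kafka code; the class and method names below are invented stand-ins): a backchannel registry keyed by (subtaskId, attempt, transactionalIdPrefix) rejects a second sink that registers the same prefix for the same subtask, which is roughly what the BackchannelFactory.getBackchannel frame in the stack trace above complains about.

```python
# Illustrative sketch only -- NOT the flink-connector-kafka API.
# It mimics how a per-prefix backchannel registry can reject a second
# sink that reuses the same transactionalIdPrefix for the same subtask.
class BackchannelRegistry:
    def __init__(self):
        self._channels = {}

    def get_backchannel(self, subtask_id, attempt, prefix):
        key = (subtask_id, attempt, prefix)
        if key in self._channels:
            # two sinks sharing one prefix collide here
            raise ValueError(f"Duplicate transactionalIdPrefix: {prefix}")
        self._channels[key] = object()
        return self._channels[key]

registry = BackchannelRegistry()
registry.get_backchannel(0, 0, "job-a")      # first sink registers fine
try:
    registry.get_backchannel(0, 0, "job-a")  # second sink, same prefix
    collided = False
except ValueError:
    collided = True
```

Under this (assumed) model, giving each sink a distinct prefix avoids the clash even when the committer is created unconditionally.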
When I implemented the chan
Cc: ar...@apache.org
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi Fred,
I see. It looks like this check was added in
https://issues.apache.org/jira/browse/FLINK-37282
From: Teunissen, F.G.J. (Fred)
Date: Monday, 19 May 2025 at 17:33
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0
Kafka Sinks
Hi David,
Depending on the Flink version we use a different Kafka connector.
*
flink:2.0.0 -> flink-connector-kafka:4.
(at-least-once), so according
to the docs, the transactionalIdPrefix should not be required.
kind regards,
Fred
From: David Radley
Date: Monday, 19 May 2025 at 17:57
To: dev@flink.apache.org
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi,
I had a quick look at this. What version of the Flink Kafka connector are you
running?
I looked through recent commits in the Kafka connector and see
https://github.com/apache/flink-connector-kafka/commit/7c112abe8bf78e0cd8a310aaa65b57f6a70ad30a
for PR https://github.com/apache/flink-connecto
Hi Ajay,
With a parallelism of 3 you will have 3 independent clients. If you
want to keep a prefetch count of 3, you need to set setRequestedChannelMax to 1
and setParallelism to 3, so all 3 clients can each have one connection.
Talat
On Tue, May 7, 2024 at 5:52 AM ajay pandey wrote:
> Hi Flink Team,
>
Thanks a lot @yuxia for your time. We will
try to backport https://issues.apache.org/jira/browse/FLINK-27450 and see
if that works. We will also keep watching for your fix for the default dialect
issue.
Regards
Ram
On Mon, Jul 17, 2023 at 8:08 AM yuxia wrote:
Hi, Ram.
Thanks for reaching out.
1:
About the Hive dialect issue: maybe you're using JDK 11?
There's a known issue in FLINK-27450[1]. The main reason is that Hive doesn't
fully support JDK 11. More specific to your case, it has been tracked in
HIVE-21584[2].
Flink has upgraded the Hive 2.x version t
Thank you for the clarification. Sure will use your inputs.
On Wed, May 25, 2022, 12:34 Chesnay Schepler wrote:
This is currently not supported, no.
There are some conceptual issues.
For example, session mode can't isolate the logs between jobs, so you'd
end up with gargantuan logs being archived for every job.
Similarly, any form of log rotation would reduce the value of such a
mechanism.
I would rec
Hi Rohith,
I guess the `word_count.py` you tried to execute is from the Flink release-1.15
branch or master. For the pyflink 1.14 you are using, you need to change
`t_env.get_config().set_string("parallelism.default", "1")` to
`t_env.get_config().get_configuration().set_string("parallelism.default",
"1")`
Hi Dipanjan,
Please double-check whether the libraries are really contained in the job
jar you are submitting, because if a library is contained in this jar,
then it should be on the classpath and you should be able to load it.
Cheers,
Till
On Thu, May 20, 2021 at 3:43 PM Dipanjan Mazumder wro
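Till's suggestion can be checked mechanically. A sketch under stated assumptions (not Flink tooling; the jar path and class name in the usage comment are placeholders): since a jar is just a zip archive, the stdlib zipfile module can confirm whether a class is actually packaged.

```python
import zipfile

def jar_contains(jar_path, class_name):
    """Return True if e.g. 'com.example.MyLib' is packaged inside the jar."""
    # a fully-qualified class name maps to a path-like zip entry
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# usage (paths and names are hypothetical):
# jar_contains("target/my-job.jar", "com.example.MyLib")
```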
Hi Pavan,
please post these kinds of questions to the user ML. I've cross-linked it
now.
Image attachments will be filtered out. Consequently, we cannot see what
you have posted. Moreover, it would be good if you could provide the
community with a bit more detail on what the custom way is and what y
that was the issue, thanks so much man!!
AU
On Thu, Jul 18, 2019 at 8:56 PM Caizhi Weng wrote:
Hi Andres,
`provided` of flink-streaming-java seems suspicious, can you
remove it and see what happens?
On Friday, 19 July 2019 at 3:10 AM, Andres Angel wrote:
> Hello everyone,
>
> I'm using IntelliJ in an Ubuntu environment with Java 1.8 to run my Flink
> frameworks. My goal is to consume a few Kinesis stream services
Yes indeed, it affects the behavior on java 11.
I have created a bug in jira about it:
Summary: MetricReporter: "metrics.reporters" configuration has to be
provided for reporters to be taken into account
Key: FLINK-11413
URL: https://issues.apache.org/jira/browse/F
nvm, it does indeed affect behavior :/
On 23.01.2019 10:08, Chesnay Schepler wrote:
Just to make sure, this issue does not actually affect the behavior,
does it? Since we only use these as a filter for reporters to activate.
On 21.01.2019 18:22, Matthieu Bonneviot wrote:
Hi
I don't have the JIRA permission, but if you grant it to me I could
contribute a fix for the follo
Hi Devendra,
thanks for reaching out to the Flink dev mailing list. I think your
question will be better answered on the user mailing list.
Cheers,
Till
On Fri, Oct 6, 2017 at 8:14 AM, Devendra Vishwakarma
wrote:
> Hi,
>
> I am following Apache Flink Twitter streaming example given in
> https:
As the exception says, your OutputFormat isn't serializable; is the
CarbonContext marked as transient?
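The idea behind `transient` can be sketched in Python (illustrative only; `CarbonOutputFormat` here is an invented stand-in, not CarbonData code): exclude the non-serializable context from serialization and rebuild it in a configure()-style hook on the worker.

```python
# Sketch of the idea behind Java's `transient`, using pickle to stand in
# for Flink's serialization. Class and field names are hypothetical.
import pickle

class CarbonOutputFormat:
    def __init__(self):
        self.context = None  # heavy, non-serializable resource

    def __getstate__(self):
        # like `transient`: drop the context when serializing
        state = self.__dict__.copy()
        state["context"] = None
        return state

    def configure(self):
        # re-create the context on the worker after deserialization
        self.context = "initialized-on-worker"

fmt = CarbonOutputFormat()
fmt.context = "client-side-context"        # would not survive shipping
shipped = pickle.loads(pickle.dumps(fmt))  # simulate sending to the cluster
shipped.configure()
```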
On 17.04.2017 09:12, Sangeeta Gulia wrote:
Hi Team,
Please ignore the previous mail.
I am trying to create a connector for CarbonData. Currently I am working on
creating an OutputFormat for CarbonData. For that I need to create a
CarbonContext, which I initialise in the configure method. Internally,
CarbonContext uses a SparkContext.
I am able to c
Thanks Lining,
I added you as Contributor to JIRA.
Best, Fabian
2017-03-23 6:49 GMT+01:00 lining jing :
> Hi,
>
> Can someone give me contributor permissions? Thanks!
>
Hi,
done. I gave you contributor permissions. You can now assign issues to
yourself.
Best, Fabian
2017-03-21 16:19 GMT+01:00 zhengcanbin :
> Hi, all
> I have created two issues and fixed them
>
> https://issues.apache.org/jira/browse/FLINK-6117
>
> https://issues.apache.org/jira/browse/FLINK-6132
Hi Roman,
welcome to the Flink community. I assigned the issue to you and gave you
Contributor permissions in JIRA.
You can now assign issues to yourself.
Best, Fabian
2016-11-14 12:45 GMT+01:00 Roman Maier :
> Hi folks.
>
> I want to do something for Flink. As the first step I have started doi
Hi!
There are some examples for how to write that in the documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/windows.html
Best,
Stephan
On Fri, Oct 7, 2016 at 10:46 AM, MOHAMMAD SHAKIR
wrote:
> Hi,
> I am trying to implement a user defined function on the
Yes correct. Thank you for this suggestion.
This is the first time I am working on a distributed application, so I have
such basic questions.
Now I am getting an object-is-not-serializable exception. I have generated the
serialVersionUID for that class.
On Jun 12, 2016 11:28 AM, "Aljoscha Krettek" wrote:
>
Hi,
the static field will be null after the code has been shipped to the
cluster. You can use a RichMapFunction instead, there you can define an
open() method to initialize such a field. Please see this part of the doc
for rich functions:
https://ci.apache.org/projects/flink/flink-docs-master/apis/
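The failure mode can be demonstrated without Flink. A sketch under stated assumptions (Python's pickle stands in for Flink's function shipping; all class names are invented): class-level state set on the client does not travel with a serialized instance, while an open()-style hook initializes per-worker state after deserialization.

```python
# Illustrative sketch, not Flink code: why "static" client-side state is
# null on the cluster, and how an open()-style hook fixes it.
import pickle

class Globals:
    shared = None  # "static" field, assigned only in the client process

class RichMapLike:
    def __init__(self):
        self.resource = None
    def open(self):
        # initialize per-worker state after deserialization
        self.resource = "ready"
    def map(self, value):
        return (value, self.resource, Globals.shared)

Globals.shared = "set on client"
fn = RichMapLike()
payload = pickle.dumps(fn)    # "ship" the function to the cluster
Globals.shared = None         # a fresh worker never ran the client assignment
worker_fn = pickle.loads(payload)
worker_fn.open()
```

Calling `worker_fn.map(...)` now sees the open()-initialized resource but a null "static" field, mirroring the behavior described above.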
Issue resolved.
Created the uber jar (fat jar) as shown in the flink-quickstart guide.
My bad, I should have checked that first.
However, now I am facing a NullPointerException. Consider the following
example (this is dummy code, since I cannot share the actual code):
public class App impleme
Hi Robert,
Yes, we are using Maven for building the jar. I have deployed both the jar with
dependencies and the one without dependencies.
I actually cannot share the pom since it is on the client machine,
but all the required dependencies are there. I have attached a sample pom
file which is similar to the p
Are you using Maven for building your job jar?
If yes, can you post your pom file on the mailing list?
On Fri, Jun 10, 2016 at 7:16 PM, THORMAN, ROBERT D wrote:
How did you “provide” the dependencies? Did you use the -C parameter
when you submitted your job?
On 6/10/16, 11:35 AM, "Vinay Patil" wrote:
>Hi Guys,
>
>I have deployed my application on a cluster, however when I try to run the
>application it throws *NoClassDefFoundError for KeyedDeserializ
Via YARN it's possible to set dynamically, but for standalone clusters
unfortunately not at the moment.
On Fri, Jun 3, 2016 at 3:46 PM, Vinay Patil wrote:
> Hi,
>
> I am unable to pass VM arguments to my jar, this is the way I am running it:
>
> *bin/flink run test.jar config.yaml*
>
> I cannot a
I think then you have to either reconfigure your cluster environment or
wait until we bump the Akka version to 2.4.x which supports having an
internal and external IP address.
Cheers,
Till
On Fri, Apr 15, 2016 at 6:36 PM, star jlong
wrote:
> Hi Till/Ned,
>
> Sorry, I thought this was my post.
>
Hi Till/Ned,
Sorry, I thought this was my post.
On Friday, 15 April 2016 at 17:28, ned dogg wrote:
Hi Till/Jstar,
Thanks for the reply.
Well, I'm facing the same issue as Jstar. Here is my scenario: I have an app that
creates Flink clusters on VMs for users. This app cannot create
Hi Till,
Thanks for the reply.
The idea of ssh-ing into the instance is a good one. I thought of that, but in my case
it is not applicable because I am setting up a cluster for some employees of a
company. And having employees ssh into the instance would mean giving them the instance's
key pair, which I cannot do. Matte
What is the compile error? Or could you already resolve it? There should
not be a difference from 1.0 to 1.1-SNAPSHOT for either class...
And yes, the _2.11 suffix is only available for releases but not for the current
development branch. But it is the same code (no need to worry about it).
-Matthias
On
Hi Ned,
what you also could do is to ssh to your remote cluster and submit the job
using the private IP address which is reachable from within your cluster. I
don't know whether that would be applicable to your use case.
Cheers,
Till
On Fri, Apr 15, 2016 at 9:22 AM, Till Rohrmann wrote:
> The
The log says: Unable to allocate on port 6123, due to error: Cannot assign
requested address
Thus, I would assume that something with your cluster configuration is not
entirely correct. Could you check that?
On Thu, Apr 14, 2016 at 11:19 AM, ned dogg wrote:
> Here a try that I given. As first I
Hi Matthias,
I changed the version as you suggested, but when I do that I get a
compilation error with the classes
org.apache.flink.storm.util.BoltFileSink and
org.apache.flink.storm.util.OutputFormatter
Btw, the dependency
org.apache.flink
flink-storm-examples_2.11
1.1-SNA
change the version to 1.1-SNAPSHOT
On 04/14/2016 11:52 AM, star jlong wrote:
One question: which dependency of Flink are you using? Because I'm using
org.apache.flink
flink-storm-examples_2.11
1.0.0
And once I change the version to the SNAPSHOT version, the pom.xml complains that
it could not satisfy the given dependency.
On Thursday, 14 April 2016 at 10:45, star jlong wrote:
Yes it is.
On Thursday, 14 April 2016 at 10:39, Matthias J. Sax wrote:
For the fix, you need to use the current development version of Flink,
ie, change your maven dependency from 1.0 to
1.1-SNAPSHOT
One question: what is FlinkGitService.class? It only shows up when
you get the ClassLoader:
> ClassLoader loader = URLClassLoader.newInstance(new URL[] { new URL(pa
Here is a try that I gave. At first I was configuring my cluster with the private
IP and it was starting properly.
So to avoid this Akka issue, I decided to configure my cluster with a public
address, but with this configuration my cluster is not starting at all.
Here are the logs that I get:
2016-04-14 09:0
Thanks Till for the reply.
But how would you suggest I address that?
Thanks,
Ned
On Thu, Apr 14, 2016 at 9:56 AM, Till Rohrmann
wrote:
> Hi Ned,
>
> I think you are facing the issue described in this JIRA issue [1]. The
> problem is that you have a private and a public IP address and that A
Hi Ned,
I think you are facing the issue described in this JIRA issue [1]. The
problem is that you have a private and a public IP address and that Akka
binds to the private IP address. Since the registered IP of an ActorSystem
and the target IP address of a request to this ActorSystem have to be
m
2016-04-14 08:23:51,900 INFO
org.apache.flink.runtime.jobmanager.JobManager-
2016-04-14 08:23:51,902 INFO
org.apache.flink.runtime.jobmanager.JobManager- Starting
JobManager (Version:
I'm referring to the jobmanager.log file not the client log file. You can
find it in the `/log` directory.
Cheers,
Till
On Thu, Apr 14, 2016 at 9:56 AM, ned dogg wrote:
Hi Till
Thanks for the prompt reply.
The logs say: “Please make sure that the actor is running and its port
is reachable.”
And it is actually reachable, because I can ping that address.
Ned.
On Thu, Apr 14, 2016 at 8:43 AM, Till Rohrmann
wrote:
> Hi Ned,
>
> what does the logs of the JobMana
Hi Ned,
what do the logs of the JobManager say?
Cheers,
Till
On Apr 14, 2016 9:19 AM, "ned dogg" wrote:
> Hi everybody,
>
> I'm Ned, a young and passionate developer of Apache technologies. I have
> been playing with Apache Flink lately.
>
> This is what I wanted to do: submit a Flink topology
What I'm trying to say is that to submit the Flink topology to Flink, I
had to invoke the mainMethod (which contains the actual topology) of
my topology via the class java.lang.reflect.Method. That is, if you take a look
at the following topology, the mainMethod is buildTopolog
I cannot completely follow your last step where you fail. What do you
mean by "I'm stuck at the level when I want to copy that from the jar to
submit it to flink"?
Btw: I copied the code from the SO question and it works for me on the
current master (which includes Till's hotfix).
-Matthias
O
@Stephan,
I have tried using RemoteStreamEnvironment, but I get another exception:
java.lang.IllegalStateException: No operators defined in streaming topology.
Cannot execute.
On Wednesday, 13 April 2016 at 20:40, star jlong wrote:
Thanks Matthias for the reply.
Maybe I should explain better what I want to do. My objective is to deploy a
Flink topology on Flink using Java, but in production mode. For that, here
are the steps that I have taken:
1- Convert a sample wordcount Storm topology to a Flink topology as indicated
he
Hi!
For a Storm program, you would need a "RemoteStreamEnvironment" - the
"RemoteEnvironment" is for batch programs.
Stephan
On Wed, Apr 13, 2016 at 6:23 PM, star jlong
wrote:
> Thanks for the reply.
> @Stephen, I try using RemoteEnvironment to submit my topology to flink.
> Here is the try th
Hi jstar,
I need to have a closer look. But I am wondering why you use reflection
in the first place? Is there any specific reason for that?
Furthermore, the example provided in the maven-example project also covers
the case of submitting a topology to Flink via Java. Have a look at
org.apache.flink.storm
Thanks for the reply.
@Stephan, I tried using RemoteEnvironment to submit my topology to Flink.
Here is what I tried:
RemoteEnvironment remote = new RemoteEnvironment(ipJobManager, 6123, jarPath);
remote.execute();
While running the program, this is the exception that I got:
java.lang.RuntimeE
java.lang.RuntimeE
I think this is not the problem here, since the problem is still happening
on the client side when the FlinkTopology tries to copy the registered
spouts. This happens before the job is submitted to the cluster. Maybe
Matthias could chime in here.
Cheers,
Till
On Wed, Apr 13, 2016 at 5:39 PM, Stepha
Hi!
For flink standalone programs, you would use a "RemoteEnvironment"
For Storm, I would use the "FlinkClient" in "org.apache.flink.storm.api".
That one should deal with jars, classloaders, etc for you.
Stephan
On Wed, Apr 13, 2016 at 3:43 PM, star jlong
wrote:
> Thanks for the suggestion.
Thanks for the suggestion. Sure, those examples are interesting and I have
deployed them successfully on Flink. The deployment is done from the command line,
doing something like
bin/flink run example.jar
But what I want is to submit the topology to Flink
using a Java program.
Thanks.
Le Me
you can find examples here:
https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples
we haven't established yet that it is an API issue; it could very well
be caused by the reflection magic you're using...
On 13.04.2016 14:57, star jlong wrote:
Ok, it seems like there a
Ok, it seems like there is an issue with the API. So please, does anybody have a
working example of deploying a topology using the Flink dependency
flink-storm_2.11? Any other will be welcome.
Thanks,
jstar
On Wednesday, 13 April 2016 at 13:44, star jlong wrote:
I've updated the master. Could you check it out and run your program with
the latest master? I would expect to see a ClassNotFoundException.
On Wed, Apr 13, 2016 at 2:54 PM, Till Rohrmann wrote:
> Yes that is true. I'll commit a hotfix for that.
>
> My suspicion is that we use the wrong class lo
Yes that is true. I'll commit a hotfix for that.
My suspicion is that we use the wrong class loader in the
FlinkTopology.copyObject method to load the RandomSentenceSpout class. We
can see that once I removed the exception swallowing in the current master.
On Wed, Apr 13, 2016 at 2:40 PM, star jl
Hi Schepler,
Thanks for your concern. Yes, I'm actually having the same issue as indicated
in that post, because I'm the one who posted it.
On Wednesday, 13 April 2016 at 13:35, Chesnay Schepler wrote:
http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topolo
I think the following is the interesting part of the stack-trace:
Caused by: java.lang.RuntimeException: Failed to copy object.
    at org.apache.flink.storm.api.FlinkTopology.copyObject(FlinkTopology.java:145)
    at org.apache.flink.storm.api.FlinkTopology.getPrivateField(FlinkTopology.java:132)
    at org.ap
http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api
On 13.04.2016 14:28, Till Rohrmann wrote:
Hi jstar,
what's exactly the problem you're observing?
Cheers,
Till
On Wed, Apr 13, 2016 at 2:23 PM, star jlong
wrote:
Hi there,
I'm
Hi Till,
Thanks for the quick reply. I'm unable to copy the mainMethod of my topology
using the instruction
(FlinkTopology) method.invoke(null, new Object[] {});
where method is a variable of type java.lang.reflect.Method.
On Wednesday, 13 April 2016 at 13:28, Till Rohrmann wrote:
Hi jstar,
Hi jstar,
what's exactly the problem you're observing?
Cheers,
Till
On Wed, Apr 13, 2016 at 2:23 PM, star jlong
wrote:
> Hi there,
>
> I'm jstar. I have been playing around with Flink. I'm very much interested
> in submitting a topology to Flink using its API. As indicated
> on stackoverflow,