maintenance overhead of having
them in the code base and no indication that someone is actively using
them, I would still be in favour of removing them. This will reduce our
maintenance burden in the future. What do you think?

[1]
https://github.com/apache/flink/blob/master/flink-contrib/flink-storm/src/main/java/org/apache/flink/storm/wrappers/FlinkTopologyContext.java

Cheers,
Till

On Tue, Oct 9, 2018 at 10:08 AM Fabian Hueske wrote:

> Yes, let's do it this way.
... be useful for our users
to not completely remove it in one go. Instead, for those who still
want to use some Bolt and Spout code in Flink, it could be nice to
keep the wrappers. At least, we could remove flink-storm in a more
graceful way by first removing the Topology and client parts and then
the wrappers. What do you think?

Cheers,
Till

On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler wrote:
I don't believe that to be the consensus. For starters it is
contradictory; we can't *drop* flink-storm yet still *keep* some parts.
From my understanding we drop flink-storm completely, and put a note in
the docs that the bolt/spout wrappers of previous versions will continue
to work.
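For readers who haven't used them, the bolt/spout wrappers under discussion
are applied roughly as in the sketch below. This is only an illustration:
the wrapper classes are the ones from the flink-storm module, but MySpout
and MyBolt are hypothetical placeholders (any IRichSpout/IRichBolt), and
constructor/overload details may differ between Flink versions.

  import org.apache.flink.api.java.typeutils.TypeExtractor;
  import org.apache.flink.storm.wrappers.BoltWrapper;
  import org.apache.flink.storm.wrappers.SpoutWrapper;
  import org.apache.flink.streaming.api.datastream.DataStream;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

  public class WrapperSketch {
      public static void main(String[] args) throws Exception {
          StreamExecutionEnvironment env =
                  StreamExecutionEnvironment.getExecutionEnvironment();

          // Reuse an existing Storm spout as a Flink source
          // (MySpout is a placeholder for your IRichSpout).
          DataStream<String> lines = env.addSource(
                  new SpoutWrapper<String>(new MySpout()),
                  TypeExtractor.getForClass(String.class));

          // Reuse an existing Storm bolt as a Flink operator
          // (MyBolt is a placeholder for your IRichBolt).
          lines.transform(
                  "my-bolt",
                  TypeExtractor.getForClass(String.class),
                  new BoltWrapper<String, String>(new MyBolt()))
               .print();

          env.execute("storm-wrapper-sketch");
      }
  }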
Thanks for opening the issue Chesnay. I think the overall consensus is to
drop flink-storm and only keep the Bolt and Spout wrappers. Thanks for your
feedback!
Cheers,
Till
On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler wrote:
I've created https://issues.apache.org/jira/browse/FLINK-10509 for
removing flink-storm.
On 28.09.2018 15:22, Till Rohrmann wrote:
Hi everyone,

I would like to discuss how to proceed with Flink's storm compatibility
layer flink-storm.
While working on removing Flink's legacy ...
Best,
tison.

远远 wrote on Saturday, September 29, 2018, 2:16 PM:

> +1, it's time to drop it 😂

Zhijiang(wangzhijiang999) wrote on Saturday, September 29, 2018, 1:53 PM:

> Very much agree with dropping it. +1

--
From: Jeff Carter
Sent: Saturday, September 29, 2018, 10:18
To: dev
Cc: chesnay; Till Rohrmann; user
Subject: Re: [DISCUSS] Dropping flink-storm?

+1 to drop it.

On Fri, Sep 28, 2018, 7:25 PM Hequn Cheng wrote:

> Hi,
>
> +1 to drop it. It seems that few people use it.
>
> Best, Hequn
... using to the relevant users.

Thanks, vino.

Chesnay Schepler wrote on Friday, September 28, 2018, 10:22 PM:

> I'm very much in favor of dropping it.
>
> Flink has been continually growing in terms of features, and IMO we've
> reached the point where we should cull some of the more obscure ones.
> flink-storm, while interesting from a theoretical standpoint, offers too
> little value.
>
> Note that the bolt/spout wrapper parts are still compatible;
> it's only topologies that aren't working.
>
> IMO compatibi...
Hi everyone,

I would like to discuss how to proceed with Flink's storm compatibility
layer flink-storm.

While working on removing Flink's legacy mode, I noticed that some parts of
flink-storm rely on the legacy Flink client. In fact, at the moment
flink-storm does not work together with ...
Hi hanjing,

> There may be both Flink jobs and flink-storm in my cluster; I don't
> know the influence of legacy mode.

For storm-compatible jobs, because of technical limitations, you need to
use a cluster that supports legacy mode.
But for jobs implemented using the Flink-rel...
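(For anyone hitting the same question later: in the Flink releases current
at the time of this thread, 1.5/1.6, the runtime mode is selected in
conf/flink-conf.yaml. To the best of my knowledge the relevant entry is the
one below; please verify it against the docs of your release.

  mode: legacy

The default is "new"; flink-storm jobs needed legacy mode because they
still relied on the old client, as Till notes above.)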
Hi Till,

Legacy mode worked! Thanks a lot. And what's the difference between legacy
and new mode? Is there any document or release note?
There may be both Flink jobs and flink-storm jobs in my cluster; I don't
know the influence of legacy mode.

Hanjing
On 9/11/2018 10:59, vino yang wrote:

Oh, I thought the Flink job could not be submitted. I don't know why the
Storm example could not be submitted, because I have never used it.
Maybe Till, Chesnay or Gary can help you. I'll ping them for you.

Thanks, vino.
jing wrote on Tuesday, September 11, 2018, 10:26 AM:

Hi vino,
My job manager log is as below. I can submit a regular Flink job to this
jobmanager and it works, but the flink-storm example doesn't work.
Thanks.
Hanjing

2018-09-11 18:22:48,937 INFO
org.apache.flink.runtime.entrypoint.ClusterEntrypoint ...
... jobmanager.rpc.address

jing wrote on Monday, September 10, 2018, 6:57 PM:

> Hi vino,
> I committed the job on the JM node with the command below.
> [hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
> WordCount-StormTopology.jar input output
On 09/10/2018 15:49, vino yang wrote:

Hi Hanjing,

Did you perform a CLI commit on the JM node? Is the address bound to
"localhost" in the Flink JM configuration?

Thanks, vino.

jing wrote on Monday, September 10, 2018, 11:00 AM:

Hello,
I'm trying to run flink-storm-example on standalone clusters, but there's
an exception I can't solve. Could anyone please help me with this trouble?

flink-storm-example version: 1.6.0
flink version: 1.6.0

The log below is the exception. My job manager status is as in the picture.
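(The address binding vino asks about lives in conf/flink-conf.yaml on the
JobManager machine; if it stays at "localhost", jobs submitted from other
hosts cannot reach it. A minimal excerpt, with the host name as a
placeholder:

  jobmanager.rpc.address: <jobmanager-host>
  jobmanager.rpc.port: 6123
)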
On Friday, 22 September 2017 12:14:32 CEST Federico D'Ambrosio wrote:

Hello everyone,

I'd like to use the HiveBolt from storm-hive inside a Flink job using the
Flink-Storm compatibility layer, but I'm not sure how to integrate it. Let
me explain, I would have the following:

val mapper = ...
val hiveOptions = ...

streamByID
  .transform[OUT]("...
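Not from the original thread, but as a rough Java sketch of one way to wire
this up (the Scala .transform call above is the same hook): wrap the
HiveBolt in the flink-storm BoltWrapper and attach it with transform. The
output type is a dummy because HiveBolt only writes to Hive; how the
incoming record's fields end up in the Storm Tuple that the Hive mapper
sees, and whether the storm-core/storm-hive versions line up with your
flink-storm version, are the parts to verify.

  import org.apache.flink.api.java.typeutils.TypeExtractor;
  import org.apache.flink.storm.wrappers.BoltWrapper;
  import org.apache.flink.streaming.api.datastream.DataStream;
  import org.apache.storm.hive.bolt.HiveBolt;
  import org.apache.storm.hive.common.HiveOptions;

  public class HiveBoltSketch {
      // Attach storm-hive's HiveBolt as a sink-like operator on a DataStream.
      public static <T> void attachHiveBolt(DataStream<T> stream,
                                            HiveOptions hiveOptions) {
          stream.transform(
                  "hive-bolt",
                  TypeExtractor.getForClass(Object.class), // dummy output type
                  new BoltWrapper<T, Object>(new HiveBolt(hiveOptions)));
      }
  }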
Thanks for the confirmation.

When will 1.0 be ready in the maven repo?

From: ewenstep...@gmail.com [mailto:ewenstep...@gmail.com] On Behalf Of Stephan Ewen
Sent: Friday, February 26, 2016 9:07 PM
To: user@flink.apache.org
Subject: Re: flink-storm FlinkLocalCluster issue

Hi!

On 0.10.x, the Storm ...
... Nothing has been changed. I simply tried to run the flink-storm word
count local example. It just failed to work.

Sent from my iPhone

On 26 Feb 2016, at 6:16 PM, Till Rohrmann wrote:

Hi Shuhao,

the configuration you're providing is only used for the Storm compatibility
layer and not for Flink itself. When y...
Hi everyone,

I'm a student researcher working on Flink recently.
I'm trying out the flink-storm example project, version 0.10.2,
flink-storm-examples, word-count-local.

But I got the following error:

org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
Not enough free slots available to run the job. ...
... "storm-core" as a dependency -- this will
result in a Kryo problem due to a Flink/Storm Kryo version conflict.
(The dependency is not needed anyway, as you get it automatically via
"flink-storm-examples" or "flink-storm".)
This Kryo version conflict was the problem in the first place.
    ...
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
    at akka.dispatch.Mailbox.run(Mailbox.scala:221)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

I added the lines of code below for stopping the local cluster at the end;
the code is the same as in the flink-storm-examples one.

Utils.sleep(10 * 1000);
cluster.shutdown();
Thanks,
Naveen
On 12/5/15, 7:54 AM, "Matthias J. Sax" wrote:
-Matthias

On 12/04/2015 08:55 PM, Madhire, Naveen wrote:

Hi Max,

I forgot to include the flink-storm-examples dependency in the application
to use BoltFileSink.

However, the file created by the BoltFileSink is empty. Is there anything
else I need to do to write it into a file using BoltFileSink?
I am using the same code that you mentioned.
Hi Max,

Yeah, I did route the "count" bolt output to a file and I see the output.
I can see the Storm and Flink output matching.
However, I am not able to use the BoltFileSink class in the 1.0-SNAPSHOT
which I built. I think it's better to wait for a day for the Maven sync to
happen so that I can ...
Hi Naveen,
Were you using Maven before? The syncing of changes in the master
always takes a while for Maven. The documentation happened to be
updated before Maven synchronized. Building and installing manually
(what you did) solves the problem.
Strangely, when I run your code on my machine with t
... count -> print on console

The code is present at
https://github.com/naveenmadhire/flink-storm-example. When I run the
WordCountTopologyFlink.java program, I don't see any messages on the
console. I modified this class in the same way as it is mentioned in the
Flink documentation.

The detailed j...
... projects/flink/flink-docs-master/apis/storm_compatibility.html

I want to use the Flink-storm 1.0-SNAPSHOT version, but I don't see any
createTopology method in the FlinkTopology class.

Ex: cluster.submitTopology("WordCount", conf,
FlinkTopology.createTopology(builder));

Is the documentation incorrect for the 1.0...
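For anyone landing here later: in the flink-storm API as it eventually
shipped in the 1.x line, the local-mode pattern looks roughly like the
sketch below. WordSpout and WordCountBolt are hypothetical placeholders for
your own spout/bolt classes, and the Storm package prefix is backtype.storm
in the 1.0 era but org.apache.storm in later Flink releases.

  import backtype.storm.Config;
  import backtype.storm.topology.TopologyBuilder;
  import backtype.storm.utils.Utils;
  import org.apache.flink.storm.api.FlinkLocalCluster;
  import org.apache.flink.storm.api.FlinkTopology;

  public class WordCountLocalSketch {
      public static void main(String[] args) throws Exception {
          // Build a plain Storm topology.
          TopologyBuilder builder = new TopologyBuilder();
          builder.setSpout("spout", new WordSpout());
          builder.setBolt("count", new WordCountBolt()).shuffleGrouping("spout");

          // Submit it to the Flink-backed local cluster.
          Config conf = new Config();
          FlinkLocalCluster cluster = FlinkLocalCluster.getLocalCluster();
          cluster.submitTopology("WordCount", conf,
                  FlinkTopology.createTopology(builder));

          Utils.sleep(10 * 1000);   // let the job run for a bit
          cluster.shutdown();
      }
  }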
    ...
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    ...

... how to fix this?

On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax wrote:
WordCount-StormTopology uses a hard-coded dop of 4. If you start up
Flink in local mode (bin/start-local-streaming.sh), you need to increase
the number of task slots to at least 4 in conf/flink-conf.yaml.

... log/flink-...-jobmanager-...log:

NoResourceAvailableException: Not enough free slots available to
run the job. You can decrease the operator parallelism or increase
the number of slots per TaskManager in the configuration.

WordCount-StormTopology does use StormWordCountRemoteBySubmitter
internally. So, you do use it already ;)

I am not sure wh...
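(Concretely, that means raising the slot count in conf/flink-conf.yaml
before starting the cluster, for example

  taskmanager.numberOfTaskSlots: 4

and then restarting via bin/start-local-streaming.sh. The key name above is
the standard one; the value just needs to match or exceed the topology's
hard-coded dop of 4.)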
... use? In flink-0.10-SNAPSHOT it is
located in the submodule "flink-connector-kafka" (which is a submodule of
"flink-streaming-connector-parent" -- which is a submodule of
"flink-streaming-parent").

-Matthias

On 09/01/2015 09:40 PM, Jerry Peng wrote:

> Hello,
>
> I have some questions regarding how to run one of the
> flink-storm-examples, the WordCountTopology ...
Concerning the KafkaSource, please use the "FlinkKafkaConsumer". It's the
new and better KafkaSource.

On 01.09.2015 at 21:40, "Jerry Peng" wrote:

> Hello,
>
> I have some questions regarding how to run one of the
> flink-storm-examples, the WordCountTopology. How sh...
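As a rough sketch of what that looks like (the consumer class name depends
on the Flink/Kafka versions -- FlinkKafkaConsumer082 at the time of this
thread, later FlinkKafkaConsumer08/09/010 or the universal
FlinkKafkaConsumer -- so treat the class name and properties here as
assumptions to check against your release):

  import java.util.Properties;
  import org.apache.flink.streaming.api.datastream.DataStream;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
  import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

  public class KafkaSourceSketch {
      public static void main(String[] args) throws Exception {
          StreamExecutionEnvironment env =
                  StreamExecutionEnvironment.getExecutionEnvironment();

          // Connection properties for the 0.8-style consumer.
          Properties props = new Properties();
          props.setProperty("bootstrap.servers", "localhost:9092");
          props.setProperty("zookeeper.connect", "localhost:2181");
          props.setProperty("group.id", "word-count");

          // Read the "words" topic as plain strings.
          DataStream<String> words = env.addSource(
                  new FlinkKafkaConsumer082<>("words", new SimpleStringSchema(), props));

          words.print();
          env.execute("kafka-source-sketch");
      }
  }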
Hello,

I have some questions regarding how to run one of the flink-storm-examples,
the WordCountTopology. How should I run the job? On GitHub it says I
should just execute bin/flink run example.jar, but when I execute:

bin/flink run WordCount-StormTopology.jar

nothing happens. What am I ...