... allows Storm topologies to be reused and executed on Flink easily (which is the most important feature we need to have).
I hope to get some more feedback from the community on whether the Storm compatibility should be more "stormy" or more "flinky". Both approaches make sense to me.
A few minor comments:
* FileSpout vs FiniteFileSpout -> a FiniteFileSpout does not make sense from a Storm point of view (there is no such thing as a finite spout). Thus, this example shows how a regular Storm spout can be improved using the FiniteSpout interface -- I would keep it as is (even if it seems unnecessarily complicated -- imagine that you don't have the code of FileSpout). A sketch of this follows after this list.
* You changed the examples to use finite spouts -- from a testing point of view this makes sense.
* The local copy "unprocessedBolts" is made when creating a Flink program to allow re-submitting the same topology object twice (or altering it after submission). If you don't make the copy, submitting/translating the topology into a Flink job alters the object (which should not happen). And as it is not performance critical, the copying overhead does not matter. This is also illustrated in a sketch after this list.
* Why did you change the dop from 4 to 1 in WordCountTopology? We should test in a parallel fashion...
* Too many reformatting changes ;) You touched many classes without any actual code changes.
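For the FileSpout / FiniteFileSpout point above, here is a rough, self-contained sketch of the idea (plain Storm 0.9.x API; the FiniteSpout interface name and its reachedEnd() method are assumptions based on this discussion, not necessarily the exact code in the branch):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Map;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

// A regular Storm spout that emits a file line by line. Storm has no notion
// of a "finite" spout: once the file is exhausted, nextTuple() simply stops
// emitting, but Storm keeps calling it forever.
public class FileSpout extends BaseRichSpout {
    protected final String path;
    protected transient BufferedReader reader;
    protected transient SpoutOutputCollector collector;
    protected boolean finished = false;

    public FileSpout(String path) {
        this.path = path;
    }

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        try {
            this.reader = new BufferedReader(new FileReader(path));
        } catch (IOException e) {
            throw new RuntimeException("Cannot open " + path, e);
        }
    }

    @Override
    public void nextTuple() {
        try {
            String line = reader.readLine();
            if (line != null) {
                collector.emit(new Values(line));
            } else {
                finished = true;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("line"));
    }
}

// Assumed marker interface of the compatibility layer: a wrapper can poll it
// and terminate the Flink source once the spout reports that it is done.
interface FiniteSpout {
    boolean reachedEnd();
}

// The point of keeping FiniteFileSpout as a separate class: it demonstrates
// how an existing spout is made "finite" by subclassing, without touching the
// original FileSpout.
class FiniteFileSpout extends FileSpout implements FiniteSpout {
    public FiniteFileSpout(String path) {
        super(path);
    }

    @Override
    public boolean reachedEnd() {
        return finished;
    }
}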
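And a tiny sketch of the "local copy" point: translate the topology from a working copy so that the submitted object itself is never mutated and can be re-submitted. The map contents and class name are placeholders; only the "unprocessedBolts" name comes from the discussion above.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TranslationCopySketch {
    public static void main(String[] args) {
        // Stand-in for the bolts declared in the submitted topology object.
        Map<String, String> declaredBolts = new HashMap<>();
        declaredBolts.put("tokenizer", "...");
        declaredBolts.put("counter", "...");

        // Local working copy; removing processed bolts here leaves the
        // original map (and thus the topology object) untouched.
        Set<String> unprocessedBolts = new HashSet<>(declaredBolts.keySet());
        while (!unprocessedBolts.isEmpty()) {
            String next = unprocessedBolts.iterator().next();
            // ... wire 'next' into the Flink job here ...
            unprocessedBolts.remove(next);
        }

        // The original declaration is still intact, so the same object
        // could be translated and submitted again.
        System.out.println(declaredBolts.size()); // prints 2
    }
}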
---- Forwarded Message
Subject: Re: Storm Compatibility
Date: Fri, 13 Nov 2015 12:15:19 +0100
From:
That would also work. I thought about it already, too. Thanks for the
feedback. If two people have a similar idea, it might be the right way to
go. I will just include all this stuff and open a PR. Then we can
evaluate it again.
-Matthias
On 06/30/2015 12:01 AM, Gyula Fóra wrote:
By declare I mean we assume a Flink Tuple datatype and the user declares
the name mapping (sorry, it's getting late).
Gyula Fóra wrote (on 29 June 2015, Mon, at 23:57):
Ah ok, now I get what I didn't get before :)
So you want to take some input stream and execute a bolt implementation
on it. And the question is what input type to assume when the user wants to
use field-name-based access.
Can't we force the user to declare the names of the inputs/outputs even ...
Well. If a whole Storm topology is executed, this is of course the way
to go. However, I want to have named-attribute access in the case of an
embedded bolt (as a single operator) in a Flink program. And in this
case, fields are not declared and do not have a name (e.g., if the bolt's
consumers emit ...
Hey,
I didn't look through the whole code, so I probably don't get something, but
why don't you just do what Storm does? Keep a map from the field names to
indexes somewhere (make this accessible from the tuple) and then you can
just use a simple Flink tuple.
I think this is what's happening in Storm ...
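For illustration, that is roughly how Storm resolves names itself: backtype.storm.tuple.Fields keeps the ordered field names and exposes the index lookup, so name-based access can be served from a plain positional Flink tuple. A small sketch (Storm 0.9.x package names; the wiring into an actual wrapper is omitted):

import backtype.storm.tuple.Fields;
import org.apache.flink.api.java.tuple.Tuple2;

public class FieldNameLookupSketch {
    public static void main(String[] args) {
        // The name -> index mapping, kept once per declared schema.
        Fields schema = new Fields("word", "count");

        // A simple Flink tuple carrying the actual values.
        Tuple2<String, Integer> tuple = new Tuple2<>("flink", 42);

        // Name-based access resolved to positional access via the mapping.
        int index = schema.fieldIndex("count");
        Object value = tuple.getField(index);
        System.out.println(value); // prints 42
    }
}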
It looks like there is now a PR available for the Storm
compatibility: https://github.com/apache/flink/pull/764
It seems we are not the only new stream processing system with
compatibility to Storm: http://dl.acm.org/citation.cfm?id=2742788
On Tue, Jun 2, 2015 at 11:09 AM, Szabó Péter
@Robert
Thanks! I think the PR will be ready to merge soon :)
@Matthias
I fixed the finite-source issue on my branch; now every example and ITCase
runs and stops without throwing an exception. Also, in case of finite
sources, the spout wrapper will not loop infinitely.
I will study your branch and ...
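A rough sketch of the "spout wrapper will not loop infinitely" behaviour described above; the class and interface names are illustrative (open()/collector wiring omitted), not the actual wrapper code of the branch:

import backtype.storm.topology.IRichSpout;

public class SpoutRunLoopSketch {

    /** Assumed marker interface, as in the FiniteSpout sketch earlier in this thread. */
    public interface FiniteSpout {
        boolean reachedEnd();
    }

    private final IRichSpout spout;
    private volatile boolean running = true;

    public SpoutRunLoopSketch(IRichSpout spout) {
        this.spout = spout;
    }

    /** Emission loop of the source wrapper (spout.open(...) is assumed to have been called). */
    public void run() {
        while (running) {
            spout.nextTuple();
            // For a finite source, leave the loop once the spout reports its
            // end instead of spinning on nextTuple() forever.
            if (spout instanceof FiniteSpout && ((FiniteSpout) spout).reachedEnd()) {
                break;
            }
        }
    }

    public void cancel() {
        running = false;
    }
}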
Great to see that you two are working together on the Storm compatibility
layer.
Please let the other Flink committers know when Matthias' PR is in a state
that we can review it again (= when you think it's ready).
Given the feedback from Peter and the long list of missing features and the
current ...
Thank you very much, this explains a lot of things :)
I'm aware that the support of TopologyContext is currently limited, so I
do not expect it to work smoothly. However, there was another issue with
the grouping by the "id" field, which seemed very strange. Anyway, I will
leave the SimpleJoin example ...
Hi Peter,
I started to look into the issue. However, I could not find the
following classes in the git repository:
org.apache.flink.stormcompatibility.util.AbstractStormSpout
org.apache.flink.stormcompatibility.util.OutputFormatter
org.apache.flink.stormcompatibility.util.StormBoltFileSink
org.ap
Hi Matthias,
Of course, here is the package that contains the example's source classes.
https://github.com/mbalassi/flink/tree/storm-backup/flink-staging/flink-streaming/flink-storm-examples/src/main/java/org/apache/flink/stormcompatibility/singlejoin
It is mostly a copy-paste of SimpleJoin from s
Hi Peter,
Thanks a lot for your feedback. It's exciting to see that somebody uses
the layer already. :)
The current prototype is going to be merged soon. However, I am more
than happy to extend the functionality of the layer. Can you please
share your example with me, so I can see what the problem ...
Thanks, Matthias. Yes, please. :)
On Mon, Apr 6, 2015 at 3:40 PM, Matthias J. Sax <mj...@informatik.hu-berlin.de> wrote:
Done. Shall I open a pull request?
-Matthias
On 04/03/2015 09:32 PM, Robert Metzger wrote:
As far as I understood git rebase [1], cherry-picking all changes in order
onto the current master is exactly equal to "git rebase flink/master".
The problem is that you have to resolve all conflicts again. But in this
case the changes to existing code are pretty small, so that might actually
work ...
Right now, your commits in your working branch are mixed with commits which
are already pushed to the master branch.
Merging this branch into the master branch in order to push it to our master
might turn into a complex merging process.
Merging becomes far easier for us if all commits that you ...
That’s pretty nice Matthias, we could use a compositional API in streaming that
many people are familiar with.
I can also help in some parts; I see some issues we already encountered while
creating the SAMOA adapter (e.g. dealing with cycles in the topology). Thanks
again for initiating this!
P
This sounds amazing :) thanks Matthias!
Tomorrow I will spend some time to look through your work and give some
comments.
Also, I would love to help with this effort, so once we merge an initial
prototype let's open some JIRAs and I will pick some up :)
Gyula
On Thursday, April 2, 2015, Márton Ba
Hey Matthias,
Thanks, this is a really nice contribution. I just scrolled through the
code, but I really like it, and big thanks for the tests for the
examples.
The rebase Fabian suggested would help a lot when merging.
The rebase Fabian suggested would help a lot when merging.
On Thu, Apr 2, 2015 at 9:19 PM, Fabian Hueske wrote:
Hi Matthias,
this is really cool! I especially like that you can use Storm code within a
Flink streaming program :-)
One thing that might be good to do rather soon is to collect all your
commits and put them on top of a freshly forked Flink master branch.
When merging we cannot change the history ...
Hey Henry,
you can check out the files here:
https://github.com/mjsax/flink/tree/flink-storm-compatibility/flink-staging/flink-streaming/flink-storm-compatibility
... so yes, they are located in the flink-streaming directory ... which is a
good place for now.
Once we move flink-streaming out of staging ...
Hi Matthias,
Where do you put the code for the Storm compatibility? Under the
streaming module directory?
- Henry
On Thu, Apr 2, 2015 at 10:31 AM, Matthias J. Sax wrote:
> Hi @all,
>
> I started to work on a compatibility layer to run Storm Topologies on
> Flink. I just pushed a first beta:
> https:
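For context, a hypothetical end-to-end usage of such a compatibility layer: the topology is written against the plain Storm API, and only the builder and the (local) cluster are swapped for Flink-backed counterparts. FlinkTopologyBuilder and FlinkLocalCluster are assumed names mirroring Storm's TopologyBuilder/LocalCluster; the actual classes in the linked branch may be named differently.

import java.util.HashMap;

import backtype.storm.testing.TestWordSpout;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

public class StormOnFlinkSketch {

    /** Unmodified Storm bolt: appends an exclamation mark to each incoming word. */
    public static class ExclaimBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            collector.emit(new Values(input.getString(0) + "!"));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        // Assumed Flink-backed builder mirroring backtype.storm.topology.TopologyBuilder.
        FlinkTopologyBuilder builder = new FlinkTopologyBuilder();
        builder.setSpout("words", new TestWordSpout());
        builder.setBolt("exclaim", new ExclaimBolt(), 4).shuffleGrouping("words");

        // Assumed Flink-backed counterpart of backtype.storm.LocalCluster.
        FlinkLocalCluster cluster = new FlinkLocalCluster();
        cluster.submitTopology("exclamation", new HashMap<String, Object>(), builder.createTopology());
        Utils.sleep(10000);
        cluster.shutdown();
    }
}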
Hey Matthias,
a Storm compatibility layer sounds really great!
I'll soon take a closer look into the code, but the features you're listing
sound really amazing! Since the code already has test cases included, I'm
open to merging a first stable version and then continuing the development of
the features ...