Thanks Stephan for clarifying :)
@kostas: I am just playing around with some ideas. Only in my head so far,
so let's not worry about these things.
On Thu, Jun 4, 2015 at 6:33 PM Kostas Tzoumas wrote:
Wouldn't this kind of cross-task communication break the whole dataflow
abstraction? How can recovery be implemented if we allowed something like
this?
On Thu, Jun 4, 2015 at 5:14 PM, Stephan Ewen wrote:
That is not what Ufuk said. You can use a singleton auxiliary task that
communicates in both directions with the vertices and acts as a coordinator
between vertices on the same level.
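A minimal sketch of how such a singleton auxiliary task could be approximated with plain DataStream operators, assuming the coordinator is just a parallelism-1 map whose output is broadcast back out (all class and variable names here are hypothetical, not an existing Flink facility; current API names are used):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CoordinatorSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Parallel workers emit small summaries of what they have seen.
        DataStream<String> summaries = env
                .socketTextStream("localhost", 9999)
                .map(new MapFunction<String, String>() {
                    @Override
                    public String map(String value) {
                        return "summary: " + value;
                    }
                });

        // All summaries are routed to a single coordinator subtask, and its
        // decisions are broadcast so every downstream subtask sees them.
        DataStream<String> decisions = summaries
                .global()                              // everything to one subtask
                .map(new MapFunction<String, String>() {
                    @Override
                    public String map(String value) {
                        return "decision: " + value;   // coordinator logic goes here
                    }
                })
                .setParallelism(1)                     // the "singleton" part
                .broadcast();                          // fan out to all subtasks

        decisions.print();
        env.execute("coordinator-sketch");
    }
}

Note that this only covers the forward half; communicating back upstream to the same vertices, in both directions, would still need a feedback (iteration) edge.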
On Thu, Jun 4, 2015 at 2:55 PM, Gyula Fóra wrote:
Thank you!
I was aware of the iterations as a possibility, but I was wondering if we
might have "lateral" communications.
On Thu, Jun 4, 2015 at 1:29 PM, Ufuk Celebi wrote:
On 04 Jun 2015, at 12:46, Stephan Ewen wrote:
> There is no "lateral communication" right now. Typical pattern is to break
> it up in two operators that communicate in an all-to-all fashion.
You can look at the iteration tasks: the iteration sync task is communicating
with the iteration heads.
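A rough sketch of the "two operators that communicate in an all-to-all fashion" pattern, reusing the environment and imports from the sketch above (the per-stage map logic is a placeholder; the broadcast() call is the all-to-all exchange):

// Stage A does the per-subtask half of the logic. Broadcasting its output
// means every subtask of stage B receives every element, which replaces
// direct lateral messages between subtasks of one operator.
DataStream<Long> stageA = env.generateSequence(0, 100)
        .map(new MapFunction<Long, Long>() {
            @Override
            public Long map(Long v) { return v; }   // first half of the logic
        });

DataStream<Long> stageB = stageA
        .broadcast()                                // all-to-all exchange
        .map(new MapFunction<Long, Long>() {
            @Override
            public Long map(Long v) { return v; }   // second half sees everything
        });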
On Thu, Jun 4, 2015 at 11:52 AM, Gyula Fóra wrote:
I am simply thinking about the best way to send data to different subtasks
of the same operator.
Can we go back to the original question? :D
On Wed, Jun 3, 2015 at 11:45 PM, Stephan Ewen wrote:
I think that it may be a bit premature to invest heavily into the parallel
delta-policy windows just yet.
We have not even answered all questions on the key-local delta windows yet:
- How does it behave with non-monotonic changes? What does the delta
refer to, the max interval in the window, th…
I am talking, of course, about global delta windows, on the full stream, not
on a partition. Delta windows per partition already happen, as you said.
On Wednesday, June 3, 2015, Aljoscha Krettek wrote:
Yes, this is obvious, but if we simply partition the data on the
attribute that we use for the delta policy, this can be done purely on
one machine. No need for complex communication/synchronization.
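A sketch of that partitioning idea, using current DataStream API names (keyBy; the 0.9-era names differ), where PerKeyDeltaWindower is a hypothetical stand-in for whatever per-key delta logic gets plugged in:

// Same key => same subtask: every element that can participate in one delta
// chain is evaluated on one machine, with no cross-subtask traffic.
DataStream<Tuple2<String, Double>> events = env.fromElements(
        Tuple2.of("a", 1.0), Tuple2.of("a", 7.0), Tuple2.of("b", 2.0));

events
        .keyBy(t -> t.f0)                        // partition on the delta attribute
        .flatMap(new PerKeyDeltaWindower(4.0))   // purely local per-key windows
        .print();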
On Wed, Jun 3, 2015 at 1:32 PM, Gyula Fóra wrote:
Yes, we define a delta function from the first element to the last element
in a window. Now let's discretize the stream using this semantics in
parallel.
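To pin down those semantics, here is a plain-Java sketch (no Flink API) of delta discretization on a single, already-partitioned stream: a window closes once the delta between its first element and the newest element exceeds a threshold, and, as an assumption here, the triggering element opens the next window:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DeltaWindowSketch {

    interface DeltaFunction<T> {
        double getDelta(T first, T latest);
    }

    static <T> List<List<T>> discretize(List<T> stream,
                                        DeltaFunction<T> delta,
                                        double threshold) {
        List<List<T>> windows = new ArrayList<>();
        List<T> current = new ArrayList<>();
        for (T element : stream) {
            if (!current.isEmpty()
                    && delta.getDelta(current.get(0), element) > threshold) {
                windows.add(current);           // delta from the first element exceeded
                current = new ArrayList<>();    // triggering element opens the next window
            }
            current.add(element);
        }
        if (!current.isEmpty()) {
            windows.add(current);
        }
        return windows;
    }

    public static void main(String[] args) {
        List<Integer> in = Arrays.asList(1, 2, 3, 7, 8, 20, 21);
        // Windows whose elements stay within 4 of the window's first element:
        // prints [[1, 2, 3], [7, 8], [20, 21]]
        System.out.println(discretize(in, (f, l) -> (double) (l - f), 4.0));
    }
}

The hard question in the thread is exactly this loop's first-element state: in a parallel setting, the element that opened the window and the element that closes it can live on different subtasks.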
On Wed, Jun 3, 2015 at 12:20 PM, Aljoscha Krettek wrote:
Ah ok. And by distributed you mean that the element that starts the
window can be processed on a different machine than the element that
finishes the window?
On Wed, Jun 3, 2015 at 12:11 PM, Gyula Fóra wrote:
This is not connected to the current implementation, so let's not talk about
that.
This is about the theoretical limits of supporting distributed delta policies,
which have far-reaching implications for the windowing policies one can
implement in a parallel way.
But you are welcome to throw in any construct…
Part of the reason for my question is this:
https://issues.apache.org/jira/browse/FLINK-1967, especially my latest
comment there. If we want this, I think we have to overhaul the
windowing system anyway, and then it doesn't make sense to explore
complicated workarounds for the current system.
There are simple ways of implementing it in a non-distributed or
inconsistent fashion.
On Wed, Jun 3, 2015 at 8:55 AM Aljoscha Krettek wrote:
This already sounds awfully complicated. Is there no other way to
implement the delta windows?
On Wed, Jun 3, 2015 at 7:52 AM, Gyula Fóra wrote:
Hi Ufuk,
In the concrete use case I have in mind, I only want to send events to
another subtask of the same task vertex.
Specifically: if we want to do distributed delta-based windows, we need to
send, after every trigger, the element that triggered the current window.
So practically I want to br…
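One way that "send the triggering element to everyone" step could be emulated without lateral channels is a side stream that is broadcast into a second pass. In this sketch, firstPass, input, Event, Result, EmitTriggerElements, and DeltaWindowingWithGlobalTriggers are all hypothetical placeholders; this is an approximation, not a consistent protocol:

// The first pass computes local windows and also emits the elements that
// fired a trigger. Broadcasting those means every subtask of the second
// pass sees every trigger element.
DataStream<Event> triggers = firstPass.flatMap(new EmitTriggerElements());

DataStream<Result> results = input
        .connect(triggers.broadcast())   // every subtask sees every trigger
        .flatMap(new DeltaWindowingWithGlobalTriggers());

The broadcast arrives asynchronously relative to the data, which is where the consistency and recovery concerns raised in this thread come in.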
On 02 Jun 2015, at 22:45, Gyula Fóra wrote:
Hi,
I am wondering: what is the suggested way to send some events directly to
another parallel instance in a Flink job? For example, from one mapper to
another mapper (of the same operator).
Do we have any internal support for this? The first thing that we thought
of is iterations, but that is clea…
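For completeness, the iterations idea mentioned here could look roughly like this, using current API names (IterativeStream); DeltaWindowLogic, IsTriggerElement, IsWindowResult, and the Event type are hypothetical, with the windowing logic emitting both results and trigger marks that the two filters separate:

// The feedback edge of an iteration is the one place the dataflow graph
// allows a cycle, so trigger elements can be routed back and broadcast to
// every subtask of the same operator.
IterativeStream<Event> loop = input.iterate();

DataStream<Event> windowed = loop.flatMap(new DeltaWindowLogic());
DataStream<Event> feedback = windowed.filter(new IsTriggerElement());

loop.closeWith(feedback.broadcast());   // trigger element reaches all subtasks

DataStream<Event> results = windowed.filter(new IsWindowResult());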