Henry Saputra created FLINK-2650:
Summary: Fix broken link in the Table API doc
Key: FLINK-2650
URL: https://issues.apache.org/jira/browse/FLINK-2650
Project: Flink
Issue Type: Bug
Ted Yu created FLINK-2649:
Summary: Potential resource leak in JarHelper#unjar()
Key: FLINK-2649
URL: https://issues.apache.org/jira/browse/FLINK-2649
Project: Flink
Issue Type: Bug
Repo
> On 09 Sep 2015, at 19:31, Niklas Semmler wrote:
>
> Hello Ufuk,
>
> thanks for your amazingly quick reply.
>
> I have seen the markFinished in Execution.java, but if I get it right, this
> is simply used to stop a task. The ScheduleOrUpdateConsumers message in the
> pipeline case on the oth
Yes, pretty clear. I guess semantically it's still a co-group, but
implemented slightly differently.
Thanks!
--
Gianmarco
On 9 September 2015 at 15:37, Gyula Fóra wrote:
> Hey Gianmarco,
>
> So the implementation looks somewhat different:
>
> The update stream is received by a stateful KVStor
Hello Ufuk,
thanks for your amazingly quick reply.
I have seen the markFinished in Execution.java, but if I get it right,
this is simply used to stop a task. The ScheduleOrUpdateConsumers
message in the pipeline case on the other hand is notifying the
consumers that a pipelined partition is re
Sachin Goel created FLINK-2648:
Summary: CombineTaskTest.testCancelCombineTaskSorting
Key: FLINK-2648
URL: https://issues.apache.org/jira/browse/FLINK-2648
Project: Flink
Issue Type: Bug
Hey Niklas,
this is very much hidden unfortunately. You can find it in
Execution#markFinished.
The last partition to be finished triggers the scheduling of the receivers.
From your comments I see that you have dug through the network stack code quite
a bit. If you are interested, we can have a
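The trigger described above ("the last partition to be finished schedules the receivers") can be sketched roughly like this. This is an illustrative toy, not Flink's actual Execution#markFinished code; all names here are made up:

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Toy sketch: a result tracks how many of its producing partitions are still
 * unfinished; the producer that finishes last is the one that triggers the
 * scheduling of the consuming tasks.
 */
class ResultTracker {
    private final AtomicInteger unfinishedPartitions;
    private volatile boolean consumersScheduled = false;

    ResultTracker(int numPartitions) {
        this.unfinishedPartitions = new AtomicInteger(numPartitions);
    }

    /** Called when one producing task finishes; returns true for the last one. */
    boolean markFinished() {
        if (unfinishedPartitions.decrementAndGet() == 0) {
            // this is where the real code would deploy the receiving tasks
            consumersScheduled = true;
            return true;
        }
        return false;
    }

    boolean consumersScheduled() {
        return consumersScheduled;
    }
}
```

In the batch case this is exactly why the hook is hard to find: it is not a message like ScheduleOrUpdateConsumers, but a counter reaching zero inside the finishing producer.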
Hello Flink community,
what is the equivalent of the ScheduleOrUpdateConsumers message in the
pipeline execution mode for the batch execution mode?
When I run a WordCount in pipeline mode, the scheduling of the receiving
tasks is initiated in the ResultPartition class via the function
notify
Stephan Ewen created FLINK-2647:
Summary: Stream operators need to differentiate between close()
and dispose()
Key: FLINK-2647
URL: https://issues.apache.org/jira/browse/FLINK-2647
Project: Flink
Stephan Ewen created FLINK-2646:
Summary: Rich functions should provide a method
"closeAfterFailure()"
Key: FLINK-2646
URL: https://issues.apache.org/jira/browse/FLINK-2646
Project: Flink
Iss
It would be nice to support both non-positional and positional
arguments. Like in
> posarg1 posarg2 --nonpos1 nonpos1value --nonpos2 nonpos2value
The arguments should also be named but should be expected at a fixed
position counting from the left ignoring non-positional arguments.
For the time b
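The scheme proposed above (positional arguments counted from the left, ignoring interleaved "--key value" pairs) could be sketched like this. The class and method names are illustrative, not an existing parameter-parsing API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of mixed argument parsing: positional arguments are collected in
 * order of appearance, skipping over "--key value" pairs wherever they occur.
 */
class MixedArgs {
    final List<String> positional = new ArrayList<>();
    final Map<String, String> named = new HashMap<>();

    static MixedArgs parse(String[] args) {
        MixedArgs result = new MixedArgs();
        for (int i = 0; i < args.length; i++) {
            if (args[i].startsWith("--")) {
                // non-positional: consume the key and the value that follows it
                result.named.put(args[i].substring(2), args[++i]);
            } else {
                // positional: its index is counted ignoring the "--" pairs
                result.positional.add(args[i]);
            }
        }
        return result;
    }
}
```

So "posarg1 --nonpos1 nonpos1value posarg2" and "posarg1 posarg2 --nonpos1 nonpos1value" would yield the same result.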
Maximilian Michels created FLINK-2645:
Summary: Accumulator exceptions are not properly forwarded
Key: FLINK-2645
URL: https://issues.apache.org/jira/browse/FLINK-2645
Project: Flink
Iss
Hey Gianmarco,
So the implementation looks somewhat different:
The update stream is received by a stateful KVStoreOperator which stores
the K-V pairs as their partitioned state.
The query for the 2 cities is assigned an ID yes, and is split to the 2
cities, and each of these are sent to the sa
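A minimal sketch of the operator described above, using a plain map in place of Flink's partitioned state (the class is illustrative, not the actual KVStoreOperator):

```java
import java.util.AbstractMap;
import java.util.HashMap;
import java.util.Map;

/**
 * Toy stateful operator: update-stream elements set the per-key state; one
 * half of a split query looks its key up in that state and keeps the query id
 * so the two halves can be re-merged downstream.
 */
class KVStoreOperator<K, V> {
    private final Map<K, V> state = new HashMap<>(); // stands in for partitioned state

    /** Update stream element: store or overwrite the pair. */
    void onUpdate(K key, V value) {
        state.put(key, value);
    }

    /** One half of a split query: look up the key, tag the result with the query id. */
    Map.Entry<Long, V> onQuery(long queryId, K key) {
        return new AbstractMap.SimpleEntry<>(queryId, state.get(key));
    }
}
```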
Just a silly question.
For the example you described, in a data flow model, you would do something
like this:
Have query ids added to the city pairs (qid, city1, city2),
then split the query stream on the two cities and co-group it with the
updates stream ((city1, qid) , (city, temp)), same for ci
Gyula Fora created FLINK-2644:
Summary: State partitioning does not respect the different
partitioning of multiple inputs
Key: FLINK-2644
URL: https://issues.apache.org/jira/browse/FLINK-2644
Project: Fl
Aljoscha Krettek created FLINK-2643:
Summary: Change Travis Build Profile to Exclude Hadoop
2.0.0-alpha, Include 2.7.0
Key: FLINK-2643
URL: https://issues.apache.org/jira/browse/FLINK-2643
Project:
I forgot to mention that there is also a bug with the `StreamFold` operator
which we might consider fixing for the milestone release. I've opened a PR
for it.
https://issues.apache.org/jira/browse/FLINK-2631
https://github.com/apache/flink/pull/1101
On Wed, Sep 9, 2015 at 10:58 AM, Gyula Fóra w
I saw that the tool is missing Javadocs. I think that this is a prerequisite
before moving it into all the examples (or at least both should happen hand in
hand). I would like an example-centric style there.
– Ufuk
> On 04 Sep 2015, at 14:46, Behrouz Derakhshan
> wrote:
>
> Yes, I was referr
Jonas Traub created FLINK-2642:
Summary: Scala Table API crashes when executing word count example
Key: FLINK-2642
URL: https://issues.apache.org/jira/browse/FLINK-2642
Project: Flink
Issue Typ
This sounds good, +1 from me as well :)
Till Rohrmann wrote (on Wed, 9 Sep 2015, 10:40):
> +1 for a milestone release with the TypeInformation issues fixed. I'm
> working on it.
>
> On Tue, Sep 8, 2015 at 9:32 PM, Stephan Ewen wrote:
>
> > Great!
> >
> > I'd like to push one more co
+1 for a milestone release with the TypeInformation issues fixed. I'm
working on it.
On Tue, Sep 8, 2015 at 9:32 PM, Stephan Ewen wrote:
> Great!
>
> I'd like to push one more commit later today.
> A fix for https://issues.apache.org/jira/browse/FLINK-2632 would also be
> highly appreciated by s
Hi!
I think there are four ways to do this:
1) Create a new abstract class "SerializableBaseTaskHook" that extends
BaseTaskHook and implements java.io.Serializable. Then write the object into
bytes and put it into the config.
2) Offer in the StormCompatibilityAPI a method "public void addHook(X
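Option 1 could look roughly like this: serialize the Serializable hook into bytes and Base64-encode them so they fit into a string-based config. The class and method names are made up for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;

/**
 * Sketch of option 1: turn a Serializable hook into a config-friendly string
 * on the client, and restore it again on the task side.
 */
class HookSerializer {
    static String toConfigString(Serializable hook) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(hook); // write the hook object into bytes
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }

    static Object fromConfigString(String encoded)
            throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(encoded);
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(raw))) {
            return in.readObject(); // restore the hook on the task side
        }
    }
}
```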