Suneel Marthi created FLINK-3657:
Summary: Change access of DataSetUtils.countElements() to 'public'
Key: FLINK-3657
URL: https://issues.apache.org/jira/browse/FLINK-3657
Project: Flink
Issu
On Tue, Mar 22, 2016 at 8:42 PM, Stefano Baghino
wrote:
> My feeling is that running a job on YARN should
> end up in having more or less the same effect, regardless of the way the
> job is run.
+1
I think that the current behaviour is buggy. The resource management
is currently undergoing a mas
Hi,
probably more of a question for Till:
Imagine a common ML algorithm flow that runs until convergence.
A typical distributed flow would look something like this (e.g. GMM EM would be
exactly like that):
A: input
do {
    stat1 = A.map.reduce
    A = A.update-map(stat1)
    conv = A.map.reduce
} u
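The loop above can be sketched in plain Java as a toy simulation (this is not Flink's iteration API; the dataset, the statistic, and the convergence threshold are all made up for illustration):

```java
import java.util.Arrays;

// Toy simulation of the iterate-until-convergence pattern above.
// "stat1 = A.map.reduce" is the aggregation step, "A.update-map" the
// model update; all names and the threshold are illustrative only.
public class ConvergenceLoop {
    public static double fit(double[] data, double epsilon) {
        double model = 0.0;   // initial model parameter
        double conv;
        do {
            final double current = model;
            // map/reduce: aggregate a statistic over the data
            double stat = Arrays.stream(data).map(x -> x - current).sum() / data.length;
            // update-map: shift the model by the statistic
            double updated = current + stat;
            // second map/reduce: measure how far the model moved
            conv = Math.abs(updated - current);
            model = updated;
        } while (conv > epsilon);   // the truncated "} u..." condition above
        return model;
    }

    public static void main(String[] args) {
        System.out.println(fit(new double[] {1.0, 2.0, 3.0}, 1e-9)); // 2.0
    }
}
```

In a real Flink job this driver-side do/while would instead be expressed with a bulk iteration so the loop runs inside the cluster rather than resubmitting a job per step.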
Vasia Kalavri created FLINK-3656:
Summary: Rework TableAPI tests
Key: FLINK-3656
URL: https://issues.apache.org/jira/browse/FLINK-3656
Project: Flink
Issue Type: Improvement
Compone
Hello everybody,
in the past few days my colleagues and I ran some tests with Flink on YARN
and detected a possibly inconsistent behavior in the way the contents of
the flink/lib directory are shipped to the cluster when running on YARN,
depending on whether the jobs are deployed individually or
I have tried both the log4j logger and the System.out.println option, but
neither worked.
From what I have seen so far, the filesystem streaming connector classes are
not packaged in the grand jar (flink-dist_2.10-1.1-SNAPSHOT.jar) that is
copied under /build-target/lib location as par
Gna Phetsarath created FLINK-3655:
Summary: Allow comma-separated multiple directories to be
specified for FileInputFormat
Key: FLINK-3655
URL: https://issues.apache.org/jira/browse/FLINK-3655
Projec
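A minimal sketch of what the requested feature implies: splitting a comma-separated path string into individual directories before handing them to an input format. The helper below is hypothetical, not Flink's actual FileInputFormat implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper illustrating the FLINK-3655 request: split a
// comma-separated path string into individual directory paths,
// trimming whitespace and dropping empty entries.
public class MultiPathParser {
    public static List<String> parsePaths(String commaSeparated) {
        List<String> paths = new ArrayList<>();
        for (String part : commaSeparated.split(",")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                paths.add(trimmed);
            }
        }
        return paths;
    }

    public static void main(String[] args) {
        System.out.println(parsePaths("hdfs:///data/a, hdfs:///data/b,"));
        // [hdfs:///data/a, hdfs:///data/b]
    }
}
```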
Hi Maximilian,
Thanks for the email and looking into the issue. I'm using Scala 2.11 so it
sounds perfect to me...
I will be more than happy to test it out.
On Tue, Mar 22, 2016 at 2:48 AM, Maximilian Michels wrote:
> Hi Deepak,
>
> We have looked further into this and have a pretty easy fix. Ho
Aljoscha Krettek created FLINK-3654:
Summary: Disable Write-Ahead-Log in RocksDB State
Key: FLINK-3654
URL: https://issues.apache.org/jira/browse/FLINK-3654
Project: Flink
Issue Type: Impr
Stefano Baghino created FLINK-3653:
Summary: recovery.zookeeper.storageDir is not documented on the
configuration page
Key: FLINK-3653
URL: https://issues.apache.org/jira/browse/FLINK-3653
Project:
Maximilian Michels created FLINK-3652:
Summary: Enable UnusedImports check for Scala checkstyle
Key: FLINK-3652
URL: https://issues.apache.org/jira/browse/FLINK-3652
Project: Flink
Issue
Hi all,
I've been having some problems with RichMapPartitionFunction. First, I
tried to convert the iterable into an array, without success. Then I
used some buffers to store the values per column. I am trying to
transpose the local matrix of LabeledVectors that I have in each partition
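The per-column buffering described above can be sketched in plain Java: collect each partition's rows into one buffer per column, which is exactly a transpose. This is a standalone illustration, not a RichMapPartitionFunction; all names are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the buffering approach: consume the partition's rows once
// (an Iterable can typically only be traversed once inside
// mapPartition) and append each value to its column's buffer,
// transposing a list of rows into a list of columns.
public class PartitionTranspose {
    public static List<List<Double>> transpose(Iterable<double[]> rows) {
        List<List<Double>> columns = new ArrayList<>();
        for (double[] row : rows) {
            // lazily create one buffer per column on the first row
            while (columns.size() < row.length) {
                columns.add(new ArrayList<>());
            }
            for (int i = 0; i < row.length; i++) {
                columns.get(i).add(row[i]);
            }
        }
        return columns;
    }

    public static void main(String[] args) {
        List<double[]> rows = List.of(new double[] {1, 2}, new double[] {3, 4});
        System.out.println(transpose(rows)); // [[1.0, 3.0], [2.0, 4.0]]
    }
}
```

Buffering like this is usually more reliable than trying to materialize the iterable into an array in one shot, since the partition size is not known up front.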
Aljoscha Krettek created FLINK-3651:
Summary: Fix faulty RollingSink Restore
Key: FLINK-3651
URL: https://issues.apache.org/jira/browse/FLINK-3651
Project: Flink
Issue Type: Bug
Till Rohrmann created FLINK-3650:
Summary: Add maxBy/minBy to Scala DataSet API
Key: FLINK-3650
URL: https://issues.apache.org/jira/browse/FLINK-3650
Project: Flink
Issue Type: Improvement
Till Rohrmann created FLINK-3649:
Summary: Document stable API methods maxBy/minBy
Key: FLINK-3649
URL: https://issues.apache.org/jira/browse/FLINK-3649
Project: Flink
Issue Type: Improvement
Aljoscha Krettek created FLINK-3648:
Summary: Introduce Trigger Test Harness
Key: FLINK-3648
URL: https://issues.apache.org/jira/browse/FLINK-3648
Project: Flink
Issue Type: Sub-task
Aljoscha Krettek created FLINK-3647:
Summary: Change StreamSource to use Processing-Time Clock Service
Key: FLINK-3647
URL: https://issues.apache.org/jira/browse/FLINK-3647
Project: Flink
Aljoscha Krettek created FLINK-3646:
Summary: Use Processing-Time Clock in Window Assigners/Triggers
Key: FLINK-3646
URL: https://issues.apache.org/jira/browse/FLINK-3646
Project: Flink
Is
Hi,
how are you printing the debug statements?
But yeah, all the logic of renaming in-progress files and cleaning up after a
failed job happens in restoreState(BucketState state). The steps are roughly
these:
1. Move the current in-progress file to its final location
2. Truncate the file if necessary (i
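The two restore steps above can be sketched with plain java.nio file operations. This is a simplified local-filesystem illustration, not the actual RollingSink code, and the valid-length bookkeeping is assumed to come from the last checkpoint:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Simplified illustration of the restore steps, using local files
// instead of HDFS: move the in-progress file to its final location,
// then truncate it back to the byte length recorded in the last
// successful checkpoint. Not the actual RollingSink implementation.
public class RestoreSketch {
    public static void restore(Path inProgress, Path finalLocation, long validLength)
            throws IOException {
        // step 1: move the in-progress file to its final location
        Files.move(inProgress, finalLocation, StandardCopyOption.REPLACE_EXISTING);
        // step 2: truncate away bytes written after the checkpoint
        try (FileChannel ch = FileChannel.open(finalLocation, StandardOpenOption.WRITE)) {
            if (ch.size() > validLength) {
                ch.truncate(validLength);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path in = Files.createTempFile("part-0-0", ".in-progress");
        Files.write(in, new byte[] {1, 2, 3, 4, 5}); // 5 bytes written
        Path out = in.resolveSibling("part-0-0.final");
        restore(in, out, 3);                         // only 3 were checkpointed
        System.out.println(Files.size(out));         // 3
        Files.delete(out);
    }
}
```

On HDFS versions without truncate support, the same effect is achieved differently (e.g. by recording a valid-length marker file), which is part of why the restore logic is subtle.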
Hi Deepak,
We have looked further into this and have a pretty easy fix. However,
it will only work with Flink's Scala 2.11 version because newer
versions of the Akka library are incompatible with Scala 2.10 (Flink's
default Scala version). Would that be a viable option for you?
We're currently di
Hello,
I have enabled checkpointing and I am using RollingSink to sink the data to HDFS
(2.7.x) from a Kafka consumer. To simulate failover/recovery, I stopped a
TaskManager and the job got rescheduled to another TaskManager instance. At
this moment, the current "in-progress" file gets closed and renamed
Hi,
I have some thoughts about Evictors as well yes, but I didn’t yet write them
down. The basic idea about them is this:
class Evictor<T, W> {
    Predicate<T> getPredicate(Iterable<StreamRecord<T>> elements, int size, W window);
}
class Predicate<T> {
    boolean evict(StreamRecord<T> element);
}
The evictor will return a pr
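A runnable toy version of this two-step idea, using plain Java collections in place of Flink's window internals (java.util.function.Predicate stands in for the proposed Predicate class, Long values stand in for StreamRecords; this is a sketch of the proposal, not a final API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy version of the proposed two-step eviction: first derive a
// predicate from the full pane contents, then apply it element by
// element. The "evict everything below the mean" policy is just an
// example to make the two phases visible.
public class EvictionSketch {
    // "getPredicate": given all elements, build the eviction decision
    public static Predicate<Long> belowMean(List<Long> elements) {
        double mean = elements.stream().mapToLong(Long::longValue).average().orElse(0);
        return e -> e < mean;
    }

    // apply the predicate to each element, keeping the survivors
    public static List<Long> applyEviction(List<Long> elements) {
        Predicate<Long> evict = belowMean(elements);
        List<Long> kept = new ArrayList<>();
        for (Long e : elements) {
            if (!evict.test(e)) {
                kept.add(e);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        System.out.println(applyEviction(List.of(1L, 2L, 3L, 4L))); // [3, 4]
    }
}
```

The appeal of the split is that the expensive part (inspecting all elements) happens once per pane, while the per-element decision stays a cheap boolean test.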
Chiwan Park created FLINK-3645:
Summary: HDFSCopyUtilitiesTest fails in a Hadoop cluster
Key: FLINK-3645
URL: https://issues.apache.org/jira/browse/FLINK-3645
Project: Flink
Issue Type: Bug
Thanks for the write-up, Aljoscha.
I think it is a really good idea to separate the different aspects (fire,
purging, lateness) a bit. At the moment, all of these need to be handled in
the Trigger, and a custom trigger is necessary whenever you want some of
these aspects handled slightly differently
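One way to picture the separation: instead of one monolithic trigger callback deciding everything, each aspect becomes its own small decision that is combined at the end. The enum and combination logic below are a conceptual sketch only, not Flink's actual API:

```java
// Conceptual sketch of splitting the trigger decision into independent
// aspects (fire, purge) that are combined afterwards; lateness would be
// a third, similar knob. Illustrative only, not Flink's Trigger API.
public class TriggerAspects {
    public enum Result { CONTINUE, FIRE, PURGE, FIRE_AND_PURGE }

    public static Result combine(boolean shouldFire, boolean shouldPurge) {
        if (shouldFire && shouldPurge) return Result.FIRE_AND_PURGE;
        if (shouldFire) return Result.FIRE;
        if (shouldPurge) return Result.PURGE;
        return Result.CONTINUE;
    }

    public static void main(String[] args) {
        // e.g. the watermark passed the window end (fire), but the
        // window state is kept around for late elements (no purge yet)
        System.out.println(combine(true, false)); // FIRE
    }
}
```

With the aspects decoupled, "fire but keep state for late data" stops requiring a fully custom trigger and becomes just a different combination of the same small decisions.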