Hey,
I have encountered a weird issue in a checkpointing test I am trying to
write. The logic is the same as in the previous checkpointing tests: there
is a OnceFailingReducer.
My problem is that before the reducer fails, my job cannot take any
snapshots. The Runnables executing the checkpointi
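For context, the "fails exactly once" pattern such tests rely on can be sketched
without any Flink dependency (names and types here are illustrative, not the
actual test class):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Plain-Java sketch of a reducer that fails exactly once: the first call
// throws, every later call succeeds, so a restarted job can resume from a
// checkpoint taken before the failure.
public class OnceFailingReducerSketch {
    // Static flag so the failure is remembered across operator re-instantiation.
    static final AtomicBoolean hasFailed = new AtomicBoolean(false);

    static long reduce(long a, long b) {
        if (hasFailed.compareAndSet(false, true)) {
            throw new RuntimeException("Intentional one-time test failure");
        }
        return a + b;
    }

    public static void main(String[] args) {
        try {
            reduce(1, 2); // first call fails once
        } catch (RuntimeException expected) {
            System.out.println("failed once, as intended");
        }
        System.out.println(reduce(1, 2)); // later calls succeed: prints 3
    }
}
```

The point of the static flag is that the failure must survive the operator being
re-created on recovery; otherwise the job would fail forever.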
I have been trying to install, learn and understand Flink. I am using Scala-
EclipseIDE as my IDE.
I have downloaded the Flink source code, compiled it, and created the project.
My work laptop is Windows-based and I don't have an Eclipse-based workstation, but
I do have Linux boxes for running and tes
Hi Martin,
> What I got in mind is something like:
> 1 Compute partitions or a set of subgraphs
> (via CC, LP, Pattern Matching, ...)
> 2 Partition Vertices by computed Partition/Subgraph ID
> 3 Compute Algorithm X (Page Rank, BC, SSSP, FSM...) per
> Partition/Subgraph via PC iteration
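For illustration only, the quoted three-step plan could be sketched in plain
Java (the Vertex type, partition IDs, and the per-partition "algorithm" are
made up; in a real Gelly job this would run on distributed DataSets rather
than local collections):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the quoted plan: assume step 1 already attached a
// partition/subgraph ID to each vertex, group vertices by that ID (step 2),
// then run "algorithm X" per subgraph (step 3; a vertex count stands in for
// PageRank/BC/SSSP here).
public class PerPartitionSketch {
    static class Vertex {
        final long id;
        final int partition; // computed in step 1 (e.g. by connected components)
        Vertex(long id, int partition) { this.id = id; this.partition = partition; }
    }

    static Map<Integer, Long> algorithmXPerPartition(List<Vertex> vertices) {
        // Step 2: partition vertices by the computed partition/subgraph ID.
        Map<Integer, List<Vertex>> groups = new HashMap<>();
        for (Vertex v : vertices) {
            groups.computeIfAbsent(v.partition, p -> new ArrayList<>()).add(v);
        }
        // Step 3: run the per-subgraph computation independently per group.
        Map<Integer, Long> result = new HashMap<>();
        groups.forEach((p, vs) -> result.put(p, (long) vs.size()));
        return result;
    }

    public static void main(String[] args) {
        List<Vertex> vs = List.of(new Vertex(1, 0), new Vertex(2, 0), new Vertex(3, 1));
        System.out.println(algorithmXPerPartition(vs));
    }
}
```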
I did try
Hi!
This looks like a mismatch between the Scala dependency in Flink and Scala
in your Eclipse. Make sure you use the same version for both. By default, Flink
references Scala 2.10.
If your IDE is set up for Scala 2.11, set the Scala version variable in the
Flink root pom.xml to 2.11 as well.
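For illustration, the version switch amounts to changing the Scala properties
in the root pom.xml to something like the following (the property names and
the exact patch version are indicative; check the pom in your checkout):

```xml
<!-- Root pom.xml: Scala version used by the Flink build. -->
<properties>
  <scala.version>2.11.7</scala.version>
  <scala.binary.version>2.11</scala.binary.version>
</properties>
```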
Greetings,
Stephan
Hi,
I'm not an Eclipse user, so I'm not sure, but I think that the IDE Setup
documentation [1] on the Flink homepage could help you.
[1]
https://ci.apache.org/projects/flink/flink-docs-master/internals/ide_setup.html
> On Jan 8, 2016, at 8:30 PM, Stephan Ewen wrote:
>
> Hi!
>
> This looks like
Hmm, strange issue indeed.
So, checkpoints are definitely triggered (log message by coordinator to
trigger checkpoint) but are not completing?
Can you check which is the first checkpoint to complete? Is it Checkpoint
1, or a later one (indicating that checkpoint 1 was somehow subsumed)?
Can you c
Hi everybody,
recently we've seen an increased interest in complex event processing (CEP)
by Flink users. Even though most of the functionality to solve many use cases
is already there, it would still be helpful for most users to have an
easy-to-use library. Having such a library that allows defining co
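As a plain-Java illustration (not a proposed API) of the kind of thing a CEP
library makes convenient, here is a hand-rolled detector for a simple
"A followed by B" pattern; a real library would evaluate this incrementally
over a stream, with event-time ordering and richer pattern operators:

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled matching of an "A followed by B" pattern over a finite event
// sequence, to show what a CEP library would abstract away.
public class PatternSketch {
    /** Returns index pairs (i, j) where event i is "A" and j is the next "B". */
    static List<int[]> matchAFollowedByB(List<String> events) {
        List<int[]> matches = new ArrayList<>();
        for (int i = 0; i < events.size(); i++) {
            if (!events.get(i).equals("A")) continue;
            for (int j = i + 1; j < events.size(); j++) {
                if (events.get(j).equals("B")) {
                    matches.add(new int[] {i, j});
                    break; // only the first B after each A
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<String> events = List.of("A", "C", "B", "A");
        System.out.println(matchAFollowedByB(events).size()); // prints 1
    }
}
```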
Hi all!
Currently, Flink has a module to run batch program code on Tez rather than
Flink's own distributed execution engine.
I would suggest that we drop this code for the next release (1.0) as part
of a code consolidation:
- There seems to be little interest in both the Flink and the Tez community to use an
+1 from my side
Flink on Tez never got a lot of user traction. It served well as a
prototype of "this is possible", but since the core functionality will be
subsumed by making Flink on YARN resource elastic, I don't see any reason
we should have it as part of the Flink codebase.
Best,
Kostas
On
for clarification, I was talking about dropping the code, I am unsure about
the consequences of dripping code :-)
On Fri, Jan 8, 2016 at 4:57 PM, Kostas Tzoumas wrote:
> +1 from my side
>
> Flink on Tez never got a lot of user traction. It served well as a
> prototype of "this is possible", but
This is a very comprehensive document, incredible job!
It seems that most of the machinery is already in place in Flink, which
would make this a very valuable addition, taking the implementation
effort into account.
On Fri, Jan 8, 2016 at 3:54 PM, Till Rohrmann wrote:
> Hi everybody,
>
> recent
+1
I wanted to make a similar proposal.
– Ufuk
> On 08 Jan 2016, at 17:03, Kostas Tzoumas wrote:
>
> for clarification, I was talking about dropping the code, I am unsure about
> the consequences of dripping code :-)
>
> On Fri, Jan 8, 2016 at 4:57 PM, Kostas Tzoumas wrote:
>
>> +1 from my
+1, since removing code that is not really used increases the
maintainability of the code base.
On Fri, Jan 8, 2016 at 5:33 PM, Ufuk Celebi wrote:
> +1
>
> I wanted to make a similar proposal.
>
> – Ufuk
>
> > On 08 Jan 2016, at 17:03, Kostas Tzoumas wrote:
> >
> > for clarification, I was tal
> On 08 Jan 2016, at 15:54, Till Rohrmann wrote:
>
> Hi everybody,
>
> recently we've seen an increased interest in complex event processing (CEP)
> by Flink users. Even though most functionality is already there to solve
> many use cases it would still be helpful for most users to have an easy
A definite +1 for this feature, thanks for your effort Till!
I really look forward to the POC foundation and would like to help contribute
wherever possible.
Pattern matching along with event time support seems to be another major
breakthrough for stream processing framework options currently on t
Ted Yu created FLINK-3210:
-
Summary: Unnecessary call to deserializer#deserialize() in
LegacyFetcher#SimpleConsumerThread#run()
Key: FLINK-3210
URL: https://issues.apache.org/jira/browse/FLINK-3210
Project: F
Hi Devs,
It seems no release tag was pushed for 0.10.1. I presume this was an
oversight. Is there some place I can look to see from which SHA the 0.10.1
release was built? Are the RC vote threads the only canon in this matter?
Thanks,
Nick
An only-slightly-related question: Is Flink using Hadoop-version-specific
features in some way? IIRC, the basic APIs should be compatible back as far
as 2.2. I'm surprised to see builds of Flink explicitly against many Hadoop
versions, but with 2.5.x excluded.
-n
On Fri, Jan 8, 2016 at 9:45 AM, Nic
Looks super cool, Till!
Especially the section about the Patterns is great.
For the other parts, I was wondering about the overlap with the Table API
and the SQL efforts.
I was thinking that a first version could really focus on the Patterns and
make the assumption that they are always applied on
Hi Nick!
We have not pushed a release tag, but have a frozen release-0.10.1-RC1
branch (https://github.com/apache/flink/tree/release-0.10.1-rc1)
A tag would be great, agree!
Flink's core does not depend on Hadoop. The parts that reference
Hadoop (HDFS filesystem, YARN, MapReduce function/for
First of all, apologies for cross-posting. If you reply, please remove
dev@flink and reply to dev@calcite only.
Calcite contributors,
Fabian Hueske has sent an email to dev@flink [1] on their plans to add
SQL and streaming SQL support to Flink using Calcite, and also
included a design document [2
Yes, a tag would be very good practice, IMHO. Those of us who need to run
release + patches appreciate the release hygiene :)
If all builds are created equal re: Hadoop versions, I recommend against
publishing Hadoop-specific tarballs on the downloads page; it left me quite
confused, as I'm sure i
I am excited and nervous at the same time =)
- Henry
On Thu, Jan 7, 2016 at 6:05 AM, Fabian Hueske wrote:
> Hi everybody,
>
> in the last days, Timo and I refined the design document for adding a SQL /
> StreamSQL interface on top of Flink that was started by Stephan.
>
> The document proposes a
Tzu-Li (Gordon) Tai created FLINK-3211:
--
Summary: Add AWS Kinesis streaming connector
Key: FLINK-3211
URL: https://issues.apache.org/jira/browse/FLINK-3211
Project: Flink
Issue Type: New