Hi,
I found the fix for this issue and I'll create a pull request within the
next day.
Wow, great. Can you tell us what the issue was?
On 02.03.2015 09:31, "Dulaj Viduranga" wrote:
> Hi,
> I found the fix for this issue and I'll create a pull request within the
> next day.
>
Gyula Fora created FLINK-1618:
Summary: Add parallel time discretisation for time-window
transformations
Key: FLINK-1618
URL: https://issues.apache.org/jira/browse/FLINK-1618
Project: Flink
Gyula Fora created FLINK-1619:
Summary: Add pre-aggregator for time windows
Key: FLINK-1619
URL: https://issues.apache.org/jira/browse/FLINK-1619
Project: Flink
Issue Type: Improvement
Gyula Fora created FLINK-1620:
Summary: Add pre-aggregator for count windows
Key: FLINK-1620
URL: https://issues.apache.org/jira/browse/FLINK-1620
Project: Flink
Issue Type: Improvement
Calling:
java -cp ../examples/flink-java-examples-0.9-SNAPSHOT-KMeans.jar
org.apache.flink.examples.java.clustering.util.KMeansDataGenerator 500 10
0.08
will not connect to Flink. It's just running a standalone KMeans data
generator, not KMeans itself. I would suspect that the KMeans example is
not running
In some places in the code, "localhost" is hard-coded. When it is resolved
by DNS, it is possible to be directed to an IP other than 127.0.0.1 (e.g.
in the private range 10.0.0.0/8). I changed those places to 127.0.0.1 and
it works like a charm. But hard-coding 127.0.0.1 is not a good option.
Max Michels created FLINK-1621:
Summary: Create a generalized combine function
Key: FLINK-1621
URL: https://issues.apache.org/jira/browse/FLINK-1621
Project: Flink
Issue Type: Improvement
Aljoscha Krettek created FLINK-1622:
Summary: Add ReducePartial and GroupReducePartial Operators
Key: FLINK-1622
URL: https://issues.apache.org/jira/browse/FLINK-1622
Project: Flink
Great. Thank you.
I gave some feedback in the pull request and asked some questions there.
On Fri, Feb 27, 2015 at 5:43 PM, Mustafa Elbehery wrote:
> @robert,
>
> I have created the PR https://github.com/apache/flink/pull/442,
>
>
>
> On Fri, Feb 27, 2015 at 11:58 AM, Mustafa Elbehery <
> elbeh
Hi all!
ApacheCon is coming up and it is the 15th anniversary of the Apache
Software Foundation.
In the course of the conference, Apache would like to make a series of
announcements. If we manage to make a release during (or shortly before)
ApacheCon, they will announce it through their channels.
Hey,
We have a nice list of new features - it definitely makes sense to have
that as a release. On my side I really want to have a first limited version
of streaming fault tolerance in it.
+1 for Robert's proposal for the deadlines.
I'm also volunteering for release manager.
Best,
Marton
On Mo
Hi,
if I start "mvn -Dmaven.test.skip=true clean install", the build fails and
I get the following error:
> Unapproved licenses:
>
> flink-clients/bin/src/main/resources/web-docs/js/dagre-d3.js
> flink-clients/bin/src/main/resources/web-docs/js/d3.js
> flink-staging/flink-avro/bin/src/test/r
Hi Matthias,
I just checked and could not reproduce the error.
The files that Maven RAT complained about do not exist in Flink's master
branch.
I don't think they are put there as part of the build process.
Best, Fabian
2015-03-02 15:09 GMT+01:00 Matthias J. Sax :
> Hi,
>
> if I start "mvn -
Aljoscha Krettek created FLINK-1623:
Summary: Rename Expression API and Operation Representation
Key: FLINK-1623
URL: https://issues.apache.org/jira/browse/FLINK-1623
Project: Flink
Max Michels created FLINK-1624:
Summary: Build of old sources fails due to git-commit-id plugin
Key: FLINK-1624
URL: https://issues.apache.org/jira/browse/FLINK-1624
Project: Flink
Issue Type:
Hi there,
since I'm relying on Scala 2.11.4 on a project I've been working on, I
created a branch which updates the Scala version used by Flink from 2.10.4
to 2.11.4:
https://github.com/stratosphere/flink/commits/scala_2.11
Everything seems to work fine and the PR contains minor changes compared
Big +1 from my side!
Does it have to be a Maven profile, or does a maven property work? (Profile
may be needed for quasiquotes dependency?)
On Mon, Mar 2, 2015 at 4:36 PM, Alexander Alexandrov <
alexander.s.alexand...@gmail.com> wrote:
> Hi there,
>
> since I'm relying on Scala 2.11.4 on a proje
+1 I also like it. We just have to figure out how we can publish two
sets of release artifacts.
On Mon, Mar 2, 2015 at 4:48 PM, Stephan Ewen wrote:
> Big +1 from my side!
>
> Does it have to be a Maven profile, or does a maven property work? (Profile
> may be needed for quasiquotes dependency?)
>
Profile will be needed for the quasiquotes dependency and the maven
scalamacros plugin.
2015-03-02 16:48 GMT+01:00 Stephan Ewen :
> Big +1 from my side!
>
> Does it have to be a Maven profile, or does a maven property work? (Profile
> may be needed for quasiquotes dependency?)
>
> On Mon, Mar 2,
Matthias!
The files should not exist. Has some IDE setup copied the files into the
"bin" directory (as part of compiling it without Maven)? It looks like you
are not really building it through Maven...
BTW: Does it make a difference whether you use "mvn -Dmaven.test.skip=true
clean install" or "
Spark currently only provides pre-built binaries for 2.10 and requires a
custom build for 2.11.
Not sure whether this is the best idea, but I can see the benefits from a
project management point of view...
Would you prefer to have a {scala_version} × {hadoop_version} matrix
integrated on the website?
2015-03-02
I guess Eclipse created those files. I deleted them manually, which
resolved the problem. I had added "bin" to my local .gitignore, so "git
status" did not list the files and I was not aware that they are not part
of the repository.
As far as I know, "-Dmaven.test.skip=true" is equal to "-DskipTests"
Hey Santosh!
RDF processing often involves either joins or graph-query-like operations
(e.g. transitive queries). Flink is fairly good at both types of operations
(a small join sketch follows the links below).
I would look into the graph examples and the graph API for a start:
- Graph examples:
https://github.com/apache/flink/tree/master/flink-example
@Ted Here is a bit of background about how things are currently done in
the Flink runtime:
There are two execution modes for the runtime: "reuse" and "non-reuse".
- The "non-reuse" mode will create new objects for every record received
from the network, or taken out of a sort-buffer or hash-table.
On Mon, Mar 2, 2015 at 5:17 PM, Stephan Ewen wrote:
> There are two execution modes for the runtime: "reuse" and "non-reuse".
That makes a fair bit of sense.
Gyula Fora created FLINK-1625:
Summary: Add cancel method to user defined sources and sinks and
call them on task cancellation
Key: FLINK-1625
URL: https://issues.apache.org/jira/browse/FLINK-1625
Project: Flink
Stephan Ewen created FLINK-1626:
Summary: Spurious failure in MatchTask cancelling test
Key: FLINK-1626
URL: https://issues.apache.org/jira/browse/FLINK-1626
Project: Flink
Issue Type: Bug
+1 for Scala 2.11
On Mon, Mar 2, 2015 at 5:02 PM, Alexander Alexandrov <
alexander.s.alexand...@gmail.com> wrote:
> Spark currently only provides pre-built binaries for 2.10 and requires a
> custom build for 2.11.
>
> Not sure whether this is the best idea, but I can see the benefits from a
> project manag
Ufuk Celebi created FLINK-1627:
Summary: Netty channel connect deadlock
Key: FLINK-1627
URL: https://issues.apache.org/jira/browse/FLINK-1627
Project: Flink
Issue Type: Bug
Reporter: Ufuk Celebi
I'm +1 if this doesn't affect existing Scala 2.10 users.
I would also suggest adding a Scala 2.11 build to Travis to ensure
everything is working with the different Hadoop/JVM versions.
It shouldn't be a big deal to offer scala_version x hadoop_version builds
for newer releases.
You only n
Daniel Bali created FLINK-1628:
Summary: Strange behavior of "where" function during a join
Key: FLINK-1628
URL: https://issues.apache.org/jira/browse/FLINK-1628
Project: Flink
Issue Type: Bug
We have not excluded the "bin" directory by default, because the flink bin
directory (flink-dist/src/main/flink-bin/ with the bash scripts) should be
checked by RAT for valid headers.
Cheers,
Stephan
On Mon, Mar 2, 2015 at 5:02 PM, Matthias J. Sax <
mj...@informatik.hu-berlin.de> wrote:
> I gue
Robert Metzger created FLINK-1629:
Summary: Add option to start Flink on YARN in a detached mode
Key: FLINK-1629
URL: https://issues.apache.org/jira/browse/FLINK-1629
Project: Flink
Robert Metzger created FLINK-1630:
Summary: Add option to YARN client to re-allocate failed containers
Key: FLINK-1630
URL: https://issues.apache.org/jira/browse/FLINK-1630
Project: Flink
Stephan Ewen created FLINK-1631:
Summary: Port collisions in ProcessReaping tests
Key: FLINK-1631
URL: https://issues.apache.org/jira/browse/FLINK-1631
Project: Flink
Issue Type: Bug
Hi everyone,
February might be the shortest month of the year, but the community has
been pretty busy:
- Flink 0.8.1, a bugfix release, has been made available
- The project added a new committer
- Flink contributors developed a Flink adapter for Apache SAMOA
- Flink committers contributed to Go
+1 for Scala 2.11
Regards.
Chiwan Park (Sent with iPhone)
> On Mar 3, 2015, at 2:43 AM, Robert Metzger wrote:
>
> I'm +1 if this doesn't affect existing Scala 2.10 users.
>
> I would also suggest adding a Scala 2.11 build to Travis to ensure
> everything is working with the different
Vasia Kalavri created FLINK-1632:
Summary: Use DataSet's count() and collect() to simplify Gelly
methods
Key: FLINK-1632
URL: https://issues.apache.org/jira/browse/FLINK-1632
Project: Flink
Vasia Kalavri created FLINK-1633:
Summary: Add getTriplets() Gelly method
Key: FLINK-1633
URL: https://issues.apache.org/jira/browse/FLINK-1633
Project: Flink
Issue Type: New Feature
Hi Stephan,
What is the "Batch mode" feature in the list?
- Henry
On Mon, Mar 2, 2015 at 5:03 AM, Stephan Ewen wrote:
> Hi all!
>
> ApacheCon is coming up and it is the 15th anniversary of the Apache
> Software Foundation.
>
> In the course of the conference, Apache would like to make a series of
>
Hi,
Can someone help me access the flink-conf.yaml configuration values
inside the Flink sources? Are these readily available as a map somewhere?
Thanks.
I think that you can use `org.apache.flink.configuration.GlobalConfiguration`
to obtain the configuration object.
Regards.
Chiwan Park (Sent with iPhone)
> On Mar 3, 2015, at 12:17 PM, Dulaj Viduranga wrote:
>
> Hi,
> Can someone help me access the flink-conf.yaml configuration values
Dulaj Viduranga created FLINK-1634:
Summary: Fix "Could not build up connection to JobManager" issue
on some systems
Key: FLINK-1634
URL: https://issues.apache.org/jira/browse/FLINK-1634
Project: Flink