Hi All,
I'm new to Flink and was looking for a JDBC stream sink connector, and
didn't see one in flink-streaming-connectors. Is there one somewhere else,
or is one currently in development? If not, could I pick up a ticket
to add it?
Thanks,
Tim
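For reference, here is a rough sketch of what a minimal JDBC streaming sink could look like, built on the generic RichSinkFunction interface. The connection URL, table name, and record type below are illustrative assumptions, not an existing connector; a real connector would also need batching, retries, and checkpoint integration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// Sketch only: writes each record via plain JDBC; the URL and table are placeholders.
public class JdbcSink extends RichSinkFunction<String> {

    private transient Connection connection;
    private transient PreparedStatement statement;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Hypothetical in-memory database and table.
        connection = DriverManager.getConnection("jdbc:derby:memory:flinkdb;create=true");
        statement = connection.prepareStatement("INSERT INTO events (payload) VALUES (?)");
    }

    @Override
    public void invoke(String value) throws Exception {
        statement.setString(1, value);
        statement.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        if (statement != null) {
            statement.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}

Such a sink would be attached to a stream with stream.addSink(new JdbcSink()).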
Just looked at Greg's object juggling PR - looks good for inclusion in the
next release candidate.
Have not tested the web UI Router fix, but the code looks good.
Hi,
I have two bugfix pull requests in the stack.
[FLINK-3340] [runtime] Fix object juggling in drivers
https://github.com/apache/fli
Hi,
I had a similar issue recently.
Instead of
input.assignTimestampsAndWatermarks
you have to do:
input = input.assignTimestampsAndWatermarks
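For completeness, a self-contained sketch of the fix, assuming the 1.0-style DataStream API; the tuple type and the simple watermark logic below are only illustrative.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

// Sketch only: assignTimestampsAndWatermarks returns a NEW DataStream, so the
// result must be reassigned (or chained); calling it and discarding the result
// has no effect on downstream operators.
public class ReassignTimestampsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<Tuple2<String, Long>> input =
                env.fromElements(Tuple2.of("a", 1000L), Tuple2.of("b", 2000L));

        // Keep the returned stream; "input.assignTimestampsAndWatermarks(...)" alone is not enough.
        input = input.assignTimestampsAndWatermarks(
                new AssignerWithPeriodicWatermarks<Tuple2<String, Long>>() {
                    private long maxSeen = Long.MIN_VALUE;

                    @Override
                    public long extractTimestamp(Tuple2<String, Long> element, long previousTimestamp) {
                        maxSeen = Math.max(maxSeen, element.f1);
                        return element.f1; // event time carried in field f1
                    }

                    @Override
                    public Watermark getCurrentWatermark() {
                        // trail the highest timestamp seen so far
                        return new Watermark(maxSeen == Long.MIN_VALUE ? Long.MIN_VALUE : maxSeen - 1);
                    }
                });

        input.print();
        env.execute("Timestamp reassignment sketch");
    }
}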
On Thu, Feb 25, 2016 at 6:14 PM, Nam-Luc Tran
wrote:
> Hello everyone,
>
> I am currently playing with streams whose timestamps are defined by
> Event
Ufuk Celebi created FLINK-3517:
--
Summary: Number of job and task managers not checked in scripts
Key: FLINK-3517
URL: https://issues.apache.org/jira/browse/FLINK-3517
Project: Flink
Issue Type:
I think that forking off is the regular way we do the releases and I
don't see any reason not to do it this time.
– Ufuk
On Thu, Feb 25, 2016 at 7:41 PM, Robert Metzger wrote:
> Hi,
> I'd like to fork off a branch for the 1.0 release so that we can merge big
> changes into master. Any objectio
Ufuk Celebi created FLINK-3516:
--
Summary: JobManagerHACheckpointRecoveryITCase
testCheckpointedStreamingSumProgram still fails :(
Key: FLINK-3516
URL: https://issues.apache.org/jira/browse/FLINK-3516
Pro
On Thu, Feb 25, 2016 at 5:23 PM, Vasiliki Kalavri
wrote:
> - HA: tested on a 6-node cluster with 2 masters.
> Issues:
> 1. After new leader election, the job history is cleaned up (at least in
> the WebUI). Is this on purpose?
Yes, the job history is part of the job manager.
> 2. After cluster r
Hi,
I'd like to fork off a branch for the 1.0 release so that we can merge big
changes into master. Any objections?
On Thu, Feb 25, 2016 at 6:04 PM, Greg Hogan wrote:
> Hi,
>
> I have two bugfix pull requests in the stack.
>
> [FLINK-3340] [runtime] Fix object juggling in drivers
> https://git
Stephan Ewen created FLINK-3515:
---
Summary: Make the "file monitoring source" exactly-once
Key: FLINK-3515
URL: https://issues.apache.org/jira/browse/FLINK-3515
Project: Flink
Issue Type: Improv
Stephan Ewen created FLINK-3514:
---
Summary: Add support for slowly changing streaming broadcast
variables
Key: FLINK-3514
URL: https://issues.apache.org/jira/browse/FLINK-3514
Project: Flink
Is
Hello everyone,
I am currently playing with streams whose timestamps are defined by
event time. I currently have the following code:
final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().enableTimestamps();//.setAutoWatermarkInterval
Hi,
I have two bugfix pull requests in the stack.
[FLINK-3340] [runtime] Fix object juggling in drivers
https://github.com/apache/flink/pull/1626
[FLINK-3437] [web-dashboard] Fix UI router state for job plan
https://github.com/apache/flink/pull/1661
Greg
On Thu, Feb 25, 2016 at 8:32 AM, Ro
Gabor and Greg gave some good comments on the proposal.
If there is no more feedback, I'll go ahead and open a PR to update the
documentation tomorrow.
Thanks, Fabian
2016-02-24 12:24 GMT+01:00 Fabian Hueske :
> Regarding the scope of the object-reuse setting, I agree with Greg.
> It would be v
Hi Vasia,
In the WebUI, the Subtasks and TaskManagers views list the same operator
statistics but expand to show either per-subtask or per-TaskManager
statistics. Summarizing the statistics by TaskManager is valuable when
viewing larger clusters.
Greg
On Thu, Feb 25, 2016 at 11:23 AM, Vasiliki Kalavri
Hi squirrels,
here's my testing outcome so far:
- Examples: Ran all examples locally and on a cluster, both from CLI and
web submission tool
Issues:
1. PageRank example doesn't run without arguments anymore. I have a fix
together with some doc fixes.
- CLI: tested locally and on cluster
Issues:
Hi, I am just exploring Flink, and have run into a curious issue. I have
cloned from GitHub, checked out the release-1.0.0-rc1 branch, and built
from the command line - no errors. I am using IntelliJ. I first tried
running some of the batch examples, and those run fine. Then I tried
stream examples
Thanks Marton, the issue is quite serious, I agree.
It is a bit tricky to solve, unfortunately. It seems very hard to make the
experience inside the IDE, with Maven, and with SBT smooth.
Maven/SBT packaging needs "provided" dependencies, while IntelliJ needs
"compile" dependencies.
On Thu, Feb 25, 201
Aljoscha Krettek created FLINK-3513:
---
Summary: Fix interplay of automatic Operator UID and Changing name
of WindowOperator
Key: FLINK-3513
URL: https://issues.apache.org/jira/browse/FLINK-3513
Proje
Aljoscha Krettek created FLINK-3512:
---
Summary: Savepoint backend should not revert to "jobmanager"
Key: FLINK-3512
URL: https://issues.apache.org/jira/browse/FLINK-3512
Project: Flink
Issue
@Stephan on PR #1685. Fair enough, 1.0.1 is fine. We will try to get it in
soon anyway.
Please consider a build inconvenience that I have just reported to both the
mailing list and JIRA in the meantime. [1]
[1] https://issues.apache.org/jira/browse/FLINK-3511
On Thu, Feb 25, 2016 at 3:27 PM, S
Filed JIRA ticket FLINK-3511 so that it can be referenced in other discussions. [1]
[1] https://issues.apache.org/jira/browse/FLINK-3511
On Thu, Feb 25, 2016 at 3:36 PM, Márton Balassi
wrote:
> Recent changes to the build [1] where many libraries got their core
> dependencies (the ones included in the flink
Márton Balassi created FLINK-3511:
-
Summary: Flink library examples not runnable without adding
dependencies
Key: FLINK-3511
URL: https://issues.apache.org/jira/browse/FLINK-3511
Project: Flink
Recent changes to the build [1] moved many libraries' core dependencies
(the ones included in the flink-dist fat jar) to the provided scope.
The reasoning was that when submitting to the Flink cluster the application
already has these dependencies, while when a user writes a program
Concerning the Pull Request mentioned by Marton:
I think it is a transparent bugfix patch. It is not really affecting
behavior that we want users to make assumptions about (assignment of keys
to partitions should not be hardwired into applications).
The only affected program I can think of that is
Ufuk Celebi created FLINK-3510:
--
Summary: Pattern class class-level comment misses type argument
Key: FLINK-3510
URL: https://issues.apache.org/jira/browse/FLINK-3510
Project: Flink
Issue Type:
Maybe you can check whether this is a problem, too:
https://issues.apache.org/jira/browse/FLINK-3501
Under the assumption that no major functionality changes, I will
continue with the functional checks.
On Thu, Feb 25, 2016 at 2:32 PM, Robert Metzger wrote:
> Damn. I agree that this is a blocker
Damn. I agree that this is a blocker.
I use the maven-enforcer-plugin to check for the right Maven version, but the
profile that runs the plugin is only active during "deploy", not when
packaging the binaries.
That's why I didn't realize that I built the binaries with the wrong Maven
version.
I sug
Hi folks,
I think I found a release blocker.
The flink-dist JAR file contains non-relocated classes of Google Guava and
Apache HttpComponents.
Fabian
2016-02-25 13:21 GMT+01:00 Chesnay Schepler :
> tested the RC on Windows:
>
> - source compiles
> - some tests categorically fail: see FLINK-3491
Robert Metzger created FLINK-3509:
-
Summary: Update Hadoop versions in release script and on travis to
the latest minor version
Key: FLINK-3509
URL: https://issues.apache.org/jira/browse/FLINK-3509
Pr
tested the RC on Windows:
- source compiles
- some tests categorically fail: see FLINK-3491 / FLINK-3496
- start/stop scripts work in both cygwin and windows CMD
- ran several examples from batch/streaming/python
- scripts also work on paths containing spaces
On 25.02.2016 12:41, Robert Metzger
(I'm removing user@ from the discussion)
Thank you for bringing the pull request to my attention, Marton. I have to
admit that I didn't announce this RC properly in advance. In the RC0 thread
I said "early next week" and now it's Thursday. I should have said something
in that thread.
The "trigger" f
Thanks for creating the candidate Robert and for the heads-up, Slim.
I would like to get a PR [1] in before 1.0.0 as it changes the hashing
behavior of DataStream.keyBy. The PR has the feature implemented and the Java
tests adapted; a small fix for the Scala tests is still outstanding. Gábor
Hor
Dear Flink community,
It is great news that the vote for the first release candidate (RC1) of Apache
Flink 1.0.0 is starting today, February 25th, 2016!
As a community, we need to double our efforts and make sure that Flink 1.0.0 is
GA before these two upcoming major events:
Strata + Hadoop World
Dear Flink community,
Please vote on releasing the following candidate as Apache Flink version 1.0.0.
I've set u...@flink.apache.org on CC because users are encouraged to help
test Flink 1.0.0 for their specific use cases. Please report issues (and
successful tests!) on dev@flink.apache.org.
Hi Greg,
I agree with you that the "fix version" field for unresolved issues is
probably used by issue creators to express their wish for fast resolution.
I also saw some cases where issues were reopened.
I agree with your suggestion to clear the "fix version" field once 1.0.0
has been released.
Chengxiang Li created FLINK-3508:
Summary: Add more test cases to verify the rules of logical plan
optimization
Key: FLINK-3508
URL: https://issues.apache.org/jira/browse/FLINK-3508
Project: Flink