Hi wangsan,
What I mean is establishing a connection each time we write data into JDBC,
i.e., establishing a connection in the flush() function. I think this will make
sure the connection is OK. What do you think?
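For illustration only, a minimal sketch of that idea could look like the code below. It assumes a buffered writer doing a plain JDBC batch insert; the class name JdbcBatchWriter and its fields are made up and this is not the actual JDBCOutputFormat code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Hypothetical sketch: (re)open the connection inside flush() so every batch
// write starts from a fresh, known-good connection.
public class JdbcBatchWriter {
    private final String dbUrl;
    private final String insertQuery;

    public JdbcBatchWriter(String dbUrl, String insertQuery) {
        this.dbUrl = dbUrl;
        this.insertQuery = insertQuery;
    }

    public void flush(List<Object[]> bufferedRows) throws SQLException {
        // Establish a new connection for this flush and close it when the batch is done.
        try (Connection connection = DriverManager.getConnection(dbUrl);
             PreparedStatement statement = connection.prepareStatement(insertQuery)) {
            for (Object[] row : bufferedRows) {
                for (int i = 0; i < row.length; i++) {
                    statement.setObject(i + 1, row[i]);
                }
                statement.addBatch();
            }
            statement.executeBatch();
        }
    }
}

The obvious trade-off is the cost of opening a connection per flush, which is what the connection-pool suggestion later in this thread would address.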
On Wed, Jul 11, 2018 at 12:12 AM, wangsan wrote:
> Hi Hequn,
>
> Establishing a connection
-1
./examples/streaming folder is missing in binary packages
Cheers,
Yazdan
> On Jul 10, 2018, at 9:57 PM, vino yang wrote:
>
> +1
> reviewed [1], [4] and [6]
>
> 2018-07-11 3:10 GMT+08:00 Chesnay Schepler :
>
>> Hi everyone,
>> Please review and vote on the release candidate #3 for the ver
+1
reviewed [1], [4] and [6]
2018-07-11 3:10 GMT+08:00 Chesnay Schepler :
> Hi everyone,
> Please review and vote on the release candidate #3 for the version 1.5.1,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The comple
I now see that the test that fails for me already has this check. But I don't
think it is done correctly.
https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/test/java/org/apache/flink/runtime/fs/hdfs/HdfsBehaviorTest.java
has verifyOS() but a failed assumption in t
Hi everyone,
Please review and vote on the release candidate #3 for the version
1.5.1, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes [1],
That flat-out disables all tests in the module, even those that could
run on Windows.
We commonly add an OS check to the respective tests that skips them, using an
"Assume.assumeTrue(os != windows)" statement in a "@BeforeClass" method.
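A minimal sketch of that pattern with JUnit 4 follows; the class and test names are illustrative, not the actual HdfsBehaviorTest.

import org.junit.Assume;
import org.junit.BeforeClass;
import org.junit.Test;

// Illustrative only: a failed assumption in @BeforeClass marks all tests of the
// class as skipped on Windows instead of letting them fail.
public class SomeHdfsDependentTest {

    @BeforeClass
    public static void verifyOS() {
        String os = System.getProperty("os.name").toLowerCase();
        Assume.assumeTrue("HDFS-based tests are skipped on Windows", !os.contains("windows"));
    }

    @Test
    public void testSomethingThatNeedsHdfs() throws Exception {
        // test body that relies on HDFS ...
    }
}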
On 10.07.2018 21:00, NEKRASSOV, ALEXEI wrote:
I added lin
I added lines below to flink-hadoop-fs/pom.xml, and that allowed me to turn off
the tests that were failing for me.
Do we want to add this change to master?
If so, do I need to document this new switch somewhere?
(the build then hangs for me at flink-runtime, but that's a different issue)
Tests r
Chesnay Schepler created FLINK-9796:
---
Summary: Add failure handling to release guide
Key: FLINK-9796
URL: https://issues.apache.org/jira/browse/FLINK-9796
Project: Flink
Issue Type: Improvement
Leonid Ishimnikov created FLINK-9795:
Summary: Update Mesos documentation for flip6
Key: FLINK-9795
URL: https://issues.apache.org/jira/browse/FLINK-9795
Project: Flink
Issue Type: Improvement
wangsan created FLINK-9794:
--
Summary: JDBCOutputFormat does not consider idle connection and
multithreads synchronization
Key: FLINK-9794
URL: https://issues.apache.org/jira/browse/FLINK-9794
Project: Flink
Hi Hequn,
Establishing a connection for each batch write may also have the idle connection
problem, since we are not sure when the connection will be closed. We call the
flush() method when a batch is finished or when snapshotting state, but what if the
snapshot is not enabled and the batch size is not reached be
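One way to mitigate the idle-connection concern, as a rough sketch (the ConnectionGuard class and all names in it are made up for illustration, not existing Flink code): validate the connection before each batch write and reopen it if it has gone stale.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Illustration only: check the connection before a batch write and reconnect
// if the database closed it due to an idle timeout.
public class ConnectionGuard {
    private static final int VALIDATION_TIMEOUT_SECONDS = 5;

    private final String dbUrl;
    private Connection connection;

    public ConnectionGuard(String dbUrl) {
        this.dbUrl = dbUrl;
    }

    public synchronized Connection ensureConnection() throws SQLException {
        // Connection.isValid() issues a lightweight check; if it fails (or the
        // connection was never opened), establish a new connection.
        if (connection == null || !connection.isValid(VALIDATION_TIMEOUT_SECONDS)) {
            connection = DriverManager.getConnection(dbUrl);
        }
        return connection;
    }
}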
There's currently no workaround except going in and manually disabling them.
On 10.07.2018 16:32, Chesnay Schepler wrote:
Generally, any test that uses HDFS will fail on Windows. We've
disabled most of them, but some slip through from time to time.
Note that we do not provide any guarantees fo
Generally, any test that uses HDFS will fail on Windows. We've disabled
most of them, but some slip through from time to time.
Note that we do not provide any guarantees for all tests passing on Windows.
On 10.07.2018 16:28, NEKRASSOV, ALEXEI wrote:
I'm running 'mvn clean verify' on Windows wi
I'm running 'mvn clean verify' on Windows with no Hadoop libraries installed,
and the build fails (see below).
What's the solution? Is there a switch to skip Hadoop-related tests?
Or do I need to install Hadoop libraries?
Thanks,
Alex
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
linzhongjun created FLINK-9793:
--
Summary: flink on yarn: flink-dist*.jar is uploaded repeatedly when submitting with yarn-cluster
Key: FLINK-9793
URL: https://issues.apache.org/jira/browse/FLINK-9793
Project: Flink
Issue Type: Improvement
HeartSaVioR, thanks for the helpful pointer!
PR created: https://github.com/apache/flink/pull/6295
-Original Message-
From: Jungtaek Lim [mailto:kabh...@gmail.com]
Sent: Monday, July 09, 2018 11:43 PM
To: dev@flink.apache.org
Cc: Chesnay Schepler
Subject: Re: 'mvn verify' fails on rat p
Dawid Wysakowicz created FLINK-9792:
---
Summary: Cannot add html tags in options description
Key: FLINK-9792
URL: https://issues.apache.org/jira/browse/FLINK-9792
Project: Flink
Issue Type: Bug
I'm canceling the RC to include a fix for "FLINK-9789 - Watermark
metrics for an operator&task shadow each other".
On 06.07.2018 17:09, Chesnay Schepler wrote:
Hi everyone,
Please review and vote on the release candidate #2 for the version
1.5.1, as follows:
[ ] +1, Approve the release
[ ] -1
Dawid Wysakowicz created FLINK-9791:
---
Summary: Outdated savepoint compatibility table
Key: FLINK-9791
URL: https://issues.apache.org/jira/browse/FLINK-9791
Project: Flink
Issue Type: Bug
Timo Walther created FLINK-9790:
---
Summary: Add documentation for UDF in SQL Client
Key: FLINK-9790
URL: https://issues.apache.org/jira/browse/FLINK-9790
Project: Flink
Issue Type: Improvement
Hi wangsan,
I agree with you. It would be great if you could open a JIRA for the problem.
For the first problem, I think we need to establish a connection each time we
execute a batch write, and it is better to get the connection from a
connection pool.
For the second problem, to avoid multithread pro
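To make the two suggestions concrete, here is a rough sketch under stated assumptions: PooledJdbcWriter and everything in it is illustrative, not the real JDBCOutputFormat. It borrows a connection from any javax.sql.DataSource-backed pool per batch, and synchronizes writeRecord()/flush() so a timer-triggered flush cannot race with record writes.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Illustration only: per-batch connections from a pool plus synchronized access
// to the shared batch buffer.
public class PooledJdbcWriter {
    private final DataSource dataSource;  // e.g. backed by a connection-pooling library
    private final String insertQuery;
    private final List<Object[]> buffer = new ArrayList<>();

    public PooledJdbcWriter(DataSource dataSource, String insertQuery) {
        this.dataSource = dataSource;
        this.insertQuery = insertQuery;
    }

    public synchronized void writeRecord(Object[] row) {
        buffer.add(row);
    }

    public synchronized void flush() throws SQLException {
        if (buffer.isEmpty()) {
            return;
        }
        // Borrowing a pooled connection per flush avoids keeping an idle
        // connection open between batches.
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(insertQuery)) {
            for (Object[] row : buffer) {
                for (int i = 0; i < row.length; i++) {
                    statement.setObject(i + 1, row[i]);
                }
                statement.addBatch();
            }
            statement.executeBatch();
        }
        buffer.clear();
    }
}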
+1 to Chesnay's proposal to create a new RC with a shortened voting period.
On Tue, Jul 10, 2018 at 2:02 PM Chesnay Schepler wrote:
> I've opened a PR for the metric issue:
> https://github.com/apache/flink/pull/6292
>
> Given that we've already got the required votes and still got 24h left,
> I
I've opened a PR for the metric issue:
https://github.com/apache/flink/pull/6292
Given that we've already got the required votes and still got 24h left,
I would like to cancel this vote and create a new RC this evening with a
shortened voting period (24h).
Virtually all checks made so far (excl
+1
* started local cluster using start-cluster.bat and checked log files
* uploaded and submitted multiple jobs through WebUI
On 10.07.2018 12:47, Chesnay Schepler wrote:
This issue has already affected 1.5.0 (in other places).
I rescind my -1 and will continue testing.
On 10.07.2018 12:45, Che
This issue has already affected 1.5.0 (in other places).
I rescind my -1 and will continue testing.
On 10.07.2018 12:45, Chesnay Schepler wrote:
I've linked the wrong jira:
https://issues.apache.org/jira/browse/FLINK-9789
On 10.07.2018 12:42, Chesnay Schepler wrote:
-1
I found an issue wher
I've linked the wrong jira: https://issues.apache.org/jira/browse/FLINK-9789
On 10.07.2018 12:42, Chesnay Schepler wrote:
-1
I found an issue where watermark metrics override each other, which
would be a regression to 1.5.0:
https://issues.apache.org/jira/browse/FLINK-8731
On 10.07.2018 11:
Chesnay Schepler created FLINK-9789:
---
Summary: Watermark metrics for an operator&task shadow each other
Key: FLINK-9789
URL: https://issues.apache.org/jira/browse/FLINK-9789
Project: Flink
-1
I found an issue where watermark metrics override each other, which
would be a regression to 1.5.0:
https://issues.apache.org/jira/browse/FLINK-8731
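As a toy illustration of what "shadow each other" means (this is not Flink's metric code, just a made-up example): if two components register a gauge under the same name in a map-backed registry, the second registration silently replaces the first, so only one of the two watermark values is ever reported.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Made-up demo of name shadowing in a naive metric registry.
public class MetricShadowingDemo {
    public static void main(String[] args) {
        Map<String, Supplier<Long>> registry = new HashMap<>();
        registry.put("currentInputWatermark", () -> 100L); // registered first
        registry.put("currentInputWatermark", () -> 42L);  // same name, replaces the first
        // Only the second registration survives; the first metric is shadowed.
        System.out.println(registry.get("currentInputWatermark").get()); // prints 42
    }
}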
On 10.07.2018 11:12, Gary Yao wrote:
+1 (non-binding)
Ran Jepsen tests [1] on EC2 for around 8 hours without issues.
[1] https://github.co
Gary Yao created FLINK-9788:
---
Summary: ExecutionGraph Inconsistency prevents Job from recovering
Key: FLINK-9788
URL: https://issues.apache.org/jira/browse/FLINK-9788
Project: Flink
Issue Type: Bug
+1 (non-binding)
Ran Jepsen tests [1] on EC2 for around 8 hours without issues.
[1] https://github.com/apache/flink/pull/6240
On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek
wrote:
> +1 (binding)
>
> - verified signatures and checksums
> - built successfully from source
>
> > On 10. Jul 201
+1 (binding)
- verified signatures and checksums
- built successfully from source
> On 10. Jul 2018, at 09:48, Jeff Zhang wrote:
>
> +1.
>
> Built from source successfully.
>
> Tested scala-shell in local and yarn mode, works well.
>
>
>
> Till Rohrmann wrote on Tuesday, July 10, 2018 at 3:37 PM:
>
>> +1 (bi
Florian Schmidt created FLINK-9787:
--
Summary: Change ExecutionConfig#getGlobalJobParameters to return
an instance of GlobalJobParameters instead of null if no custom
globalJobParameters are set yet
Key: FLINK-9787
+1.
Built from source successfully.
Tested scala-shell in local and yarn mode, works well.
Till Rohrmann wrote on Tuesday, July 10, 2018 at 3:37 PM:
> +1 (binding)
>
> - Verified that no new dependencies were added for which the LICENSE and
> NOTICE files need to be adapted.
> - Build 1.5.1 from the source artif
+1 (binding)
- Verified that no new dependencies were added for which the LICENSE and
NOTICE files need to be adapted.
- Built 1.5.1 from the source artifact
- Ran flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7 binary
artifact
- Ran Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8
I've assigned you.
On Mon, Jul 9, 2018 at 5:53 PM NEKRASSOV, ALEXEI wrote:
> In JIRA I don't see an option to assign this issue to myself. Can someone
> please assign?
>
> Thanks,
> Alex
>
> -Original Message-
> From: Alexei Nekrassov (JIRA) [mailto:j...@apache.org]
> Sent: Monday, July