[jira] [Created] (FLINK-17901) Add module interface in PyFlink

2020-05-24 Thread Dian Fu (Jira)
Dian Fu created FLINK-17901:
---

 Summary: Add module interface in PyFlink
 Key: FLINK-17901
 URL: https://issues.apache.org/jira/browse/FLINK-17901
 Project: Flink
  Issue Type: Task
  Components: API / Python
Reporter: Dian Fu
 Fix For: 1.12.0


The "load_module" and "unload_module" interfaces in the Java TableEnvironment 
are not available in the PyFlink Table API. We should provide these interfaces 
in PyFlink Table API as I think these interfaces are also valuable for Python 
users.
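
For reference, a minimal Java sketch of the existing Java-side API that the 
Python interfaces would mirror (the module name "myCore" is only an 
illustrative choice, not part of the proposal):

{code}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.module.CoreModule;

public class ModuleApiSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().build());

        // Unload the built-in core module and re-register it under another name.
        tEnv.unloadModule("core");
        tEnv.loadModule("myCore", CoreModule.INSTANCE);
    }
}
{code}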



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17902) Support the new interfaces about temporary functions in PyFlink

2020-05-24 Thread Dian Fu (Jira)
Dian Fu created FLINK-17902:
---

 Summary: Support the new interfaces about temporary functions in 
PyFlink
 Key: FLINK-17902
 URL: https://issues.apache.org/jira/browse/FLINK-17902
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Reporter: Dian Fu
 Fix For: 1.12.0


Interfaces such as createTemporarySystemFunction, dropTemporarySystemFunction, 
createFunction, dropFunction, createTemporaryFunction and dropTemporaryFunction 
in the Java TableEnvironment are currently not available in PyFlink. The aim of 
this JIRA is to add support for them in PyFlink.
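
For reference, a minimal Java sketch of the existing Java-side calls that the 
Python counterparts would mirror (MyUpper is a hypothetical UDF used only for 
illustration):

{code}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class TemporaryFunctionSketch {

    // Hypothetical scalar UDF, used only to illustrate the registration calls.
    public static class MyUpper extends ScalarFunction {
        public String eval(String s) {
            return s == null ? null : s.toUpperCase();
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().build());

        // Register and later drop a temporary system function by class.
        tEnv.createTemporarySystemFunction("myUpper", MyUpper.class);
        tEnv.dropTemporarySystemFunction("myUpper");
    }
}
{code}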



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17903) Evolve WatermarkOutputMultiplexer to make it reusable in FLIP-27 Sources

2020-05-24 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-17903:


 Summary: Evolve WatermarkOutputMultiplexer to make it reusable in 
FLIP-27 Sources 
 Key: FLINK-17903
 URL: https://issues.apache.org/jira/browse/FLINK-17903
 Project: Flink
  Issue Type: Sub-task
  Components: API / DataStream
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.11.0


The {{WatermarkOutputMultiplexer}} merges multiple independently generated 
watermarks.

To make it usable in the FLIP-27 sources, we need to (see the sketch below)
  - change its IDs from integers to Strings (to match split IDs)
  - support de-registration of local outputs (when splits are finished)
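
A minimal sketch of what the evolved interface could look like, assuming both 
changes are applied as described (the exact method names are an assumption):

{code}
import org.apache.flink.api.common.eventtime.WatermarkOutput;

public interface EvolvedWatermarkMultiplexer {

    // Registers a per-split output under the given split ID and returns
    // the WatermarkOutput that the split's watermark generator writes to.
    WatermarkOutput registerNewOutput(String splitId);

    // Removes the output of a finished split so that it no longer holds
    // back the combined watermark; returns true if the ID was registered.
    boolean unregisterOutput(String splitId);
}
{code}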



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17904) Add "scheduleWithFixedDelay" to ProcessingTimeService

2020-05-24 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-17904:


 Summary: Add "scheduleWithFixedDelay" to ProcessingTimeService
 Key: FLINK-17904
 URL: https://issues.apache.org/jira/browse/FLINK-17904
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Task
Reporter: Stephan Ewen
Assignee: Stephan Ewen


Adding {{"scheduleWithFixedDelay(...)"}} to {{ProcessingTimeService}} better 
support cases where fired timers are backed up. Rather than immediately firing 
again, they would wait their scheduled delay.

The implementation can be added in ProcessingTimeService in the exact same way 
as {{"scheduleAtFixedRate"}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17905) Docs for JDBC connector show licence and markup

2020-05-24 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-17905:
-

 Summary: Docs for JDBC connector show licence and markup
 Key: FLINK-17905
 URL: https://issues.apache.org/jira/browse/FLINK-17905
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.11.0
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17906) Fix performance issues in WatermarkOutputMultiplexer

2020-05-24 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-17906:


 Summary: Fix performance issues in WatermarkOutputMultiplexer
 Key: FLINK-17906
 URL: https://issues.apache.org/jira/browse/FLINK-17906
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.11.0


The WatermarkOutputMultiplexer has some potential for performance improvements:
  - not using volatile variables (all accesses happen anyway from the mailbox or 
under the legacy checkpoint lock)
  - using boolean logic instead of branches (illustrated below)
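
A minimal illustration (not the actual Flink code) of the second point:

{code}
// Illustration only: updating a combined watermark without an explicit
// if-branch; the variables are placeholders.
long maxWatermark = Long.MIN_VALUE;
long newWatermark = 42L;
boolean updated = false;

// Instead of:
//   if (newWatermark > maxWatermark) { maxWatermark = newWatermark; updated = true; }
// one can write:
updated |= newWatermark > maxWatermark;              // boolean logic, no branch
maxWatermark = Math.max(maxWatermark, newWatermark);
{code}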



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17907) flink-table-api-java: Compilation failure

2020-05-24 Thread Aihua Li (Jira)
Aihua Li created FLINK-17907:


 Summary: flink-table-api-java: Compilation failure
 Key: FLINK-17907
 URL: https://issues.apache.org/jira/browse/FLINK-17907
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
 Environment: local env
Reporter: Aihua Li
 Fix For: 1.11.0


When I execute the command "mvn clean install -B -U -DskipTests 
-Dcheckstyle.skip=true -Drat.ignoreErrors -Dmaven.javadoc.skip" on the 
"master" and "release-1.11" branches to install Flink in my local environment, 
I hit this failure:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) 
on project flink-table-api-java: Compilation failure
[ERROR] 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/utils/AggregateOperationFactory.java:[550,53]
 unreported exception X; must be caught or declared to be thrown
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn  -rf :flink-table-api-java



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17908) Vague document about Kafka config in SQL-CLI

2020-05-24 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-17908:
-

 Summary: Vague document about Kafka config in SQL-CLI
 Key: FLINK-17908
 URL: https://issues.apache.org/jira/browse/FLINK-17908
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.11.0
Reporter: Shengkai Fang
 Fix For: 1.11.0


Currently, Flink doesn't set any default config values for Kafka and uses the 
default config from Kafka itself. However, the documentation uses a different 
config value when describing how to use the Kafka connector in the SQL client: 
it uses the value 'earliest-offset' for 'connector.startup-mode', which differs 
from Kafka's default behaviour. I think this vague documentation may mislead 
users, especially newbies. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: java.lang.NoSuchMethodError while writing to Kafka from Flink

2020-05-24 Thread Guowei Ma
Hi
1. You could check whether 'org.apache.flink.api.java.clean' is in
your classpath first.
2. Did you follow the doc[1] to deploy your local cluster and run some
existing examples such as WordCount?

[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/cluster_setup.html
Best,
Guowei


[jira] [Created] (FLINK-17909) Make the GenericInMemoryCatalog to hold the serialized meta data to uncover more potential bugs

2020-05-24 Thread Jark Wu (Jira)
Jark Wu created FLINK-17909:
---

 Summary: Make the GenericInMemoryCatalog to hold the serialized 
meta data to uncover more potential bugs
 Key: FLINK-17909
 URL: https://issues.apache.org/jira/browse/FLINK-17909
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API, Table SQL / Planner
Reporter: Jark Wu
 Fix For: 1.11.0


Currently, the builtin {{GenericInMemoryCatalog}} holds the meta objects in a 
HashMap. However, this leads to many bugs when users switch to a persisted 
catalog, e.g. the Hive Metastore. See for example FLINK-17189, FLINK-17868 and 
FLINK-16021.

That is because the builtin {{GenericInMemoryCatalog}} doesn't cover the 
important path of serializing and deserializing the meta data. We lose some 
important meta information (PK, time attributes) during serialization and 
deserialization, which leads to bugs.

So I propose to hold the serialized meta data in {{GenericInMemoryCatalog}} to 
cover the serialization and deserialization path, as sketched below.

We may lose some performance here, but {{GenericInMemoryCatalog}} is mostly 
used in demos/experiments/testing, so I think that's fine.
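
A self-contained sketch of the idea, where TableMeta stands in for Flink's 
CatalogTable and the serialize()/deserialize() helpers stand in for whatever 
property-map round-trip the catalog uses (none of this is the actual Flink API):

{code}
import java.util.HashMap;
import java.util.Map;

public class SerializedInMemoryCatalogSketch {

    // Stand-in for CatalogTable.
    static class TableMeta {
        final String comment;
        TableMeta(String comment) { this.comment = comment; }
    }

    // Only the serialized (property-map) form is stored, never the live object.
    private final Map<String, Map<String, String>> tables = new HashMap<>();

    public void createTable(String path, TableMeta table) {
        tables.put(path, serialize(table));    // serialize on every write
    }

    public TableMeta getTable(String path) {
        return deserialize(tables.get(path));  // deserialize on every read
    }

    private static Map<String, String> serialize(TableMeta t) {
        Map<String, String> props = new HashMap<>();
        props.put("comment", t.comment);  // fields not serialized here are lost,
        return props;                     // which is exactly the bug class to uncover
    }

    private static TableMeta deserialize(Map<String, String> props) {
        return props == null ? null : new TableMeta(props.get("comment"));
    }
}
{code}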








--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[ANNOUNCE] Apache Flink 1.11.0, release candidate #1

2020-05-24 Thread Zhijiang
Hi all,

Apache Flink-1.11.0-RC1 has been created. It has all the artifacts that we 
would typically have for a release.

This preview-only RC is created only to drive the current testing efforts, and 
no official vote will take place. It includes the following:

   * The preview source release and binary convenience releases [1], which are 
signed with the key with fingerprint 2DA85B93244FDFA19A6244500653C0A2CEA00D0E 
[2],
   * All artifacts that would normally be deployed to the Maven Central 
Repository [3]

To test with these artifacts, you can create a settings.xml file with the 
content shown below [4]. This settings file can be referenced in your maven 
commands
via --settings /path/to/settings.xml. This is useful for creating a quickstart 
project based on the staged release and also for building against the staged 
jars.

Happy testing!

Best,
Zhijiang

[1] https://dist.apache.org/repos/dist/dev/flink/flink-1.11.0-rc1/
[2] https://dist.apache.org/repos/dist/release/flink/KEYS
[3] https://repository.apache.org/content/repositories/orgapacheflink-1370/
[4]

<settings>
  <activeProfiles>
    <activeProfile>flink-1.11.0</activeProfile>
  </activeProfiles>
  <profiles>
    <profile>
      <id>flink-1.11.0</id>
      <repositories>
        <repository>
          <id>flink-1.11.0</id>
          <url>https://repository.apache.org/content/repositories/orgapacheflink-1370/</url>
        </repository>
        <repository>
          <id>archetype</id>
          <url>https://repository.apache.org/content/repositories/orgapacheflink-1370/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
</settings>




[jira] [Created] (FLINK-17911) K8s e2e: error: timed out waiting for the condition on deployments/flink-native-k8s-session-1

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17911:
--

 Summary: K8s e2e: error: timed out waiting for the condition on 
deployments/flink-native-k8s-session-1
 Key: FLINK-17911
 URL: https://issues.apache.org/jira/browse/FLINK-17911
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8

{code}
error: timed out waiting for the condition on 
deployments/flink-native-k8s-session-1
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17912) KafkaShuffleITCase.testAssignedToPartitionEventTime: "Watermark should always increase"

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17912:
--

 Summary: KafkaShuffleITCase.testAssignedToPartitionEventTime: 
"Watermark should always increase"
 Key: FLINK-17912
 URL: https://issues.apache.org/jira/browse/FLINK-17912
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=1fc6e7bf-633c-5081-c32a-9dea24b05730&t=0d9ad4c1-5629-5ffc-10dc-113ca91e23c5

{code}
2020-05-22T21:16:24.7188044Z 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2020-05-22T21:16:24.7188796Zat 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
2020-05-22T21:16:24.7189596Zat 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:677)
2020-05-22T21:16:24.7190352Zat 
org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:81)
2020-05-22T21:16:24.7191261Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1673)
2020-05-22T21:16:24.7191824Zat 
org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
2020-05-22T21:16:24.7192325Zat 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartition(KafkaShuffleITCase.java:296)
2020-05-22T21:16:24.7192962Zat 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartitionEventTime(KafkaShuffleITCase.java:126)
2020-05-22T21:16:24.7193436Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-22T21:16:24.7193999Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-22T21:16:24.7194720Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-22T21:16:24.7195226Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-22T21:16:24.7195864Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-22T21:16:24.7196574Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-22T21:16:24.7197511Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-22T21:16:24.7198020Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-22T21:16:24.7198494Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-05-22T21:16:24.7199128Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-05-22T21:16:24.7199689Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-05-22T21:16:24.7200308Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2020-05-22T21:16:24.7200645Zat java.lang.Thread.run(Thread.java:748)
2020-05-22T21:16:24.7201029Z Caused by: org.apache.flink.runtime.JobException: 
Recovery is suppressed by NoRestartBackoffTimeStrategy
2020-05-22T21:16:24.7201643Zat 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
2020-05-22T21:16:24.7202275Zat 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
2020-05-22T21:16:24.7202863Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
2020-05-22T21:16:24.7203525Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
2020-05-22T21:16:24.7204072Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
2020-05-22T21:16:24.7204618Zat 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
2020-05-22T21:16:24.7205255Zat 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
2020-05-22T21:16:24.7205716Zat 
sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
2020-05-22T21:16:24.7206191Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-22T21:16:24.7206585Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-22T21:16:24.7207261Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
2020-05-22T21:16:24.7207736Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
2020-05-22T21:16:24.7208234Zat 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
2020-05-22T21:

Re: [DISCUSS] Semantics of our JIRA fields

2020-05-24 Thread Congxian Qiu
Hi

Currently, when I'm going to create an issue for the project website, I'm not
very sure what "Affects Version/s" should be set to. The problem is that
the current Dockerfile[1] in flink-web is broken because of the EOL of Ubuntu
18.10[2]. The issue does not affect any release of Flink, but it does
affect the process of building the website, so what version should I set?

[1]
https://github.com/apache/flink-web/blob/bc66f0f0f463ab62a22e81df7d7efd301b76a6b4/docker/Dockerfile#L17
[2] https://wiki.ubuntu.com/Releases

Best,
Congxian


Flavio Pompermaier  wrote on Sunday, May 24, 2020 at 1:23 PM:

> In my experience it's quite complicated for a normal reporter to be able to
> fill all the fields correctly (especially for new users).
> Usually you just want to report a problem, remember to add a new feature
> or improve code/documentation, but you can't really give a priority, assign
> the correct label or decide which releases will benefit from the fix/new
> feature. This is something that only core developers could decide (IMHO).
>
> My experience says that it's better if normal users could just open tickets
> with some defaults (or mark the ticket as new) and leave them in such a state
> until an experienced user, one that can assign tickets, has the time to
> read it and immediately reject the ticket or accept it and properly assign
> priorities, fix version, etc.
>
> With respect to resolve/close, I think a good practice could be to mark
> a ticket as resolved automatically once a PR is detected for that
> ticket, while marking it as closed should be done by the committer who merges
> the PR.
>
> Probably this process would slightly increase the work of a committer but
> this would make things a lot cleaner and will bring the benefit of having a
> reliable and semantically sound JIRA state.
>
> Cheers,
> Flavio
>
> On Sun, May 24, 2020, 05:05 Israel Ekpo  wrote:
>
> > +1 for the proposal
> >
> > This will bring some consistency to the process
> >
> > Regarding Closed vs Resolved, should we go back and switch all the
> > Resolved issues to Closed so that it is not confusing to people new to
> > the project in the future who may not have seen this discussion?
> >
> > I would recommend changing it to Closed just to be consistent, since that
> > is the majority and the new process will be using Closed going forward.
> >
> > Those are my thoughts for now
> >
> > On Sat, May 23, 2020 at 7:48 AM Congxian Qiu 
> > wrote:
> >
> > > +1 for the proposal. It's good to have a unified description and write
> > > it down in the wiki, so that every contributor has the same understanding
> > > of all the fields.
> > >
> > > Best,
> > > Congxian
> > >
> > >
> > > Till Rohrmann  wrote on Saturday, May 23, 2020 at 12:04 AM:
> > >
> > > > Thanks for drafting this proposal Robert. +1 for the proposal.
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Fri, May 22, 2020 at 5:39 PM Leonard Xu 
> wrote:
> > > >
> > > > > Thanks for bringing up this topic @Robert, +1 to the proposal.
> > > > >
> > > > > It clarifies the JIRA fields well and should be a rule to follow.
> > > > >
> > > > > Best,
> > > > > Leonard Xu
> > > > > > On May 22, 2020, at 20:24, Aljoscha Krettek  wrote:
> > > > > >
> > > > > > +1 That's also how I think of the semantics of the fields.
> > > > > >
> > > > > > Aljoscha
> > > > > >
> > > > > > On 22.05.20 08:07, Robert Metzger wrote:
> > > > > >> Hi all,
> > > > > >> I have the feeling that the semantics of some of our JIRA fields
> > > > > >> (mostly "affects versions", "fix versions" and resolve / close)
> > > > > >> are not defined in the same way by all the core Flink
> > > > > >> contributors, which leads to cases where I spend quite some time
> > > > > >> on filling the fields correctly (at least what I consider
> > > > > >> correctly), and then others changing them again to match their
> > > > > >> semantics.
> > > > > >> In an effort to increase our efficiency, and since I'm creating
> > > > > >> a lot of (test instability-related) tickets these days, I would
> > > > > >> like to discuss the semantics, come to a conclusion and document
> > > > > >> this in our Wiki.
> > > > > >> *Proposal:*
> > > > > >> *Priority:*
> > > > > >> "Blocker": needs to be resolved before a release (matched based
> > > > > >> on fix versions)
> > > > > >> "Critical": strongly considered before a release
> > > > > >> other priorities have no practical meaning in Flink.
> > > > > >> *Component/s:*
> > > > > >> Primary component relevant for this feature / fix.
> > > > > >> For test-related issues, add the component the test belongs to
> > > > > >> (for example "Connectors / Kafka" for Kafka test failures) + "Test".
> > > > > >> The same applies for documentation tickets. For example, if there's
> > > > > >> something wrong with the DataStream API, add it to the "API /
> > > > > >> DataStream" and "Documentation" components.
> > > > > >> *Affects Version/s:* Only

[jira] [Created] (FLINK-17914) HistoryServerTest.testCleanExpiredJob:158->runArchiveExpirationTest:214 expected:<2> but was:<0>

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17914:
--

 Summary: 
HistoryServerTest.testCleanExpiredJob:158->runArchiveExpirationTest:214 
expected:<2> but was:<0>
 Key: FLINK-17914
 URL: https://issues.apache.org/jira/browse/FLINK-17914
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.12.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2047&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d

{code}
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.697 s 
<<< FAILURE! - in org.apache.flink.runtime.webmonitor.history.HistoryServerTest
[ERROR] testCleanExpiredJob[Flink version less than 1.4: 
false](org.apache.flink.runtime.webmonitor.history.HistoryServerTest)  Time 
elapsed: 0.483 s  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.flink.runtime.webmonitor.history.HistoryServerTest.runArchiveExpirationTest(HistoryServerTest.java:214)
at 
org.apache.flink.runtime.webmonitor.history.HistoryServerTest.testCleanExpiredJob(HistoryServerTest.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17915) TransitiveClosureITCase>JavaProgramTestBase.testJobWithObjectReuse:113 Error while calling the test program: Could not retrieve JobResult

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17915:
--

 Summary: 
TransitiveClosureITCase>JavaProgramTestBase.testJobWithObjectReuse:113 Error 
while calling the test program: Could not retrieve JobResult
 Key: FLINK-17915
 URL: https://issues.apache.org/jira/browse/FLINK-17915
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2096&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=45cc9205-bdb7-5b54-63cd-89fdc0983323

{code}
2020-05-25T03:26:08.8891832Z [INFO] Tests run: 3, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 4.287 s - in 
org.apache.flink.test.example.java.WordCountSimplePOJOITCase
2020-05-25T03:26:09.6452511Z Could not retrieve JobResult.
2020-05-25T03:26:09.6454291Z 
org.apache.flink.runtime.client.JobExecutionException: Could not retrieve 
JobResult.
2020-05-25T03:26:09.6482505Zat 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:673)
2020-05-25T03:26:09.6483671Zat 
org.apache.flink.test.util.TestEnvironment.execute(TestEnvironment.java:115)
2020-05-25T03:26:09.6625490Zat 
org.apache.flink.examples.java.graph.TransitiveClosureNaive.main(TransitiveClosureNaive.java:120)
2020-05-25T03:26:09.6752644Zat 
org.apache.flink.test.example.java.TransitiveClosureITCase.testProgram(TransitiveClosureITCase.java:51)
2020-05-25T03:26:09.6754368Zat 
org.apache.flink.test.util.JavaProgramTestBase.testJobWithObjectReuse(JavaProgramTestBase.java:107)
2020-05-25T03:26:09.6756679Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-25T03:26:09.6757511Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-25T03:26:09.6759607Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-25T03:26:09.6760692Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-25T03:26:09.6761519Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-25T03:26:09.6762382Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-25T03:26:09.6763246Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-25T03:26:09.6778288Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-25T03:26:09.6779479Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-05-25T03:26:09.6780187Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-25T03:26:09.6780851Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-05-25T03:26:09.6781843Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-05-25T03:26:09.6782583Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-05-25T03:26:09.6783485Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-05-25T03:26:09.6784670Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-05-25T03:26:09.6785320Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-05-25T03:26:09.6786034Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-05-25T03:26:09.6786670Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-05-25T03:26:09.6787550Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-25T03:26:09.6788233Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-25T03:26:09.6789050Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-25T03:26:09.6789698Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-05-25T03:26:09.6790701Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-05-25T03:26:09.6791797Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-05-25T03:26:09.6792592Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-05-25T03:26:09.6793535Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-05-25T03:26:09.6794429Zat 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-05-25T03:26:09.6795279Zat 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-05-25T03:26:09.6796109Zat 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-05-25T03:26:09.6797133Zat 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
2020-05-25T03:26:09.6798317Z Caused 

[jira] [Created] (FLINK-17916) Separate KafkaShuffle read/write to different environments

2020-05-24 Thread Yuan Mei (Jira)
Yuan Mei created FLINK-17916:


 Summary: Separate KafkaShuffle read/write to different environments
 Key: FLINK-17916
 URL: https://issues.apache.org/jira/browse/FLINK-17916
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream, Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Yuan Mei
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17917) ResourceInformationReflector#getExternalResources should ignore the external resource with a value of 0

2020-05-24 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-17917:
--

 Summary: ResourceInformationReflector#getExternalResources should 
ignore the external resource with a value of 0
 Key: FLINK-17917
 URL: https://issues.apache.org/jira/browse/FLINK-17917
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.11.0
Reporter: Yangze Guo
 Fix For: 1.11.0


*Background*: In FLINK-17390, we leverage 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}} to handle the 
container matching logic. In FLINK-17407, we introduced external resources in 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}}.
For containers returned by Yarn, we try to get the corresponding worker specs by:
- converting the container to an {{InternalContainerResource}}
- getting the WorkerResourceSpec from the {{containerResourceToWorkerSpecs}} map

A container mismatch can happen in the following scenario:
- Flink does not allocate any external resources, so the {{externalResources}} 
of the {{InternalContainerResource}} is an empty map.
- The returned container contains all the resources (with a value of 0) defined 
in Yarn's {{resource-types.xml}}, so the {{externalResources}} of its 
{{InternalContainerResource}} has one or more entries with a value of 0.
- These two {{InternalContainerResource}} instances do not match.

To solve this problem, we could ignore all external resources with a value of 0 
in "ResourceInformationReflector#getExternalResources", as sketched below.

cc [~trohrmann] Could you assign this to me?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Semantics of our JIRA fields

2020-05-24 Thread Chesnay Schepler
flink-web is independent of any release, so there should be no 
affects/fix version.


On 25/05/2020 07:56, Congxian Qiu wrote:

Hi

Currently, when I'm going to create an issue for the project website, I'm not
very sure what "Affects Version/s" should be set to. The problem is that
the current Dockerfile[1] in flink-web is broken because of the EOL of Ubuntu
18.10[2]. The issue does not affect any release of Flink, but it does
affect the process of building the website, so what version should I set?

[1]
https://github.com/apache/flink-web/blob/bc66f0f0f463ab62a22e81df7d7efd301b76a6b4/docker/Dockerfile#L17
[2] https://wiki.ubuntu.com/Releases

Best,
Congxian


Flavio Pompermaier  wrote on Sunday, May 24, 2020 at 1:23 PM:


In my experience it's quite complicated for a normal reporter to be able to
fill all the fields correctly (especially for new users).
Usually you just want to report a problem, remember to add a new feature
or improve code/documentation, but you can't really give a priority, assign
the correct label or decide which releases will benefit from the fix/new
feature. This is something that only core developers could decide (IMHO).

My experience says that it's better if normal users could just open tickets
with some defaults (or mark the ticket as new) and leave them in such a state
until an experienced user, one that can assign tickets, has the time to
read it and immediately reject the ticket or accept it and properly assign
priorities, fix version, etc.

With respect to resolve/close, I think a good practice could be to mark
a ticket as resolved automatically once a PR is detected for that
ticket, while marking it as closed should be done by the committer who merges
the PR.

Probably this process would slightly increase the work of a committer but
this would make things a lot cleaner and will bring the benefit of having a
reliable and semantically sound JIRA state.

Cheers,
Flavio

On Sun, May 24, 2020, 05:05 Israel Ekpo  wrote:


+1 for the proposal

This will bring some consistency to the process

Regarding Closed vs Resolved, should we go back and switch all the Resolved
issues to Closed so that it is not confusing to people new to the project
in the future who may not have seen this discussion?

I would recommend changing it to Closed just to be consistent, since that is
the majority and the new process will be using Closed going forward.

Those are my thoughts for now

On Sat, May 23, 2020 at 7:48 AM Congxian Qiu 
wrote:


+1 for the proposal. It's good to have a unified description and write it
down in the wiki, so that every contributor has the same understanding of
all the fields.

Best,
Congxian


Till Rohrmann  wrote on Saturday, May 23, 2020 at 12:04 AM:


Thanks for drafting this proposal Robert. +1 for the proposal.

Cheers,
Till

On Fri, May 22, 2020 at 5:39 PM Leonard Xu  wrote:

Thanks for bringing up this topic @Robert, +1 to the proposal.

It clarifies the JIRA fields well and should be a rule to follow.

Best,
Leonard Xu

On May 22, 2020, at 20:24, Aljoscha Krettek  wrote:

+1 That's also how I think of the semantics of the fields.

Aljoscha

On 22.05.20 08:07, Robert Metzger wrote:

Hi all,
I have the feeling that the semantics of some of our JIRA fields (mostly
"affects versions", "fix versions" and resolve / close) are not defined in
the same way by all the core Flink contributors, which leads to cases where
I spend quite some time on filling the fields correctly (at least what I
consider correctly), and then others changing them again to match their
semantics.
In an effort to increase our efficiency, and since I'm creating a lot of
(test instability-related) tickets these days, I would like to discuss the
semantics, come to a conclusion and document this in our Wiki.
*Proposal:*
*Priority:*
"Blocker": needs to be resolved before a release (matched based on fix
versions)
"Critical": strongly considered before a release
other priorities have no practical meaning in Flink.
*Component/s:*
Primary component relevant for this feature / fix.
For test-related issues, add the component the test belongs to (for example
"Connectors / Kafka" for Kafka test failures) + "Test".
The same applies for documentation tickets. For example, if there's
something wrong with the DataStream API, add it to the "API / DataStream"
and "Documentation" components.
*Affects Version/s:* Only for Bug / Task-type tickets: We list all currently
supported and unreleased Flink versions known to be affected by this.
Example: If I see a test failure that happens on "master" and
"release-1.11", I set "affects version" to "1.12.0" and "1.11.0".
*Fix Version/s:*
For closed/resolved tickets, this field lists the released Flink versions
that contain a fix or feature for the first time.
For open tickets, it indicates that a fix / feature should be contained in
the listed versions. Only blocker issues can block a release, all other
tickets which have "fix version/s" set at the time of a release and are u

Please update the Requirements of PyPI

2020-05-24 Thread Ray Chen
Hi PyFlink Developers,


I got some errors from the pyarrow dependency of apache-flink v1.10.1.

It seems that updating pyarrow to a version greater than 0.14 will solve the problem.

The Flink package on PyPI currently depends on pyarrow < 0.14, which causes the error.


Best,
Ray Chen


Re: [DISCUSS] Semantics of our JIRA fields

2020-05-24 Thread Yun Tang
Hi

I like the idea of giving a clear description of the JIRA fields in the Flink 
community when creating tickets.

After reading Robert's explanation, I found my previous understanding of the 
"Affects Version/s" field differs from it and is closer to the general 
understanding in the official JIRA guide[1]: the affects version is the version 
in which a bug or problem was found.

If a bug is introduced and found in Flink 1.8.1 and still exists on the current 
release-1.11 branch, I would fill in the first version and the following major 
versions, e.g. 1.8.1, 1.9.0, 1.10.0 and 1.11.0. Integrating Robert's 
explanation, I think a better choice might be to fill in the version which 
introduced the bug plus all currently supported and unreleased Flink versions 
known to be affected by it.

I think it's okay to give different explanations of the same field in different 
communities. If so, we should provide this notice on the official website, in 
the JIRA template or some other place instead of just on the dev mailing list, 
so it reaches developers/users.

[1] https://www.atlassian.com/agile/tutorials/versions

Best
Yun Tang


From: Chesnay Schepler 
Sent: Monday, May 25, 2020 14:36
To: dev@flink.apache.org ; Congxian Qiu 

Subject: Re: [DISCUSS] Semantics of our JIRA fields

flink-web is independent of any release, so there should be no
affects/fix version.

On 25/05/2020 07:56, Congxian Qiu wrote:
> Hi
>
> Currently, when I'm going to create an issue for the project website, I'm not
> very sure what "Affects Version/s" should be set to. The problem is that
> the current Dockerfile[1] in flink-web is broken because of the EOL of Ubuntu
> 18.10[2]. The issue does not affect any release of Flink, but it does
> affect the process of building the website, so what version should I set?
>
> [1]
> https://github.com/apache/flink-web/blob/bc66f0f0f463ab62a22e81df7d7efd301b76a6b4/docker/Dockerfile#L17
> [2] https://wiki.ubuntu.com/Releases
>
> Best,
> Congxian
>
>
> Flavio Pompermaier  wrote on Sunday, May 24, 2020 at 1:23 PM:
>
>> In my experience it's quite complicated for a normal reporter to be able to
>> fill all the fields correctly (especially for new users).
>> Usually you just want to report a problem, remember to add a new feature
>> or improve code/documentation, but you can't really give a priority, assign
>> the correct label or decide which releases will benefit from the fix/new
>> feature. This is something that only core developers could decide (IMHO).
>>
>> My experience says that it's better if normal users could just open tickets
>> with some defaults (or mark the ticket as new) and leave them in such a state
>> until an experienced user, one that can assign tickets, has the time to
>> read it and immediately reject the ticket or accept it and properly assign
>> priorities, fix version, etc.
>>
>> With respect to resolve/close, I think a good practice could be to mark
>> a ticket as resolved automatically once a PR is detected for that
>> ticket, while marking it as closed should be done by the committer who merges
>> the PR.
>>
>> Probably this process would slightly increase the work of a committer but
>> this would make things a lot cleaner and will bring the benefit of having a
>> reliable and semantically sound JIRA state.
>>
>> Cheers,
>> Flavio
>>
>> On Sun, May 24, 2020, 05:05 Israel Ekpo  wrote:
>>
>>> +1 for the proposal
>>>
>>> This will bring some consistency to the process
>>>
>>> Regarding Closed vs Resolved, should we go back and switch all the
>>> Resolved issues to Closed so that it is not confusing to people new to
>>> the project in the future who may not have seen this discussion?
>>>
>>> I would recommend changing it to Closed just to be consistent, since that
>>> is the majority and the new process will be using Closed going forward.
>>>
>>> Those are my thoughts for now
>>>
>>> On Sat, May 23, 2020 at 7:48 AM Congxian Qiu 
>>> wrote:
>>>
 +1 for the proposal. It's good to have a unified description and write it
 down in the wiki, so that every contributor has the same understanding of
 all the fields.

 Best,
 Congxian


 Till Rohrmann  wrote on Saturday, May 23, 2020 at 12:04 AM:

> Thanks for drafting this proposal Robert. +1 for the proposal.
>
> Cheers,
> Till
>
> On Fri, May 22, 2020 at 5:39 PM Leonard Xu 
>> wrote:
>> Thanks for bringing up this topic @Robert, +1 to the proposal.
>>
>> It clarifies the JIRA fields well and should be a rule to follow.
>>
>> Best,
>> Leonard Xu
>>> On May 22, 2020, at 20:24, Aljoscha Krettek  wrote:
>>>
>>> +1 That's also how I think of the semantics of the fields.
>>>
>>> Aljoscha
>>>
>>> On 22.05.20 08:07, Robert Metzger wrote:
 Hi all,
 I have the feeling that the semantics of some of our JIRA fields
> (mostly
 "affects versions", "fix versions" and resolve / close) are not
> defined
>> in
 the s

Re: [DISCUSS] Semantics of our JIRA fields

2020-05-24 Thread Zhijiang
Thanks for launching this discussion and giving such detailed information, 
Robert! +1 on my side for the proposal.

For "Affects Version", I previously thought it was only for already released 
versions, so it could serve as a reminder that the fix should also be picked 
into the related release branches for future minor versions.
I saw that Jark had somewhat similar concerns about this field in the replies 
below. Either way makes sense to me as long as we state a definite rule in the 
Wiki.

Re Flavio's comments, I agree that the Jira reporter can leave most of the 
fields empty if unsure of them; the respective component maintainer or 
committer can update them accordingly later.
But the state of the Jira should not be marked as "resolved" when a PR is 
detected; that does not fit the resolved semantics, I guess. If possible, the 
Jira could be updated to "in progress" automatically when the respective PR is 
ready, which would save some time when tracking the progress of related issues 
during the release process.

Best,
Zhijiang
--
From:Congxian Qiu 
Send Time: Monday, May 25, 2020, 13:57
To:dev@flink.apache.org 
Subject:Re: [DISCUSS] Semantics of our JIRA fields

Hi

Currently, when I'm going to create an issue for the project website, I'm not
very sure what "Affects Version/s" should be set to. The problem is that
the current Dockerfile[1] in flink-web is broken because of the EOL of Ubuntu
18.10[2]. The issue does not affect any release of Flink, but it does
affect the process of building the website, so what version should I set?

[1]
https://github.com/apache/flink-web/blob/bc66f0f0f463ab62a22e81df7d7efd301b76a6b4/docker/Dockerfile#L17
[2] https://wiki.ubuntu.com/Releases

Best,
Congxian


Flavio Pompermaier  wrote on Sunday, May 24, 2020 at 1:23 PM:

> In my experience it's quite complicated for a normal reporter to be able to
> fill all the fields correctly (especially for new users).
> Usually you just want to report a problem, remember to add a new feature
> or improve code/documentation, but you can't really give a priority, assign
> the correct label or decide which releases will benefit from the fix/new
> feature. This is something that only core developers could decide (IMHO).
>
> My experience says that it's better if normal users could just open tickets
> with some defaults (or mark the ticket as new) and leave them in such a state
> until an experienced user, one that can assign tickets, has the time to
> read it and immediately reject the ticket or accept it and properly assign
> priorities, fix version, etc.
>
> With respect to resolve/close, I think a good practice could be to mark
> a ticket as resolved automatically once a PR is detected for that
> ticket, while marking it as closed should be done by the committer who merges
> the PR.
>
> Probably this process would slightly increase the work of a committer but
> this would make things a lot cleaner and will bring the benefit of having a
> reliable and semantically sound JIRA state.
>
> Cheers,
> Flavio
>
> On Sun, May 24, 2020, 05:05 Israel Ekpo  wrote:
>
> > +1 for the proposal
> >
> > This will bring some consistency to the process
> >
> > Regarding Closed vs Resolved, should we go back and switch all the
> > Resolved issues to Closed so that it is not confusing to people new to
> > the project in the future who may not have seen this discussion?
> >
> > I would recommend changing it to Closed just to be consistent, since that
> > is the majority and the new process will be using Closed going forward.
> >
> > Those are my thoughts for now
> >
> > On Sat, May 23, 2020 at 7:48 AM Congxian Qiu 
> > wrote:
> >
> > > +1 for the proposal. It's good to have a unified description and write
> > > it down in the wiki, so that every contributor has the same understanding
> > > of all the fields.
> > >
> > > Best,
> > > Congxian
> > >
> > >
> > > Till Rohrmann  wrote on Saturday, May 23, 2020 at 12:04 AM:
> > >
> > > > Thanks for drafting this proposal Robert. +1 for the proposal.
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Fri, May 22, 2020 at 5:39 PM Leonard Xu 
> wrote:
> > > >
> > > > > Thanks for bringing up this topic @Robert, +1 to the proposal.
> > > > >
> > > > > It clarifies the JIRA fields well and should be a rule to follow.
> > > > >
> > > > > Best,
> > > > > Leonard Xu
> > > > > > On May 22, 2020, at 20:24, Aljoscha Krettek  wrote:
> > > > > >
> > > > > > +1 That's also how I think of the semantics of the fields.
> > > > > >
> > > > > > Aljoscha
> > > > > >
> > > > > > On 22.05.20 08:07, Robert Metzger wrote:
> > > > > >> Hi all,
> > > > > >> I have the feeling that the semantics of some of our JIRA fields
> > > > > >> (mostly "affects versions", "fix versions" and resolve / close)
> > > > > >> are not defined in the same way by all the core Flink
> > > > > >> contributors, which leads to cases where I spend quite some time
> > > > > >> on filling the fields correctly (at least what I