+1 (binding)

- Verified the signature and checksum
- Installed PyFlink successfully using the source package
- Ran a few PyFlink examples: Python UDF, Pandas UDF, Python DataStream API
with state access, and Python DataStream API with batch execution mode (a
sketch of one such check follows this list)
- Reviewed the website PR
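
For illustration, here is a minimal sketch of the Python UDF check, assuming
PyFlink 1.13 installed from the source package (the column name and input
values below are made up):

    from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
    from pyflink.table.expressions import col
    from pyflink.table.udf import udf

    # A simple Python UDF that adds one to a BIGINT column.
    @udf(result_type=DataTypes.BIGINT())
    def add_one(x):
        return x + 1

    t_env = TableEnvironment.create(
        EnvironmentSettings.new_instance().in_streaming_mode().build())
    # Hypothetical in-memory input; a real check could also read from a connector.
    t = t_env.from_elements([(1,), (2,), (3,)], ['x'])
    # Apply the UDF and print the result to stdout.
    t.select(add_one(col('x'))).execute().print()

The Pandas UDF case only differs in passing func_type='pandas' to the udf
decorator, so that the function operates on pandas.Series batches.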

Regards,
Dian

> On Apr 29, 2021, at 3:11 PM, Jark Wu <imj...@gmail.com> wrote:
> 
> +1 (binding)
> 
> - checked/verified signatures and hashes
> - started the cluster and ran some e2e SQL queries using the SQL Client; the
> results are as expected:
> * read from a Kafka source, window aggregate, look up a MySQL database, write
> into Elasticsearch
> * window aggregate using the legacy window syntax and the new window TVF (see
> the sketch after this list)
> * verified web ui and log output
> - reviewed the release PR
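> 
> For reference, here is a minimal sketch of the two window syntaxes compared
> above (PyFlink is used purely for illustration, since the actual check went
> through the SQL Client, and the table and column names are hypothetical):
> 
>     from pyflink.table import EnvironmentSettings, TableEnvironment
> 
>     t_env = TableEnvironment.create(
>         EnvironmentSettings.new_instance().in_streaming_mode().build())
>     # Hypothetical source table; the real check read from a Kafka source.
>     t_env.execute_sql("""
>         CREATE TABLE orders (
>             amount BIGINT,
>             order_time AS LOCALTIMESTAMP,
>             WATERMARK FOR order_time AS order_time
>         ) WITH ('connector' = 'datagen', 'rows-per-second' = '5')
>     """)
>     # Legacy group-window syntax.
>     legacy = """
>         SELECT TUMBLE_START(order_time, INTERVAL '10' SECOND) AS wstart,
>                SUM(amount) AS total
>         FROM orders
>         GROUP BY TUMBLE(order_time, INTERVAL '10' SECOND)
>     """
>     # New window TVF syntax introduced in 1.13 (FLIP-145).
>     tvf = """
>         SELECT window_start, window_end, SUM(amount) AS total
>         FROM TABLE(
>             TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '10' SECOND))
>         GROUP BY window_start, window_end
>     """
>     # Run one query at a time: print() on an unbounded streaming query keeps
>     # emitting rows until the job is cancelled. Both queries should produce
>     # the same tumbling-window aggregation.
>     t_env.execute_sql(legacy).print()
>     # t_env.execute_sql(tvf).print()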
> 
> I found that the log contains some verbose output when using window
> aggregates, but I think this doesn't block the release; I created
> FLINK-22522 to fix it.
> 
> Best,
> Jark
> 
> 
> On Thu, 29 Apr 2021 at 14:46, Dawid Wysakowicz <dwysakow...@apache.org>
> wrote:
> 
>> Hey Matthias,
>> 
>> I'd like to double-confirm what Guowei said. The dependency is Apache 2
>> licensed and we do not bundle it in our jar (as it is in the runtime
>> scope), so we do not need to mention it in the NOTICE file (btw, the best
>> way to check what is bundled is to look at the output of the Maven Shade
>> plugin). Thanks for checking it!
>> 
>> Best,
>> 
>> Dawid
>> 
>> On 29/04/2021 05:25, Guowei Ma wrote:
>>> Hi, Matthias
>>> 
>>> Thank you very much for your careful inspection.
>>> I checked the flink-python_2.11-1.13.0.jar and we do not bundle
>>> org.conscrypt:conscrypt-openjdk-uber:2.5.1 into it,
>>> so I think we may not need to add it to the NOTICE file. (BTW, the
>>> dependency's scope is runtime.)
>>> 
>>> Best,
>>> Guowei
>>> 
>>> 
>>> On Thu, Apr 29, 2021 at 2:33 AM Matthias Pohl <matth...@ververica.com>
>>> wrote:
>>> 
>>>> Thanks Dawid and Guowei for managing this release.
>>>> 
>>>> - downloaded the sources and binaries and checked the checksums
>>>> - built Flink from the downloaded sources
>>>> - executed example jobs with standalone deployments - I didn't find
>>>> anything suspicious in the logs
>>>> - reviewed release announcement pull request
>>>> 
>>>> - I did a pass over the dependency updates: git diff release-1.12.2
>>>> release-1.13.0-rc2 */*.xml
>>>> There's one thing someone should double-check to confirm it's supposed to
>>>> be like that: we added org.conscrypt:conscrypt-openjdk-uber:2.5.1 as a
>>>> dependency, but I don't see it reflected in the NOTICE file of the
>>>> flink-python module. Or is this added automatically later on?
>>>> 
>>>> +1 (non-binding; please see remark on dependency above)
>>>> 
>>>> Matthias
>>>> 
>>>> On Wed, Apr 28, 2021 at 1:52 PM Stephan Ewen <se...@apache.org> wrote:
>>>> 
>>>>> Glad to hear that outcome. And no worries about the false alarm.
>>>>> Thank you for doing thorough testing; this is very helpful!
>>>>> 
>>>>> On Wed, Apr 28, 2021 at 1:04 PM Caizhi Weng <tsreape...@gmail.com>
>>>> wrote:
>>>>>> After investigating, we found that this issue is caused by the
>>>>>> connector implementation, not by the Flink framework.
>>>>>> 
>>>>>> Sorry for the false alarm.
>>>>>> 
>>>>>> On Wed, Apr 28, 2021 at 3:23 PM, Stephan Ewen <se...@apache.org> wrote:
>>>>>> 
>>>>>>> @Caizhi and @Becket - let me reach out to you to jointly debug this
>>>>>> issue.
>>>>>>> I am wondering if there is some incorrect reporting of failed events?
>>>>>>> 
>>>>>>> On Wed, Apr 28, 2021 at 8:53 AM Caizhi Weng <tsreape...@gmail.com>
>>>>>> wrote:
>>>>>>>> -1
>>>>>>>> 
>>>>>>>> We're testing this version on batch jobs with large (600~1000)
>>>>>>>> parallelisms, and the following exception message appears with high
>>>>>>>> frequency:
>>>>>>>> 
>>>>>>>> 2021-04-27 21:27:26
>>>>>>>> org.apache.flink.util.FlinkException: An OperatorEvent from an
>>>>>>>> OperatorCoordinator to a task was lost. Triggering task failover to
>>>>>>>> ensure consistency. Event: '[NoMoreSplitEvent]', targetTask: <task name>
>>>>>>>> - execution #0
>>>>>>>>     at org.apache.flink.runtime.operators.coordination.SubtaskGatewayImpl.lambda$sendEvent$0(SubtaskGatewayImpl.java:81)
>>>>>>>>     at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>>>>>>>>     at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
>>>>>>>>     at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>>>>>>>>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
>>>>>>>>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
>>>>>>>>     at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
>>>>>>>>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
>>>>>>>>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>>>>>>>>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>>>>>>>>     at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>>>>>>>>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>>>>>>>>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>>>>>>>>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>>>>>>>>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>>>>>>>>     at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>>>>>>>>     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>>>>>>>>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>>>>>>>>     at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>>>>>>>>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>>>>>>>>     at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>>>>>>>>     at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>>>>>>>>     at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>>>>>>     at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>>>>>>     at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>>>>>>     at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>>>>>> 
>>>>>>>> Becket Qin is investigating this issue.
>>>>>>>> 
>> 
>> 
