[jira] [Created] (FLINK-18512) [KINESIS][EFO] Introduce RecordPublisher Interface

2020-07-07 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-18512:
-

 Summary: [KINESIS][EFO] Introduce RecordPublisher Interface
 Key: FLINK-18512
 URL: https://issues.apache.org/jira/browse/FLINK-18512
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


*Background*

In order to add support for EFO in the {{FlinkKinesisConsumer}}, we are 
abstracting the record consumption out of the Kinesis {{ShardConsumer}} and 
introducing an interface. 

*Scope*

Introduce the {{RecordPublisher}} interface and refactor the existing polling 
implementation to implement it:
 * Add {{PollingRecordPublisher}}, functionally equivalent to the existing 
implementation
 * Support adaptive throughput via an extension, 
{{AdaptivePollingRecordSubscriber}}
 * Split out the {{ShardMetricReporter}} into separate classes such that each 
component can report its own metrics:
 ** {{ShardConsumer}}
 ** {{PollingRecordConsumer}}
 ** {{FanOutRecordConsumer}} (later)
 * All the existing unit tests will continue to pass and remain functionally 
equivalent (there may be minor compilation tweaks)
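As a sketch of the abstraction this refactor introduces, the shape might look roughly like the following. All names here (`RecordPublisher`, `RunResult`, `FixedBatchPublisher`) are illustrative, not the final connector API:

```java
// Hypothetical sketch of the RecordPublisher abstraction described above.
// The actual interface added to the Flink Kinesis connector may differ.
import java.util.List;
import java.util.function.Consumer;

interface RecordPublisher {
    enum RunResult { COMPLETE, INCOMPLETE }

    // Publish the next batch of records to the consumer and report
    // whether the shard has been fully consumed.
    RunResult run(Consumer<List<String>> batchConsumer);
}

// Trivial polling-style implementation, for illustration only.
class FixedBatchPublisher implements RecordPublisher {
    private final List<String> records;

    FixedBatchPublisher(List<String> records) {
        this.records = records;
    }

    @Override
    public RunResult run(Consumer<List<String>> batchConsumer) {
        batchConsumer.accept(records);
        return RunResult.COMPLETE;
    }
}

public class RecordPublisherSketch {
    public static void main(String[] args) {
        RecordPublisher publisher = new FixedBatchPublisher(List.of("a", "b"));
        StringBuilder seen = new StringBuilder();
        RecordPublisher.RunResult result = publisher.run(batch -> batch.forEach(seen::append));
        System.out.println(result + ":" + seen); // prints COMPLETE:ab
    }
}
```

With such an interface, the {{ShardConsumer}} only drives a publisher and consumes batches; the polling and fan-out variants become interchangeable implementations.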



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18514) Building fails with JDK 14 installed

2020-07-07 Thread Niels Basjes (Jira)
Niels Basjes created FLINK-18514:


 Summary: Building fails with JDK 14 installed
 Key: FLINK-18514
 URL: https://issues.apache.org/jira/browse/FLINK-18514
 Project: Flink
  Issue Type: Improvement
Reporter: Niels Basjes


On a system with JDK 14 installed the build fails with 
```
[INFO] --- gmavenplus-plugin:1.8.1:execute (merge-categories) @ 
flink-end-to-end-tests ---
[INFO] Using plugin classloader, includes GMavenPlus classpath.
java.lang.NoClassDefFoundError: Could not initialize class 
org.codehaus.groovy.vmplugin.v7.Java7
at 
org.codehaus.groovy.vmplugin.VMPluginFactory.(VMPluginFactory.java:43)
at 
org.codehaus.groovy.reflection.GroovyClassValueFactory.(GroovyClassValueFactory.java:35)
at org.codehaus.groovy.reflection.ClassInfo.(ClassInfo.java:107)
at 
org.codehaus.groovy.reflection.ReflectionCache.getCachedClass(ReflectionCache.java:95)
at 
org.codehaus.groovy.reflection.ReflectionCache.(ReflectionCache.java:39)
```

This is a known problem in Groovy that was fixed in version 2.5.10: 
https://issues.apache.org/jira/browse/GROOVY-9211
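If the fixed Groovy version cannot be picked up transitively, one possible workaround is to override the Groovy dependency of the gmavenplus-plugin in the pom. The snippet below is a sketch using the plugin-level dependency-override mechanism that GMavenPlus documents; the version numbers follow the ticket above:

```xml
<plugin>
  <groupId>org.codehaus.gmavenplus</groupId>
  <artifactId>gmavenplus-plugin</artifactId>
  <version>1.8.1</version>
  <dependencies>
    <!-- Force a Groovy version that initializes correctly on newer JDKs -->
    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy</artifactId>
      <version>2.5.10</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>
</plugin>
```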







[jira] [Created] (FLINK-18513) [Kinesis][EFO] Add AWS SDK v2.x dependency and FanOutKinesisProxy

2020-07-07 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-18513:
-

 Summary: [Kinesis][EFO] Add AWS SDK v2.x dependency and 
FanOutKinesisProxy
 Key: FLINK-18513
 URL: https://issues.apache.org/jira/browse/FLINK-18513
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


*Background*

EFO requires the AWS SDK v2.x dependency. Version 1.x cannot be removed from 
the project as other components require it (DynamoDB Streams and the KPL); 
therefore we will use AWS SDK 1.x and 2.x side by side.

*Scope*

This change will introduce the new dependency and Kinesis V2 proxy:
 * Update pom file to include the new dependency
 ** Does it need to be shaded?
 * Add a {{KinesisProxyV2}} with the following methods:
 ** {{subscribeToShard}}
 ** {{registerStreamConsumer}}
 ** {{deregisterStreamConsumer}}
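The proxy's shape might be sketched as follows. This is a self-contained illustration only; the real class would delegate to the AWS SDK v2 async client and use its request/response model types, and the in-memory stand-in exists purely to show the call flow:

```java
// Hypothetical shape of the KinesisProxyV2 facade described above; plain
// Strings keep this sketch self-contained.
import java.util.HashMap;
import java.util.Map;

interface KinesisProxyV2 {
    // Open a SubscribeToShard stream (HTTP/2 push) for the consumer/shard pair.
    void subscribeToShard(String consumerArn, String shardId);

    // Register an EFO consumer against a stream; returns the consumer ARN.
    String registerStreamConsumer(String streamArn, String consumerName);

    // Remove a previously registered EFO consumer.
    void deregisterStreamConsumer(String consumerArn);
}

// In-memory stand-in, useful only to demonstrate the call flow.
class InMemoryKinesisProxyV2 implements KinesisProxyV2 {
    private final Map<String, String> consumers = new HashMap<>();

    @Override
    public void subscribeToShard(String consumerArn, String shardId) {
        if (!consumers.containsValue(consumerArn)) {
            throw new IllegalStateException("Unknown consumer: " + consumerArn);
        }
    }

    @Override
    public String registerStreamConsumer(String streamArn, String consumerName) {
        String arn = streamArn + "/consumer/" + consumerName;
        consumers.put(consumerName, arn);
        return arn;
    }

    @Override
    public void deregisterStreamConsumer(String consumerArn) {
        consumers.values().remove(consumerArn);
    }
}
```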





[jira] [Created] (FLINK-18515) [Kinesis][EFO] FanOutRecordPublisher and full EFO support

2020-07-07 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-18515:
-

 Summary: [Kinesis][EFO] FanOutRecordPublisher and full EFO support
 Key: FLINK-18515
 URL: https://issues.apache.org/jira/browse/FLINK-18515
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


*Background*
In order to utilise an EFO subscription, a consumer must register and acquire a 
ConsumerARN. The ConsumerARN can subsequently be used to request a subscription 
and start receiving data. The user application will determine whether to use 
Polling or Fan Out consumption via the connector properties.

*Scope*
Add EFO support by implementing the following four items:
* Connector Configuration
* Stream Consumer Registration
* Fan Out Record Consumption ({{FanOutRecordPublisher}})
* Stream Consumer De-registration
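For illustration, selecting the publisher from the user application might look like the sketch below. The property keys are assumptions for the sketch; the final key names belong to the connector's configuration constants:

```java
// Sketch of choosing Polling vs Fan Out consumption via connector
// properties. Key names below are illustrative, not the final API.
import java.util.Properties;

public class EfoConfigSketch {
    public static Properties efoProperties() {
        Properties config = new Properties();
        config.setProperty("aws.region", "us-east-1");
        // Choose EFO (fan-out) rather than the default polling publisher:
        config.setProperty("flink.stream.recordpublisher", "EFO");
        // Name under which this application registers its stream consumer:
        config.setProperty("flink.stream.efo.consumername", "my-flink-app");
        return config;
    }

    public static void main(String[] args) {
        System.out.println(efoProperties().getProperty("flink.stream.recordpublisher"));
    }
}
```

The resulting `Properties` would then be passed to the `FlinkKinesisConsumer` constructor as today, keeping the upgrade backwards compatible.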





Re: [ANNOUNCE] New PMC member: Piotr Nowojski

2020-07-07 Thread Marta Paes Moreira
Go Piotr! Congrats!

On Tue, Jul 7, 2020 at 7:15 AM Biao Liu  wrote:

> Congrats Piotr! Well deserved!
>
> Thanks,
> Biao /'bɪ.aʊ/
>
>
>
> On Tue, 7 Jul 2020 at 13:03, Congxian Qiu  wrote:
>
> > Congratulations Piotr!
> >
> > Best,
> > Congxian
> >
> >
> > > Zhijiang wrote on Tue, Jul 7, 2020 at 12:25 PM:
> >
> > > Congratulations Piotr!
> > >
> > > Best,
> > > Zhijiang
> > >
> > >
> > > --
> > > From:Rui Li 
> > > Send Time: Tue, Jul 7, 2020 11:55
> > > To:dev 
> > > Cc:pnowojski 
> > > Subject:Re: [ANNOUNCE] New PMC member: Piotr Nowojski
> > >
> > > Congrats!
> > >
> > > On Tue, Jul 7, 2020 at 11:25 AM Yangze Guo  wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Yangze Guo
> > > >
> > > > On Tue, Jul 7, 2020 at 11:01 AM Jiayi Liao 
> > > > wrote:
> > > > >
> > > > > Congratulations Piotr!
> > > > >
> > > > > Best,
> > > > > Jiayi Liao
> > > > >
> > > > > On Tue, Jul 7, 2020 at 10:54 AM Jark Wu  wrote:
> > > > >
> > > > > > Congratulations Piotr!
> > > > > >
> > > > > > Best,
> > > > > > Jark
> > > > > >
> > > > > > On Tue, 7 Jul 2020 at 10:50, Yuan Mei 
> > > wrote:
> > > > > >
> > > > > > > Congratulations, Piotr!
> > > > > > >
> > > > > > > On Tue, Jul 7, 2020 at 1:07 AM Stephan Ewen 
> > > > wrote:
> > > > > > >
> > > > > > > > Hi all!
> > > > > > > >
> > > > > > > > It is my pleasure to announce that Piotr Nowojski joined the
> > > Flink
> > > > PMC.
> > > > > > > >
> > > > > > > > Many of you may know Piotr from the work he does on the data
> > > > processing
> > > > > > > > runtime and the network stack, from the mailing list, or the
> > > > release
> > > > > > > > manager work.
> > > > > > > >
> > > > > > > > Congrats, Piotr!
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Stephan
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> > >
> > > --
> > > Best regards!
> > > Rui Li
> > >
> > >
> >
>


Re: [ANNOUNCE] New PMC member: Piotr Nowojski

2020-07-07 Thread Fabian Hueske
Congrats Piotr!

Cheers, Fabian

On Tue, Jul 7, 2020 at 10:04 AM, Marta Paes Moreira <
ma...@ververica.com> wrote:

> Go Piotr! Congrats!


[jira] [Created] (FLINK-18516) Improve error message for rank function in streaming mode

2020-07-07 Thread Rui Li (Jira)
Rui Li created FLINK-18516:
--

 Summary: Improve error message for rank function in streaming mode
 Key: FLINK-18516
 URL: https://issues.apache.org/jira/browse/FLINK-18516
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Rui Li


The following query currently fails with NPE:
{code}
create table foo (x int,y string,p as proctime()) with ...;
select x,y,row_number() over (partition by x order by p) from foo;
{code}
which makes it difficult for users to figure out the reason for the failure.





[jira] [Created] (FLINK-18517) kubernetes session test failed with "java.net.SocketException: Broken pipe"

2020-07-07 Thread Dian Fu (Jira)
Dian Fu created FLINK-18517:
---

 Summary: kubernetes session test failed with 
"java.net.SocketException: Broken pipe"
 Key: FLINK-18517
 URL: https://issues.apache.org/jira/browse/FLINK-18517
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes, Tests
Affects Versions: 1.10.1
Reporter: Dian Fu


It failed on release-1.10 branch:
https://travis-ci.org/github/apache/flink/jobs/705554778

Exception message:
{code}
2020-07-07 01:54:17,173 ERROR org.apache.flink.client.cli.CliFrontend
   - Error while running the command.
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: Operation: [get]  for kind: [Service]  with name: 
[flink-native-k8s-session-1-rest]  in namespace: [default]  failed.
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:670)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:901)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:974)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:974)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: 
[get]  for kind: [Service]  with name: [flink-native-k8s-session-1-rest]  in 
namespace: [default]  failed.
at 
io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64)
at 
io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72)
at 
io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:231)
at 
io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:164)
at 
org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.getService(Fabric8FlinkKubeClient.java:299)
at 
org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.getRestService(Fabric8FlinkKubeClient.java:240)
at 
org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.getRestEndpoint(Fabric8FlinkKubeClient.java:205)
at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.lambda$createClusterClientProvider$0(KubernetesClusterDescriptor.java:88)
at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.retrieve(KubernetesClusterDescriptor.java:118)
at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.retrieve(KubernetesClusterDescriptor.java:59)
at 
org.apache.flink.client.deployment.executors.AbstractSessionClusterExecutor.execute(AbstractSessionClusterExecutor.java:63)
at 
org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:962)
at 
org.apache.flink.client.program.ContextEnvironment.executeAsync(ContextEnvironment.java:108)
at 
org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:58)
at 
org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
... 11 more
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:431)
at sun.security.ssl.OutputRecord.write(OutputRecord.java:417)
at 
sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:894)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:865)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
at org.apache.flink.kubernet
{code}

Re: [VOTE] FLIP-128: Enhanced Fan Out for AWS Kinesis Consumers

2020-07-07 Thread Cranmer, Danny
Thank-you to everyone that voted and contributed to the FLIP discussions.

I am closing this voting thread; the FLIP has been accepted with consensus:
 +1 (Binding): 3 (Tzu-Li (Gordon) Tai, Aljoscha Krettek, Thomas Weise)
 +0 (Binding): 0

 +1 (Non-Binding): 1 (Konstantin Knauf)
 +0 (Non-Binding): 0

Thanks,
Danny Cranmer

On 06/07/2020, 20:37, "Thomas Weise"  wrote:




+1


On Mon, Jul 6, 2020 at 12:15 AM Konstantin Knauf  wrote:

> +1
>
> On Thu, Jul 2, 2020 at 8:16 PM Aljoscha Krettek 
> wrote:
>
> > +1
> >
> > Aljoscha
> >
> > On 01.07.20 15:14, Tzu-Li (Gordon) Tai wrote:
> > > +1
> > >
> > > On Wed, Jul 1, 2020, 8:57 PM Cranmer, Danny 
> wrote:
> > >
> > >> Hi all,
> > >>
> > >> I'd like to start a voting thread for FLIP-128 [1], which we've
> reached
> > consensus
> > >> in [2].
> > >>
> > >> This voting will be open for minimum 3 days till 13:00 UTC, July 4th.
> > >>
> > >> Thanks,
> > >> Danny Cranmer
> > >>
> > >> [1]
> > >>
> >
> 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-128%3A+Enhanced+Fan+Out+for+AWS+Kinesis+Consumers
> > >> [2]
> > >>
> >
> 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-128-Enhanced-Fan-Out-for-AWS-Kinesis-Consumers-td42720.html
> > >>
> > >>
> > >>
> > >
> >
> >
>
> --
>
> Konstantin Knauf
>
> https://twitter.com/snntrable
>
> https://github.com/knaufk
>



Re: NullPointer Exception while trying to access or read ReadOnly ctx in processElement method in KeyedBroadCastProcessFunction in Apache Flink

2020-07-07 Thread Dawid Wysakowicz
Could you please share with us the actual code that you are running,
possibly without any debug statements. As I said in my first reply, I
cannot really relate the stack trace to the code you sent us. Moreover,
in the code you sent us I cannot see how it could throw an exception
at the line you annotated with: /*It's hitting nullpointer exception
when printing the size of hashmpa*/. You are accessing a local variable
in the function. It should not throw a NullPointerException (unless there
is something unusual about Hashmap. Again, I don't necessarily know what
that class is. It might be a typo and you meant HashMap, or it is some
class that I don't know.)

Without the actual code that fails, I am afraid we will not be able to
help you much. From the information I have so far, I can only guess that
you might be making a wrong assumption about the order in which
processElement and processBroadcast are called. As Kostas mentioned in
his response, there is no guarantee about the order of the two sides.
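The ordering pitfall above can be demonstrated without Flink at all. The sketch below (plain Java, no Flink API; class and method names are made up to mirror the broadcast pattern) shows why processElement must tolerate broadcast state that has not arrived yet:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for a KeyedBroadcastProcessFunction: rules arrive on
// the broadcast path, data elements on the keyed path, with NO ordering
// guarantee between the two.
class RuleApplier {
    private final Map<String, String> rules = new HashMap<>();

    void processBroadcast(String key, String rule) {
        rules.put(key, rule);
    }

    String processElement(String key) {
        // Defensive: the rule may not have arrived yet. Returning a default
        // (or buffering the element) avoids a NullPointerException here.
        String rule = rules.get(key);
        return rule != null ? rule : "NO_RULE_YET";
    }
}

public class OrderingSketch {
    public static void main(String[] args) {
        RuleApplier fn = new RuleApplier();
        // An element can arrive before its broadcast rule:
        System.out.println(fn.processElement("k1")); // prints NO_RULE_YET
        fn.processBroadcast("k1", "UPPERCASE");
        System.out.println(fn.processElement("k1")); // prints UPPERCASE
    }
}
```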

Best,

Dawid






[jira] [Created] (FLINK-18518) Add Async RequestReply handler for the Python SDK

2020-07-07 Thread Igal Shilman (Jira)
Igal Shilman created FLINK-18518:


 Summary: Add Async RequestReply handler for the Python SDK
 Key: FLINK-18518
 URL: https://issues.apache.org/jira/browse/FLINK-18518
 Project: Flink
  Issue Type: Improvement
  Components: Stateful Functions
Affects Versions: statefun-2.1.0
Reporter: Igal Shilman


I/O-bound stateful functions can benefit from the built-in asyncio support in 
Python, but the RequestReply handler is not asyncio-compatible. See 
[this|https://stackoverflow.com/questions/62640283/flink-stateful-functions-async-calls-with-the-python-sdk]
 question on Stack Overflow.

 

Having an asyncio-compatible handler would open the door to using aiohttp, 
for example:

 
{code:python}
import aiohttp
import asyncio

...

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

@function.bind("example/hello")
async def hello(context, message):
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http://python.org')
        context.pack_and_reply(SomeProtobufMessage(html))


from aiohttp import web

handler = AsyncRequestReplyHandler(functions)

async def handle(request):
    req = await request.read()
    res = await handler(req)
    return web.Response(body=res, content_type="application/octet-stream")

app = web.Application()
app.add_routes([web.post('/statefun', handle)])
if __name__ == '__main__':
    web.run_app(app, port=5000)
{code}
 





Check pointing for simple pipeline

2020-07-07 Thread Prasanna kumar
Hi,

I have a pipeline: Source -> Map (JSON transform) -> Sink.

Both source and sink are Kafka.

What is the best checkpointing mechanism?

Is setting incremental checkpoints a good option? What should I be careful of?

I am running it on AWS EMR.

Will checkpointing slow down the pipeline?

Thanks,
Prasanna.


Re: [ANNOUNCE] New PMC member: Piotr Nowojski

2020-07-07 Thread Guowei Ma
Congratulations!

Best,
Guowei


Fabian Hueske wrote on Tue, Jul 7, 2020 at 4:34 PM:

> Congrats Piotr!
>
> Cheers, Fabian


Re: [DISCUSS] FLIP-128: Enhanced Fan Out for AWS Kinesis Consumers

2020-07-07 Thread Cranmer, Danny
Hello Thomas,

Thank-you for your vote and feedback on the FLIP.

Q: Do you see the polling (non-EFO) mode as a permanent option going forward?
A: I will follow up on this; I have forwarded the question on. Generally 
speaking, AWS does not usually deprecate APIs. KDS (Kinesis Data Streams) will 
therefore likely always support the polling mechanism. The question is whether 
we want to support it within the Flink connector. I will get back to you. 

Q: Perhaps elaborate a bit more on limitations and reasoning for the different 
registration options?
A: I was planning on elaborating in the updated documentation that will be 
published to the Flink website. Would you like me to update FLIP to include 
this information in advance?

Q: That won't cause an issue because a stale registration will be 
overridden/removed by a new job with the same name?
A: Yes, exactly: when the consumer name is already registered, either by the 
user or left over from a previous unclean shutdown, the first 
ListStreamConsumers call will find the consumer in an ACTIVE state and retrieve 
the ConsumerARN, and tasks will subsequently use that to obtain a subscription 
(this new subscription will invalidate any existing ones).
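The reuse behaviour described in this answer can be sketched in a few lines. This is a toy model only; names and the ARN format are invented for the sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of EFO consumer registration: if a consumer with the same name
// is already ACTIVE (e.g. left behind by a forcefully terminated job), its
// ARN is reused rather than treated as an error; the new subscription then
// invalidates any stale ones.
class ConsumerRegistry {
    // consumerName -> consumerArn for consumers currently in ACTIVE state
    private final Map<String, String> activeConsumers = new HashMap<>();

    String registerOrReuse(String streamArn, String consumerName) {
        return activeConsumers.computeIfAbsent(
                consumerName, name -> streamArn + "/consumer/" + name);
    }
}

public class RegistrationSketch {
    public static void main(String[] args) {
        ConsumerRegistry registry = new ConsumerRegistry();
        String first = registry.registerOrReuse("arn:stream/s", "my-app");
        // A restarted job with the same consumer name gets the same ARN back:
        String second = registry.registerOrReuse("arn:stream/s", "my-app");
        System.out.println(first.equals(second)); // prints true
    }
}
```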

Thanks,
Danny

On 06/07/2020, 20:36, "Thomas Weise"  wrote:




Thanks for the excellent proposal!

Big +1 for introducing EFO as an incremental feature while retaining
backward compatibility! This will make it easier for users to adopt.

Thanks for mentioning the reasons why one might not want to use EFO.
Regarding "Streams with a single consumer would not benefit from the
dedicated throughput (they already have the full quota)": Do you see the
polling (non-EFO) mode as a permanent option going forward?

Regarding "Registration/De-registration Configuration":

The limit for "ListStreamConsumers" is 5 TPS per [1], which is even lower
than that for "DescribeStream". That limit could cause significant issues
during large scale job startup and the only solution was to switch to
ListShards. Perhaps elaborate a bit more on limitations and reasoning for
the different registration options?

De-registration may never happen when task managers go into a bad state and
are forcefully terminated. That won't cause an issue because a stale
registration will be overridden/removed by a new job with the same name?

Thanks,
Thomas


[1]
https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html

On Mon, Jun 22, 2020 at 2:42 AM Cranmer, Danny 
wrote:

> Hello everyone,
> This is a discussion thread for the FLIP [1] regarding Enhanced Fan Out
> for AWS Kinesis Consumers.
>
> Enhanced Fan Out (EFO) allows AWS Kinesis Data Stream (KDS) consumers to
> utilise a dedicated read throughput, rather than a shared quota. HTTP/2
> reduces latency and typically gives a 65% performance boost [2]. EFO is 
not
> currently supported by the Flink Kinesis Consumer. Adding EFO support 
would
> allow Flink applications to reap the benefits, widening Flink adoption.
> Existing applications will be able to optionally perform a backwards
> compatible library upgrade and configuration tweak to inherit the
> performance benefits.
> [1]
> 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-128%3A+Enhanced+Fan+Out+for+AWS+Kinesis+Consumers
> [2] https://aws.amazon.com/blogs/aws/kds-enhanced-fanout/
> I look forward to your feedback,
> Thanks,
>



Re: [ANNOUNCE] New PMC member: Piotr Nowojski

2020-07-07 Thread Piotr Nowojski
Thanks :)

On Tue, Jul 7, 2020 at 3:20 PM, Guowei Ma wrote:

> Congratulations!
>
> Best,
> Guowei


[ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Zhijiang
The Apache Flink community is very happy to announce the release of Apache 
Flink 1.11.0, which is the latest major release.

Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements for 
this new major release:
https://flink.apache.org/news/2020/07/06/release-1.11.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364

We would like to thank all contributors of the Apache Flink community who made 
this release possible!

Cheers,
Piotr & Zhijiang

[jira] [Created] (FLINK-18519) Propagate exception to client when execution fails for REST submission

2020-07-07 Thread Kostas Kloudas (Jira)
Kostas Kloudas created FLINK-18519:
--

 Summary: Propagate exception to client when execution fails for 
REST submission
 Key: FLINK-18519
 URL: https://issues.apache.org/jira/browse/FLINK-18519
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.11.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.11.1


Currently, when a user submits an application using the REST API and the 
execution fails, the exception is logged but not sent back to the client. This 
issue aims at propagating the failure reason back to the client so that it is 
easier for users to debug their applications.





[jira] [Created] (FLINK-18520) New Table Function type inference fails

2020-07-07 Thread Benchao Li (Jira)
Benchao Li created FLINK-18520:
--

 Summary: New Table Function type inference fails
 Key: FLINK-18520
 URL: https://issues.apache.org/jira/browse/FLINK-18520
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
Reporter: Benchao Li


For a simple UDTF like 
{code:java}
public class Split extends TableFunction<String> {
    public Split() {}

    public void eval(String str, String ch) {
        if (str == null || str.isEmpty()) {
            return;
        } else {
            String[] ss = str.split(ch);
            for (String s : ss) {
                collect(s);
            }
        }
    }
}
{code}

Registering it using the new function type inference, 
{{tableEnv.createFunction("my_split", Split.class);}}, and using it in a simple 
query fails with the following exception:

{code:java}
Exception in thread "main" org.apache.flink.table.api.ValidationException: SQL 
validation failed. From line 1, column 93 to line 1, column 115: No match found 
for function signature my_split(<CHARACTER>, <CHARACTER>)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:716)
at com.bytedance.demo.SqlTest.main(SqlTest.java:64)
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 93 to line 1, column 115: No match found for function signature 
my_split(<CHARACTER>, <CHARACTER>)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:457)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:839)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5089)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.handleUnresolvedFunction(SqlValidatorImpl.java:1882)
at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:305)
at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5858)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5845)
at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1800)
at 
org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:57)
at 
org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1110)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1084)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3256)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3238)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateJoin(SqlValidatorImpl.java:3303)
at 
org.apache.flink.table.planner.calcite.FlinkCalciteSqlValidator.validateJoin(FlinkCalciteSqlValidator.java:86)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3247)
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3510)
at 
org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
at 
org.apache.calcite.sql.validate.AbstractNamespa

[jira] [Created] (FLINK-18521) Add release script for creating snapshot branches

2020-07-07 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18521:


 Summary: Add release script for creating snapshot branches
 Key: FLINK-18521
 URL: https://issues.apache.org/jira/browse/FLINK-18521
 Project: Flink
  Issue Type: Improvement
  Components: Release System
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler


The creation of release branches (e.g., release-1.11) is still an entirely 
manual process, involving both the creation of the actual git branch and the 
update of the documentation/python/japicmp configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: NullPointer Exception while trying to access or read ReadOnly ctx in processElement method in KeyedBroadCastProcessFunction in Apache Flink

2020-07-07 Thread bujjirahul45 .
Hi,

Thanks for the reply. incomingRule is nothing but incomingPattern (it's a
typo), and yes, I am using it in my program. Here is my scenario: I have a
stream of JSON events that need to be validated against a set of rules, and I
consume data from multiple streams. Say I have Source A and Source B; my
MapState has the source name as key, and the value is a map with the pattern
rule name as key and the pattern condition as value, e.g.
(A, (PatternName, PatternCondition)).

My broadcast state holds the patterns for all sources, but to make it more
efficient I am trying to apply only the patterns of a particular source to
that source's stream. That is why I look up the patterns of a particular
source by source name: ctx.getBroadcastState(ruleDescriptor).get(Key);

However, whenever I run this locally I get a NullPointerException when
accessing the rules in the broadcast state by key, and the error is not
consistent: sometimes the job runs fine, sometimes it throws the
NullPointerException. Running in debug mode, I observed that in a successful
run control first reaches the processBroadcastElement method of the
KeyedBroadcastProcessFunction, while in an unsuccessful run it first reaches
the processElement method and throws the NullPointerException there. The code
above shows how I read the broadcast state in processElement. I am attaching
the complete error log to this mail; please help me solve this issue, as I
have been trying for a long time without success. I do have an open() method
in the KeyedBroadcastProcessFunction where I initialize the map state.
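To illustrate the defensive pattern involved here: there is no ordering guarantee between the broadcast side and the keyed side, so processElement can legitimately run before any rule has been broadcast, and the lookup by source name can return null. A plain-Java sketch (not the Flink API; RuleGuardSketch and its methods are hypothetical names) of guarding and buffering instead of dereferencing:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Plain-Java sketch of the guard a KeyedBroadcastProcessFunction needs:
// broadcast rules may arrive AFTER the first data elements, so a lookup
// by source name can return null and must be handled, not dereferenced.
public class RuleGuardSketch {
    // stands in for ctx.getBroadcastState(ruleDescriptor)
    private final Map<String, Map<String, String>> rulesBySource = new HashMap<>();
    // events seen before their rules arrived; in Flink this would be keyed state
    private final Queue<String> buffered = new ArrayDeque<>();

    /** Mirrors processElement: returns true if the event could be evaluated now. */
    public boolean processElement(String sourceName, String event) {
        Map<String, String> rules = rulesBySource.get(sourceName);
        if (rules == null) {       // rules not broadcast yet: no NPE, just buffer
            buffered.add(event);
            return false;
        }
        return !rules.isEmpty();   // placeholder for the real pattern evaluation
    }

    /** Mirrors processBroadcastElement: installs one rule for one source. */
    public void installRule(String sourceName, String name, String condition) {
        rulesBySource.computeIfAbsent(sourceName, k -> new HashMap<>()).put(name, condition);
    }

    public int bufferedCount() {
        return buffered.size();
    }
}
```

In the actual Flink job the same null check goes inside processElement, with the buffered events kept in keyed state until the rules for that source have arrived.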

If you need any more clarity, please let me know. Below are my system details:

OS: Windows 10
Java Version: 1.8
Flink Version: 1.10.0
IDE : IntelliJ

Below is stack trace of error

> Task :SignalPatternMatchingApp.main()
12:15:13,413 |-INFO in ch.qos.logback.classic.LoggerContext[default] -
Could NOT find resource [logback-test.xml]
12:15:13,414 |-INFO in ch.qos.logback.classic.LoggerContext[default] -
Could NOT find resource [logback.groovy]
12:15:13,414 |-INFO in ch.qos.logback.classic.LoggerContext[default] -
Found resource [logback.xml] at
[file:/C:/Users/Rahul/IdeaProjectsNew/PatternMatchingbuild/resources/main/logback.xml]
12:15:13,416 |-WARN in ch.qos.logback.classic.LoggerContext[default] -
Resource [logback.xml] occurs multiple times on the classpath.
12:15:13,416 |-WARN in ch.qos.logback.classic.LoggerContext[default] -
Resource [logback.xml] occurs at
[file:/C:/Users/Rahul/IdeaProjectsNew/PatternMatchingbuild/resources/main/logback.xml]
12:15:13,416 |-WARN in ch.qos.logback.classic.LoggerContext[default] -
Resource [logback.xml] occurs at
[jar:file:/C:/Users/Rahul/IdeaProjectsNew/PatternMatchinglibs/state-management1.10_2.12-1.0.0-SNAPSHOT.jar!/logback.xml]
12:15:13,481 |-INFO in ch.qos.logback.core.joran.action.ImplicitModelAction
- Assuming default class name
[ch.qos.logback.classic.encoder.PatternLayoutEncoder] for tag [encoder]
12:15:13,545 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@46d56d67 -
Setting level of logger [akka] to WARN
12:15:13,545 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@d8355a8 - Setting
level of logger [akka.actor.ActorSystemImpl] to WARN
12:15:13,545 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@59fa1d9b -
Setting level of logger [org.apache.kafka] to WARN
12:15:13,546 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@28d25987 -
Setting level of logger [org.apache.kafka.clients.producer.ProducerConfig]
to WARN
12:15:13,546 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@4501b7af -
Setting level of logger [org.apache.kafka.clients.consumer.ConsumerConfig]
to WARN
12:15:13,546 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@523884b2 -
Setting level of logger [org.apache.kafka.common.utils.AppInfoParser] to
ERROR
12:15:13,546 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@5b275dab -
Setting level of logger [org.apache.kafka.clients.NetworkClient] to ERROR
12:15:13,546 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@61832929 -
Setting level of logger [org.apache.flink.runtime.jobgraph.JobGraph] to
ERROR
12:15:13,546 |-INFO in
ch.qos.logback.classic.model.processor.LoggerModelHandler@29774679 -
Setting level of logger [com.eventdetection.eventfilter] to INFO
12:15:13,546 |-INFO in
ch.qos.logback.classic.model.processor.RootLoggerModelHandler@3ffc5af1 -
Setting level of ROOT logger to INFO
12:15:13,615 |-INFO in
ch.qos.logback.core.model.processor.DefaultProcessor@5e5792a0 - End of
configuration.
12:15:13,616 |-INFO in
ch.qos.logback.classic.joran.JoranConfigurator@26653222 - Registering
current configuration as safe fallback point

2020-07-07 12:15:13,660 INFO  [TypeExtractor] - class org.json.JSONObject
does not contain a getter for field map

[jira] [Created] (FLINK-18522) ZKCheckpointIDCounterMultiServersTest.testRecoveredAfterConnectionLoss failed with "Address already in use"

2020-07-07 Thread Dian Fu (Jira)
Dian Fu created FLINK-18522:
---

 Summary: 
ZKCheckpointIDCounterMultiServersTest.testRecoveredAfterConnectionLoss failed 
with "Address already in use"
 Key: FLINK-18522
 URL: https://issues.apache.org/jira/browse/FLINK-18522
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing, Tests
Affects Versions: 1.10.1
Reporter: Dian Fu


[https://travis-ci.org/github/apache/flink/jobs/705770513]

{code}
15:09:34.674 [ERROR] 
testRecoveredAfterConnectionLoss(org.apache.flink.runtime.checkpoint.ZKCheckpointIDCounterMultiServersTest)
  Time elapsed: 5.74 s  <<< ERROR!
java.net.BindException: Address already in use
{code}
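A common remedy for this class of flakiness (not necessarily how this ticket will be resolved) is to let the OS pick a free port by binding to port 0 instead of hard-coding one; a minimal sketch:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Minimal sketch: ask the OS for a currently free port instead of a fixed one,
// so concurrent test runs cannot collide on "Address already in use".
public class EphemeralPortSketch {
    public static int pickFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) { // port 0 = OS chooses
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new RuntimeException("no free port available", e);
        }
    }
}
```

Note that a close-then-rebind race remains between probing the port and the test server binding it, so retrying on BindException is still advisable.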





Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Paul Lam
Finally! Thanks for Piotr and Zhijiang being the release managers, and everyone 
that contributed to the release!

Best,
Paul Lam

> On Jul 7, 2020, at 22:06, Zhijiang wrote:
> 
> The Apache Flink community is very happy to announce the release of Apache 
> Flink 1.11.0, which is the latest major release.
> 
> Apache Flink® is an open-source stream processing framework for distributed, 
> high-performing, always-available, and accurate data streaming applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html 
> 
> 
> Please check out the release blog post for an overview of the improvements 
> for this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html 
> 
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>  
> 
> 
> We would like to thank all contributors of the Apache Flink community who 
> made this release possible!
> 
> Cheers,
> Piotr & Zhijiang



Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Dian Fu
Thanks Piotr and Zhijiang for the great work and everyone who contributed to 
this release!

Regards,
Dian

> On Jul 8, 2020, at 10:12 AM, Paul Lam wrote:
> 
> Finally! Thanks for Piotr and Zhijiang being the release managers, and 
> everyone that contributed to the release!
> 
> Best,
> Paul Lam
> 
>> On Jul 7, 2020, at 22:06, Zhijiang wrote:
>> 
>> The Apache Flink community is very happy to announce the release of Apache 
>> Flink 1.11.0, which is the latest major release.
>> 
>> Apache Flink® is an open-source stream processing framework for distributed, 
>> high-performing, always-available, and accurate data streaming applications.
>> 
>> The release is available for download at:
>> https://flink.apache.org/downloads.html 
>> 
>> 
>> Please check out the release blog post for an overview of the improvements 
>> for this new major release:
>> https://flink.apache.org/news/2020/07/06/release-1.11.0.html 
>> 
>> 
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>>  
>> 
>> 
>> We would like to thank all contributors of the Apache Flink community who 
>> made this release possible!
>> 
>> Cheers,
>> Piotr & Zhijiang
> 



Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Jark Wu
Congratulations!
Thanks Zhijiang and Piotr for the great work as release manager, and thanks
everyone who makes the release possible!

Best,
Jark

On Wed, 8 Jul 2020 at 10:12, Paul Lam  wrote:

> Finally! Thanks for Piotr and Zhijiang being the release managers, and
> everyone that contributed to the release!
>
> Best,
> Paul Lam
>
> On Jul 7, 2020, at 22:06, Zhijiang wrote:
>
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.11.0, which is the latest major release.
>
> Apache Flink® is an open-source stream processing framework for distributed,
> high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements for
> this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>
> We would like to thank all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>
>
>


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Yangze Guo
Thanks, Zhijiang and Piotr. Congrats to everyone involved!

Best,
Yangze Guo

On Wed, Jul 8, 2020 at 10:19 AM Jark Wu  wrote:
>
> Congratulations!
> Thanks Zhijiang and Piotr for the great work as release manager, and thanks
> everyone who makes the release possible!
>
> Best,
> Jark
>
> On Wed, 8 Jul 2020 at 10:12, Paul Lam  wrote:
>
> > Finally! Thanks for Piotr and Zhijiang being the release managers, and
> > everyone that contributed to the release!
> >
> > Best,
> > Paul Lam
> >
> > On Jul 7, 2020, at 22:06, Zhijiang wrote:
> >
> > The Apache Flink community is very happy to announce the release of
> > Apache Flink 1.11.0, which is the latest major release.
> >
> > Apache Flink® is an open-source stream processing framework for distributed,
> > high-performing, always-available, and accurate data streaming
> > applications.
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Please check out the release blog post for an overview of the improvements 
> > for
> > this new major release:
> > https://flink.apache.org/news/2020/07/06/release-1.11.0.html
> >
> > The full release notes are available in Jira:
> >
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
> >
> > We would like to thank all contributors of the Apache Flink community who 
> > made
> > this release possible!
> >
> > Cheers,
> > Piotr & Zhijiang
> >
> >
> >


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Jingsong Li
Congratulations!

Thanks Zhijiang and Piotr as release managers, and thanks everyone.

Best,
Jingsong

On Wed, Jul 8, 2020 at 10:51 AM chaojianok  wrote:

> Congratulations!
>
> Very happy to make some contributions to Flink!
>
>
>
>
>
> At 2020-07-07 22:06:05, "Zhijiang"  wrote:
>
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.11.0, which is the latest major release.
>
> Apache Flink® is an open-source stream processing framework for distributed,
> high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements for
> this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>
> We would like to thank all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>
>
>
>
>


-- 
Best, Jingsong Lee


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Leonard Xu
Congratulations!

Thanks Zhijiang and Piotr for the great work, and thanks everyone involved!

Best,
Leonard Xu



[DISCUSS] FLIP-130: Support for Python DataStream API (Stateless Part)

2020-07-07 Thread Shuiqiang Chen
Hi everyone,

As we all know, Flink provides three layered APIs: the ProcessFunctions,
the DataStream API and the SQL & Table API. Each API offers a different
trade-off between conciseness and expressiveness and targets different use
cases[1].

Currently, the SQL & Table API has already been supported in PyFlink. The
API provides relational operations as well as user-defined functions to
provide convenience for users who are familiar with python and relational
programming.

Meanwhile, the DataStream API and ProcessFunctions provide more generic
APIs to implement stream processing applications. The ProcessFunctions
expose time and state which are the fundamental building blocks for any
kind of streaming application.
To cover more use cases, we are planning to cover all these APIs in
PyFlink.

In this discussion (FLIP-130), we propose to support the Python DataStream
API for the stateless part. For more detail, please refer to the FLIP wiki
page here[2]. If you are interested in the stateful part, you can also take a look
at the design doc here[3], which we are going to discuss in a separate FLIP.


Any comments will be highly appreciated!

[1] https://flink.apache.org/flink-applications.html#layered-apis
[2]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866298
[3]
https://docs.google.com/document/d/1H3hz8wuk228cDBhQmQKNw3m1q5gDAMkwTDEwnj3FBI/edit?usp=sharing

Best,
Shuiqiang


Re: Re:[ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Yun Tang
Congratulations to everyone involved, and thanks to Zhijiang and Piotr for their 
work as release managers.

From: chaojianok 
Sent: Wednesday, July 8, 2020 10:51
To: Zhijiang 
Cc: dev ; u...@flink.apache.org ; 
announce 
Subject: Re:[ANNOUNCE] Apache Flink 1.11.0 released


Congratulations!

Very happy to make some contributions to Flink!





At 2020-07-07 22:06:05, "Zhijiang"  wrote:

The Apache Flink community is very happy to announce the release of Apache 
Flink 1.11.0, which is the latest major release.

Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements for 
this new major release:
https://flink.apache.org/news/2020/07/06/release-1.11.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364

We would like to thank all contributors of the Apache Flink community who made 
this release possible!

Cheers,
Piotr & Zhijiang






Re: Check pointing for simple pipeline

2020-07-07 Thread Yun Tang
Hi Prasanna

Using incremental checkpoints is always better than not, as they are faster and 
consume less memory.
However, incremental checkpoints are only supported by the RocksDB state backend.
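For reference, a minimal configuration sketch for enabling this (assuming Flink 1.10+; the checkpoint directory below is a placeholder):

```yaml
# flink-conf.yaml (sketch; the directory is a placeholder)
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: hdfs:///flink/checkpoints
```

The checkpoint interval itself is enabled in code via env.enableCheckpointing(intervalMs).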


Best
Yun Tang

From: Prasanna kumar 
Sent: Tuesday, July 7, 2020 20:43
To: dev@flink.apache.org ; user 
Subject: Check pointing for simple pipeline

Hi ,

I have a pipeline: Source -> Map (JSON transform) -> Sink.

Both source and sink are Kafka.

What is the best checkpointing mechanism?

Is making checkpoints incremental a good option? What should I be careful of?

I am running it on AWS EMR.

Will checkpointing slow down the pipeline?

Thanks,
Prasanna.


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Rui Li
Congratulations! Thanks Zhijiang & Piotr for the hard work.

On Tue, Jul 7, 2020 at 10:06 PM Zhijiang  wrote:

> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.11.0, which is the latest major release.
>
> Apache Flink® is an open-source stream processing framework for distributed,
> high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements for
> this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>
> We would like to thank all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>


-- 
Best regards!
Rui Li


Re: [DISCUSS] FLIP-130: Support for Python DataStream API (Stateless Part)

2020-07-07 Thread Shuiqiang Chen
Sorry, the 3rd link is broken, please refer to this one: Support Python
DataStream API


On Wed, Jul 8, 2020 at 11:13 AM, Shuiqiang Chen wrote:

> Hi everyone,
>
> As we all know, Flink provides three layered APIs: the ProcessFunctions,
> the DataStream API and the SQL & Table API. Each API offers a different
> trade-off between conciseness and expressiveness and targets different use
> cases[1].
>
> Currently, the SQL & Table API has already been supported in PyFlink. The
> API provides relational operations as well as user-defined functions to
> provide convenience for users who are familiar with python and relational
> programming.
>
> Meanwhile, the DataStream API and ProcessFunctions provide more generic
> APIs to implement stream processing applications. The ProcessFunctions
> expose time and state which are the fundamental building blocks for any
> kind of streaming application.
> To cover more use cases, we are planning to cover all these APIs in
> PyFlink.
>
> In this discussion (FLIP-130), we propose to support the Python DataStream
> API for the stateless part. For more detail, please refer to the FLIP wiki
> page here[2]. If you are interested in the stateful part, you can also take a
> look at the design doc here[3], which we are going to discuss in a separate
> FLIP.
>
> Any comments will be highly appreciated!
>
> [1] https://flink.apache.org/flink-applications.html#layered-apis
> [2]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866298
> [3]
> https://docs.google.com/document/d/1H3hz8wuk228cDBhQmQKNw3m1q5gDAMkwTDEwnj3FBI/edit?usp=sharing
>
> Best,
> Shuiqiang
>
>
>
>


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Benchao Li
Congratulations!  Thanks Zhijiang & Piotr for the great work as release
managers.

On Wed, Jul 8, 2020 at 11:38 AM, Rui Li wrote:

> Congratulations! Thanks Zhijiang & Piotr for the hard work.
>
> On Tue, Jul 7, 2020 at 10:06 PM Zhijiang 
> wrote:
>
>> The Apache Flink community is very happy to announce the release of
>> Apache Flink 1.11.0, which is the latest major release.
>>
>> Apache Flink® is an open-source stream processing framework for distributed,
>> high-performing, always-available, and accurate data streaming
>> applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the
>> improvements for this new major release:
>> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>>
>> The full release notes are available in Jira:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>>
>> We would like to thank all contributors of the Apache Flink community who 
>> made
>> this release possible!
>>
>> Cheers,
>> Piotr & Zhijiang
>>
>
>
> --
> Best regards!
> Rui Li
>


-- 

Best,
Benchao Li