[jira] [Created] (FLINK-18235) Improve the checkpoint strategy for Python UDF execution

2020-06-10 Thread Dian Fu (Jira)
Dian Fu created FLINK-18235:
---

 Summary: Improve the checkpoint strategy for Python UDF execution
 Key: FLINK-18235
 URL: https://issues.apache.org/jira/browse/FLINK-18235
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Reporter: Dian Fu
 Fix For: 1.12.0


Currently, when a checkpoint is triggered for the Python operator, all the 
buffered data is flushed to the Python worker to be processed. This increases 
the overall checkpoint time when many elements are buffered and the Python UDF 
is slow. We should improve the checkpoint strategy, e.g. by buffering the data 
into state instead of flushing it out. We could also let users configure the 
checkpoint strategy if needed.
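
As a rough illustration of the proposed direction, here is a Python-agnostic 
sketch (class and state names are illustrative, not the actual Python operator 
internals): on checkpoint, persist the buffered input into operator state 
instead of pushing it through the UDF.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

// Illustrative only: spill buffered UDF input to state on checkpoint.
final class BufferingUdfSketch implements CheckpointedFunction {

    private final Queue<String> buffer = new ArrayDeque<>();
    private transient ListState<String> bufferedElements;

    @Override
    public void snapshotState(FunctionSnapshotContext ctx) throws Exception {
        // Instead of draining `buffer` through the (slow) Python worker here,
        // which ties checkpoint duration to UDF latency, persist it as-is.
        bufferedElements.clear();
        for (String element : buffer) {
            bufferedElements.add(element);
        }
    }

    @Override
    public void initializeState(FunctionInitializationContext ctx) throws Exception {
        bufferedElements = ctx.getOperatorStateStore().getListState(
                new ListStateDescriptor<>("buffered-udf-input", String.class));
        for (String element : bufferedElements.get()) {
            buffer.add(element); // replay the unprocessed input after restore
        }
    }
}
{code}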



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18236) flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* verify it not right

2020-06-10 Thread jackylau (Jira)
jackylau created FLINK-18236:


 Summary: flink elasticsearch IT test 
ElasticsearchSinkTestBase.runElasticsearchSink* verify it not right
 Key: FLINK-18236
 URL: https://issues.apache.org/jira/browse/FLINK-18236
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0, 1.12.0


We can see there are different tests:

runElasticsearchSinkTest

runElasticsearchSinkCborTest

runElasticsearchSinkSmileTest

runElasticSearchSinkTest

etc.

All of them use SourceSinkDataTestKit.verifyProducedSinkData(client, index) to 
ensure the correctness of the results, but they all use the same index.

That is to say, if the second test's sink fails to send its data, 
verifyProducedSinkData still passes because it finds the data written by the 
first test.
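
One straightforward fix, sketched under the assumption that each test can 
choose its own index name (the helper below is hypothetical, not existing 
test-base API):

{code:java}
import java.util.UUID;

// Hypothetical helper: give every test run its own index so that a sink
// that silently fails cannot be masked by data left over from another test.
final class UniqueIndex {
    static String forTest(String testName) {
        return (testName + "-" + UUID.randomUUID()).toLowerCase();
    }
}
{code}

Each run*Test would then pass its own index to both the sink and 
verifyProducedSinkData, so the assertion can only be satisfied by data that 
test actually wrote.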

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18237) IllegalArgumentException when reading filesystem partitioned table with stream mode

2020-06-10 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-18237:


 Summary: IllegalArgumentException when reading filesystem 
partitioned table with stream mode
 Key: FLINK-18237
 URL: https://issues.apache.org/jira/browse/FLINK-18237
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Reporter: Jingsong Lee
Assignee: Jingsong Lee
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18238) RemoteChannelThroughputBenchmark deadlocks

2020-06-10 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-18238:
--

 Summary: RemoteChannelThroughputBenchmark deadlocks
 Key: FLINK-18238
 URL: https://issues.apache.org/jira/browse/FLINK-18238
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks, Runtime / Network
Affects Versions: 1.11.0
Reporter: Piotr Nowojski
 Fix For: 1.11.0


In the last couple of days, {{RemoteChannelThroughputBenchmark.remoteRebalance}} 
has deadlocked for the second time:

http://codespeed.dak8s.net:8080/job/flink-master-benchmarks/6019/





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18239) Kubernetes e2e test fails with "Kubernetes 1.18.3 requires conntrack to be installed in root's path"

2020-06-10 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-18239:
--

 Summary: Kubernetes e2e test fails with "Kubernetes 1.18.3 
requires conntrack to be installed in root's path" 
 Key: FLINK-18239
 URL: https://issues.apache.org/jira/browse/FLINK-18239
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes, Tests
Affects Versions: 1.12.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3133&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
{code}
Docker version 19.03.11, build 42e35e61f3
docker-compose version 1.26.0, build d4451659
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
Starting minikube ...
* minikube v1.11.0 on Ubuntu 16.04
* Using the none driver based on user configuration
X Sorry, Kubernetes 1.18.3 requires conntrack to be installed in root's path
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
Command: start_kubernetes_if_not_running failed. Retrying...
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
Starting minikube ...
* minikube v1.11.0 on Ubuntu 16.04
* Using the none driver based on user configuration
X Sorry, Kubernetes 1.18.3 requires conntrack to be installed in root's path
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
Command: start_kubernetes_if_not_running failed. Retrying...
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
Starting minikube ...
* minikube v1.11.0 on Ubuntu 16.04
* Using the none driver based on user configuration
X Sorry, Kubernetes 1.18.3 requires conntrack to be installed in root's path
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
* There is no local cluster named "minikube"
  - To fix this, run: "minikube start"
Command: start_kubernetes_if_not_running failed. Retrying...
Command: start_kubernetes_if_not_running failed 3 times.
Could not start minikube. Aborting...
Debugging failed Kubernetes test:
Currently existing Kubernetes resources
The connection to the server localhost:8080 was refused - did you specify the 
right host or port?
The connection to the server localhost:8080 was refused - did you specify the 
right host or port?
Flink logs:

{code}
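
The error message itself points at the fix: the CI image is missing the 
conntrack package that Kubernetes 1.18 requires. A likely remedy (an 
assumption, not yet verified on this image) is to install it before starting 
minikube:

{code}
# assumption: Ubuntu 16.04 build agent with conntrack available via apt
sudo apt-get update && sudo apt-get install -y conntrack
{code}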



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18240) Correct remainder function usage in documentation or allow % operator

2020-06-10 Thread Jark Wu (Jira)
Jark Wu created FLINK-18240:
---

 Summary: Correct remainder function usage in documentation or 
allow % operator
 Key: FLINK-18240
 URL: https://issues.apache.org/jira/browse/FLINK-18240
 Project: Flink
  Issue Type: Bug
  Components: Documentation, Table SQL / API
Reporter: Jark Wu


In the documentation: 
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/queries.html#scan-projection-and-filter

There is an example:

{code}
SELECT * FROM Orders WHERE a % 2 = 0
{code}

But the % operator is not allowed in Flink:


{code:java}
org.apache.calcite.sql.parser.SqlParseException: Percent remainder '%' is not 
allowed under the current SQL conformance level
{code}

Either we correct the documentation to use the {{MOD}} function, or we allow 
the % operator.
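
For reference, the equivalent query using the {{MOD}} function parses fine today:

{code}
SELECT * FROM Orders WHERE MOD(a, 2) = 0
{code}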

This is reported in user-zh ML: 
http://apache-flink.147419.n8.nabble.com/FLINK-SQL-td3822.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18241) Custom OptionsFactory in user code not working when configured via flink-conf.yaml

2020-06-10 Thread Nico Kruber (Jira)
Nico Kruber created FLINK-18241:
---

 Summary: Custom OptionsFactory in user code not working when 
configured via flink-conf.yaml
 Key: FLINK-18241
 URL: https://issues.apache.org/jira/browse/FLINK-18241
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.10.1, 1.10.0
Reporter: Nico Kruber
 Attachments: DefaultConfigurableOptionsFactoryWithLog.java

It seems like Flink 1.10 broke custom {{OptionsFactory}} definitions via the 
{{state.backend.rocksdb.options-factory}} configuration if the implementation 
resides in the user-code jar file. This makes it particularly hard to debug 
RocksDB issues since we disabled its (ever-growing) LOG file in FLINK-15068.

If you look at the stack trace from the error below, you will notice that 
{{StreamExecutionEnvironment}} is not provided with a user-code classloader and 
will use the classloader of its own class, which is the parent loader that does 
not know about our {{OptionsFactory}}. This exact same code was working with 
Flink 1.9.3.

(I believe putting the custom {{OptionsFactory}} into a separate jar file 
inside Flink's lib folder may be a workaround but that should ideally not be 
needed).
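
For reference, the configuration in question looks roughly like this (the 
options-factory key is the one named above; the package name is illustrative):

{code}
state.backend: rocksdb
state.backend.rocksdb.options-factory: com.example.DefaultConfigurableOptionsFactoryWithLog
{code}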
{code:java}
2020-06-09 16:18:59,409 ERROR 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Could not 
start cluster entrypoint StandaloneJobClusterEntryPoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
initialize the cluster entrypoint StandaloneJobClusterEntryPoint.
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:192)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:525)
 [flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:116)
 [flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
Caused by: org.apache.flink.util.FlinkException: Could not create the 
DispatcherResourceManagerComponent.
at 
org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:261)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:220)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:174)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:173)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
... 2 more
Caused by: org.apache.flink.util.FlinkRuntimeException: Could not retrieve the 
JobGraph.
at 
org.apache.flink.runtime.dispatcher.runner.JobDispatcherLeaderProcessFactoryFactory.createFactory(JobDispatcherLeaderProcessFactoryFactory.java:57)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.dispatcher.runner.DefaultDispatcherRunnerFactory.createDispatcherRunner(DefaultDispatcherRunnerFactory.java:51)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:196)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:220)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:174)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:173)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
... 2 more
Caused by: org.apache.flink.util.FlinkException: Could not create the JobGraph 
from the provided user code jar.
at 
org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.retrieveJobGraph(ClassPathJobGraphRetriever.java:114)
 ~[flink-dist_2.12-1.10.1-stream1.jar:1.10.1-stream1]
at 
org.apache.flink.runtime.dispatcher.runner.JobDispatcherLeaderProcessFactoryFactory.createFactory(JobDispatcherLeaderProcessFactoryFactory.java:55)

[jira] [Created] (FLINK-18242) Custom OptionsFactory in user code not working when configured via code

2020-06-10 Thread Nico Kruber (Jira)
Nico Kruber created FLINK-18242:
---

 Summary: Custom OptionsFactory in user code not working when 
configured via code
 Key: FLINK-18242
 URL: https://issues.apache.org/jira/browse/FLINK-18242
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.10.1, 1.10.0
Reporter: Nico Kruber
 Attachments: DefaultConfigurableOptionsFactoryWithLog.java

When I configure a custom {{OptionsFactory}} for RocksDB like this:
{code:java}
Configuration globalConfig = GlobalConfiguration.loadConfiguration();
String checkpointDataUri = 
globalConfig.getString(CheckpointingOptions.CHECKPOINTS_DIRECTORY);
RocksDBStateBackend stateBackend = new RocksDBStateBackend(checkpointDataUri);
stateBackend.setOptions(new DefaultConfigurableOptionsFactoryWithLog());
env.setStateBackend((StateBackend) stateBackend);{code}
it seems to be loaded
{code:java}
2020-06-10 12:54:20,720 INFO  
org.apache.flink.contrib.streaming.state.RocksDBStateBackend  - Using 
predefined options: DEFAULT.
2020-06-10 12:54:20,721 INFO  
org.apache.flink.contrib.streaming.state.RocksDBStateBackend  - Using 
application-defined options factory: 
DefaultConfigurableOptionsFactoryWithLog{DefaultConfigurableOptionsFactory{configuredOptions={}}}.
 {code}
but it seems like none of the options defined there is actually used. Just as 
an example, my factory sets the info log level to {{INFO_LEVEL}}, but this is 
what you will see in the created RocksDB instance:
{code:java}
> cat /tmp/flink-io-c95e8f48-0daa-4fb9-a9a7-0e4fb42e9135/*/db/OPTIONS*|grep 
> info_log_level
  info_log_level=HEADER_LEVEL
  info_log_level=HEADER_LEVEL{code}
Together with the bug from FLINK-18241, it seems I cannot re-activate the 
RocksDB log that we disabled in FLINK-15068. FLINK-15747 was aiming at changing 
that particular configuration, but the problem seems broader since 
{{setDbLogDir()}} was actually also ignored.
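
For context, here is a minimal sketch of what the attached factory presumably 
looks like (reconstructed from the description above using the Flink 1.10 
OptionsFactory signature; the log directory is illustrative):

{code:java}
import org.apache.flink.contrib.streaming.state.DefaultConfigurableOptionsFactory;
import org.rocksdb.DBOptions;
import org.rocksdb.InfoLogLevel;

public class DefaultConfigurableOptionsFactoryWithLog
        extends DefaultConfigurableOptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        currentOptions = super.createDBOptions(currentOptions);
        // Exactly the settings reported as ignored above:
        currentOptions.setInfoLogLevel(InfoLogLevel.INFO_LEVEL);
        currentOptions.setDbLogDir("/tmp/rocksdb-logs");
        return currentOptions;
    }
}
{code}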



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New Flink Committer: Benchao Li

2020-06-10 Thread Danny Chan
Congrats Benchao!

Best,
Danny Chan
On 2020-06-10 at 11:57 AM +0800, dev@flink.apache.org wrote:
>
> Congrats Benchao!


Re: Interested in applying as a technical writer

2020-06-10 Thread Deepak Vohra
 
Marta,

Thanks for directing me to the project "Improve the Table API & SQL 
Documentation". I would be interested in working on it, as it is more aligned 
with my background in relational databases; I have used almost all commonly 
used relational databases. Moreover, the API is in Java, and I am an Oracle 
certified Java Programmer.

Deepak

On Tuesday, June 9, 2020, 10:33:56 p.m. PDT, Marta Paes Moreira wrote:

Hi, Deepak!

In reply to your email (below): the project proposal [1] focuses on the Table 
API / SQL, so I'm afraid that working on the DataSet API documentation is out 
of scope for our participation in Google Season of Docs.

Is there anything within the scope of the proposal that you'd like to work on 
instead? 

Otherwise, you can open a JIRA [2] ticket and propose these improvements as a 
regular community contribution. Just for reference, the DataSet API is going to 
be deprecated in the future in favour of a unified batch/streaming API [3].

Let me know if you have any questions.

Marta

[1] https://flink.apache.org/news/2020/05/04/season-of-docs.html
[2] https://issues.apache.org/jira/projects/FLINK/summary
[3] https://flink.apache.org/roadmap.html#batch-and-streaming-unification

"Thanks Marta,
One of the lacking features I noticed is that several example programs are 
missing for Java and Scala as in Data Sources section on page Apache Flink 1.10 
Documentation: Flink DataSet API Programming Guide

I could develop the example programs among other additions.

regards,
Deepak"
On Mon, Jun 8, 2020 at 6:56 PM Marta Paes Moreira  wrote:

Hi, Deepak!

At the time, I didn't notice you were potentially not subscribed to the mailing 
list, so you may not have gotten my reply. Just resending it in case you didn't 
get it!

"
date: May 15, 2020, 2:53 PM

Hi, Deepak.

Thanks for the introduction — it's cool to see that you're interested in 
contributing to Flink as part of GSoD!
We're looking forward to receiving your application! Let us know if you have 
any questions, in the meantime.

Marta"

On Fri, May 15, 2020 at 2:53 PM Marta Paes Moreira  wrote:

Hi, Deepak.

Thanks for the introduction — it's cool to see that you're interested in 
contributing to Flink as part of GSoD!
We're looking forward to receiving your application! Let us know if you have 
any questions, in the meantime.

Marta
On Thu, May 14, 2020 at 11:35 PM Deepak Vohra  
wrote:

I am interested in applying as a technical writer to the Apache Flink project 
in Google Season of Docs. In the project exploration phase I would like to 
introduce myself as a potential applicant (when the application opens). I have 
experience using several data processing frameworks and have published dozens 
of articles and a few books on the same. Some books on similar topics :
1. Practical Hadoop Ecosystem: 
https://www.amazon.com/gp/product/B01M0NAHU3/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i5
2. Apache HBase Primer: 
https://www.amazon.com/gp/product/B01MTOSTAB/ref=dbs_a_def_rwt_bibl_vppi_i1
I have also published 5 other books on Docker and Kubernetes; Kubernetes being 
a commonly used deployment platform for Apache Flink.
regards,
Deepak


  

[DISCUSS] Update our EditorConfig file

2020-06-10 Thread Aljoscha Krettek

Hi,

is anyone actually using our .editorconfig file? IntelliJ has a plugin 
for this that is actually quite powerful.


I managed to write a .editorconfig file that I quite like: 
https://github.com/aljoscha/flink/commits/new-editorconfig. For me to 
use that, we would either need to update our Flink file to what I did 
there or remove the "root = true" part from the file to allow me to 
place my custom .editorconfig in the directory above.
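
For context: "root = true" stops EditorConfig from searching parent 
directories, so a top-level file like the following (settings illustrative, 
not a proposal) shadows any .editorconfig placed above the repository:

    root = true

    [*.java]
    indent_style = tab
    charset = utf-8

Dropping "root = true" would instead let a personal .editorconfig one 
directory up fill in anything the repository file does not set.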


It's probably a lost cause to find consensus on what settings we should 
have in that file but it could be helpful if we all used the same 
settings. For what it's worth, this will format code in such a way that 
it pleases our (very lenient) checkstyle rules.


What do you think?

Best,
Aljoscha


Regarding GSOD opportunity at Apache Flink

2020-06-10 Thread Aditya Kumar Hurkat 4-Yr B.Tech. Mining Engg., IIT (BHU) Varanasi
Hello!

I am Aditya Hurkat, a junior undergraduate at IIT-BHU, India.
I have always admired the work of open source communities, and now there is
an opportunity for me to work for such communities through GSoD.
I would like to know what criteria are being used for selecting students
for the GSoD program at Apache Flink.



Best

Aditya

*Aditya Kumar Hurkat*
*Department of Mining Engineering*
*Indian Institute of Technology (Banaras Hindu University)*

*Varanasi, Uttar Pradesh -221005*
*India*



[jira] [Created] (FLINK-18243) Flink SQL (1.10.0) to elasticsearch 5.6

2020-06-10 Thread Jira
颖 created FLINK-18243:
-

 Summary: Flink SQL (1.10.0) to elasticsearch 5.6
 Key: FLINK-18243
 URL: https://issues.apache.org/jira/browse/FLINK-18243
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.10.0
 Environment: flink 1.10.0 elasticsearch 5.6.3
Reporter: 颖


When Flink 1.10.0 consumes from Kafka and writes to Elasticsearch 5.6.3 through 
SQL, the following problems occur:
2020-06-10 17:48:00,526 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.consumer.internals.AbstractCoordinator
 - [Consumer clientId=consumer-3, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Discovered group 
coordinator c2-dsj-hadoop185.bj:9092 (id: 2147483643 rack: null)
2020-06-10 17:48:00,526 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.consumer.internals.AbstractCoordinator
 - [Consumer clientId=consumer-4, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Discovered group 
coordinator c2-dsj-hadoop185.bj:9092 (id: 2147483643 rack: null)
2020-06-10 17:48:35,907 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-3, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 2: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:48:35,947 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-4, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 3: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:49:16,279 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-3, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 2: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:49:55,533 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-3, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 2: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:49:58,308 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-4, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 3: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:50:33,664 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-4, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 3: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:50:34,052 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-3, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 2: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:51:19,134 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-4, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 1: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:51:25,762 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-4, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 3: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:52:00,037 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-3, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 2: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectException.
2020-06-10 17:52:11,706 INFO 
org.apache.flink.kafka.shaded.org.apache.kafka.clients.FetchSessionHandler - 
[Consumer clientId=consumer-4, 
groupId=cop.inke_owt.data_pdl.Flink_SQL_yangxu01_2es] Error sending fetch 
request (sessionId=INVALID, epoch=INITIAL) to node 3: 
org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.DisconnectExcept

Re: Interested in applying as a technical writer

2020-06-10 Thread Marta Paes Moreira
Great! Are you familiar with the Google Season of Docs application process?

I'm sharing the link [1] in case you need more information.
You can submit your ideas until *July 9th*.

Thanks again for your interest, Deepak! Let me know if there is anything
else we can do to help.

Marta

[1]
https://developers.google.com/season-of-docs/docs/tech-writer-application-hints

On Wed, Jun 10, 2020 at 2:25 PM Deepak Vohra  wrote:

>
> Marta,
>
> Thanks for directing me to the project  Improve the Table API & SQL
> Documentation. I would be interested in Improve the Table API & SQL
> Documentation. These are more aligned with my background in relational
> databases  as I have used almost all commonly used relational databases.
> Moreover, the API is in Java, and I am Oracle certified Java Programmer.
>
> Deepak
> On Tuesday, June 9, 2020, 10:33:56 p.m. PDT, Marta Paes Moreira <
> ma...@ververica.com> wrote:
>
>
> Hi, Deepak!
>
> In reply to your email (below): the project proposal [1] focuses on the
> Table API / SQL, so I'm afraid that working on the DataSet API
> documentation is out of scope for our participation in Google Season of
> Docs.
>
> Is there anything within the scope of the proposal that you'd like to work
> on instead?
>
> Otherwise, you can open a JIRA [2] ticket and propose these improvements
> as a regular community contribution. Just for reference, the DataSet API is
> going to be deprecated in the future in favour of a unified batch/streaming
> API [3].
>
> Let me know if you have any questions.
>
> Marta
>
> [1] https://flink.apache.org/news/2020/05/04/season-of-docs.html
> [2] https://issues.apache.org/jira/projects/FLINK/summary
> [3] https://flink.apache.org/roadmap.html#batch-and-streaming-unification
>
> "Thanks Marta,
>
> One of the lacking features I noticed is that several example programs are
> missing for Java and Scala as in Data Sources section on page Apache
> Flink 1.10 Documentation: Flink DataSet API Programming Guide
>
> I could develop the example programs among other additions.
>
> regards,
> Deepak"
>
> On Mon, Jun 8, 2020 at 6:56 PM Marta Paes Moreira 
> wrote:
>
> Hi, Deepak!
>
> At the time, I didn't notice you were potentially not subscribed to the
> mailing list, so you may not have gotten my reply. Just resending it in
> case you didn't get it!
>
> "
> date: May 15, 2020, 2:53 PM
>
> Hi, Deepak.
>
> Thanks for the introduction — it's cool to see that you're interested in
> contributing to Flink as part of GSoD!
>
> We're looking forward to receiving your application! Let us know if you
> have any questions, in the meantime.
>
> Marta
> "
>
> On Fri, May 15, 2020 at 2:53 PM Marta Paes Moreira 
> wrote:
>
> Hi, Deepak.
>
> Thanks for the introduction — it's cool to see that you're interested in
> contributing to Flink as part of GSoD!
>
> We're looking forward to receiving your application! Let us know if you
> have any questions, in the meantime.
>
> Marta
>
> On Thu, May 14, 2020 at 11:35 PM Deepak Vohra 
> wrote:
>
> I am interested in applying as a technical writer to the Apache Flink
> project in Google Season of Docs. In the project exploration phase I would
> like to introduce myself as a potential applicant (when the application
> opens). I have experience using several data processing frameworks and have
> published dozens of articles and a few books on the same. Some books on
> similar topics :
> 1. Practical Hadoop Ecosystem:
> https://www.amazon.com/gp/product/B01M0NAHU3/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i5
> 2. Apache HBase Primer:
> https://www.amazon.com/gp/product/B01MTOSTAB/ref=dbs_a_def_rwt_bibl_vppi_i1
> I have also published 5 other books on Docker and Kubernetes; Kubernetes
> being a commonly used deployment platform for Apache Flink.
> regards,
> Deepak
>
>


Re: Interested in applying as a technical writer

2020-06-10 Thread Deepak Vohra
 
Marta,

I am, unless it has changed, as I completed a project in 2019 as well. 
Nevertheless, thanks for the link.

regards,
Deepak

On Wednesday, June 10, 2020, 05:50:52 a.m. PDT, Marta Paes Moreira wrote:

Great! Are you familiar with the Google Season of Docs application process?

I'm sharing the link [1] in case you need more information. You can submit 
your ideas until July 9th.

Thanks again for your interest, Deepak! Let me know if there is anything else 
we can do to help.

Marta

[1] 
https://developers.google.com/season-of-docs/docs/tech-writer-application-hints
On Wed, Jun 10, 2020 at 2:25 PM Deepak Vohra  wrote:

 
Marta,

Thanks for directing me to the project "Improve the Table API & SQL 
Documentation". I would be interested in working on it, as it is more aligned 
with my background in relational databases; I have used almost all commonly 
used relational databases. Moreover, the API is in Java, and I am an Oracle 
certified Java Programmer.

Deepak

On Tuesday, June 9, 2020, 10:33:56 p.m. PDT, Marta Paes Moreira wrote:

Hi, Deepak!

In reply to your email (below): the project proposal [1] focuses on the Table 
API / SQL, so I'm afraid that working on the DataSet API documentation is out 
of scope for our participation in Google Season of Docs.

Is there anything within the scope of the proposal that you'd like to work on 
instead? 

Otherwise, you can open a JIRA [2] ticket and propose these improvements as a 
regular community contribution. Just for reference, the DataSet API is going to 
be deprecated in the future in favour of a unified batch/streaming API [3].

Let me know if you have any questions.

Marta

[1] https://flink.apache.org/news/2020/05/04/season-of-docs.html
[2] https://issues.apache.org/jira/projects/FLINK/summary
[3] https://flink.apache.org/roadmap.html#batch-and-streaming-unification

"Thanks Marta,
One of the lacking features I noticed is that several example programs are 
missing for Java and Scala as in Data Sources section on page Apache Flink 1.10 
Documentation: Flink DataSet API Programming Guide


I could develop the example programs among other additions.

regards,
Deepak"
On Mon, Jun 8, 2020 at 6:56 PM Marta Paes Moreira  wrote:

Hi, Deepak!

At the time, I didn't notice you were potentially not subscribed to the mailing 
list, so you may not have gotten my reply. Just resending it in case you didn't 
get it!

"
date: May 15, 2020, 2:53 PM

Hi, Deepak.

Thanks for the introduction — it's cool to see that you're interested in 
contributing to Flink as part of GSoD!
We're looking forward to receiving your application! Let us know if you have 
any questions, in the meantime.

Marta"

On Fri, May 15, 2020 at 2:53 PM Marta Paes Moreira  wrote:

Hi, Deepak.

Thanks for the introduction — it's cool to see that you're interested in 
contributing to Flink as part of GSoD!
We're looking forward to receiving your application! Let us know if you have 
any questions, in the meantime.

Marta
On Thu, May 14, 2020 at 11:35 PM Deepak Vohra  
wrote:

I am interested in applying as a technical writer to the Apache Flink project 
in Google Season of Docs. In the project exploration phase I would like to 
introduce myself as a potential applicant (when the application opens). I have 
experience using several data processing frameworks and have published dozens 
of articles and a few books on the same. Some books on similar topics :
1. Practical Hadoop Ecosystem: 
https://www.amazon.com/gp/product/B01M0NAHU3/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i5
2. Apache HBase Primer: 
https://www.amazon.com/gp/product/B01MTOSTAB/ref=dbs_a_def_rwt_bibl_vppi_i1
I have also published 5 other books on Docker and Kubernetes; Kubernetes being 
a commonly used deployment platform for Apache Flink.
regards,
Deepak


  
  



Re: Regarding GSOD opportunity at Apache Flink

2020-06-10 Thread Marta Paes Moreira
Hey, Aditya!

Thanks for reaching out and for being interested in contributing to open
source (and Flink, in particular!) as part of Google Season of Docs (GSoD)!

We will follow GSoD's guidelines [1] to evaluate incoming projects. Does
this clarify your question?

Let me know if there is anything else we can help you with!

[1] https://developers.google.com/season-of-docs/docs/project-selection

On Wed, Jun 10, 2020 at 2:48 PM Aditya Kumar Hurkat 4-Yr B.Tech. Mining
Engg., IIT (BHU) Varanasi 
wrote:

> Hello!
>
> I am Aditya Hurkat, a junior undergraduate at IIT-BHU, India.
> I have always admired the work of open source communities, and now there is
> an opportunity for me to work for such communities through GSoD.
> I would like to know what criteria are being used for selecting students
> for the GSoD program at Apache Flink.
>
>
>
> Best
>
> Aditya
>
> *Aditya Kumar Hurkat*
> *Department of Mining Engineering*
> *Indian Institute of Technology (Banaras Hindu University)*
>
> *Varanasi, Uttar Pradesh -221005*
> *India*
>
>


Re: [Discuss] Migrate walkthroughs to flink-playgrounds for 1.11

2020-06-10 Thread Seth Wiesman
It seems like there is general consensus. I have opened FLINK-18194 and
corresponding PRs against the docs [1] and flink-playgrounds [2].

Seth

[1] https://github.com/apache/flink/pull/12592
[2] https://github.com/apache/flink-playgrounds/pull/13

On Tue, Jun 9, 2020 at 2:50 PM Konstantin Knauf 
wrote:

> Hi Seth,
>
> I see, that's neat. I think it is a good idea moving the archetypes out of
> the main repository into flink-playgrounds.
>
> Best,
>
> Konstantin
>
>
>
> On Tue, Jun 9, 2020 at 9:47 PM Seth Wiesman  wrote:
>
> > Hi Konstantin,
> >
> > I am linking my working branch for what the playground might look like.
> It
> > contains a Java application you can import into your IDE and then fill
> out
> > the same way you do currently. It is still a code walkthrough. Docker
> > allows users to actually run the completed app locally and not just in
> > their IDE.
> >
> > [1] https://github.com/sjwiesman/flink-playgrounds/tree/walkthroughs
> >
> > On Tue, Jun 9, 2020 at 2:39 PM Konstantin Knauf 
> wrote:
> >
> > > Hi Seth,
> > >
> > > good catch! I agree this is important to fix.
> > >
> > > I don't quite understand the resolution you are proposing. The current
> > > walkthrough for the Table API is a *code *walkthrough. How do you plan
> to
> > > replace this by a dockerized playground? Drop the code walkthrough and
> > add
> > > a pure SQL playground?
> > > Best,
> > >
> > > Konstantin
> > >
> > >
> > > On Tue, Jun 9, 2020 at 4:27 PM Dawid Wysakowicz <
> dwysakow...@apache.org>
> > > wrote:
> > >
> > >> I am very much in favour of this effort.
> > >>
> > >> Best,
> > >>
> > >> Dawid
> > >>
> > >> On 09/06/2020 16:06, Seth Wiesman wrote:
> > >> > Hi Everyone
> > >> >
> > >> > I am currently going through the documentation for 1.11 and noticed
> an
> > >> > issue with the walkthroughs.
> > >> >
> > >> > Flink currently contains two walkthroughs for new users, one for
> data
> > >> > stream and one for table[1, 2]. They both have corresponding maven
> > >> > archetypes for users to create template projects with the proper
> > >> > dependencies and follow along[3, 4].
> > >> >
> > >> > The table walkthrough was added several releases ago and now
> actively
> > >> > exposes deprecated and even internal APIs. These are meant to serve
> as
> > >> > users' first exposure to Flink, and this does not make a good impression.
> > >> >
> > >> > To that end, I want to propose dropping the maven archetypes from
> 1.11
> > >> and
> > >> > adding new dockerized playgrounds to go along with the docs in the
> > >> > flink-playgrounds repo.
> > >> >
> > >> > While we are past the feature freeze I think this is important and
> > would
> > >> > not affect release testing or 1.11's stability. I am volunteering to
> > >> ensure
> > >> > the work is completed before the release.
> > >> >
> > >> > Seth
> > >> >
> > >> > [1]
> > >> >
> > >>
> >
> https://ci.apache.org/projects/flink/flink-docs-master/getting-started/walkthroughs/table_api.html
> > >> > [2]
> > >> >
> > >>
> >
> https://github.com/apache/flink/blob/master/flink-walkthroughs/flink-walkthrough-table-java/src/main/resources/archetype-resources/src/main/java/SpendReport.java
> > >> >
> > >>
> > >>
> > >
> > > --
> > >
> > > Konstantin Knauf
> > >
> > > https://twitter.com/snntrable
> > >
> > > https://github.com/knaufk
> > >
> >
>
>
> --
>
> Konstantin Knauf | Head of Product
>
> +49 160 91394525
>
>
> Follow us @VervericaData Ververica 
>
>
> --
>
> Join Flink Forward  - The Apache Flink
> Conference
>
> Stream Processing | Event Driven | Real Time
>
> --
>
> Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
>
> --
> Ververica GmbH
> Registered at Amtsgericht Charlottenburg: HRB 158244 B
> Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
> (Tony) Cheng
>


Re: Re: [ANNOUNCE] New Flink Committer: Benchao Li

2020-06-10 Thread Yun Gao
Congratulations, Benchao!

Best,
 Yun

--
Sender:Danny Chan
Date:2020/06/10 20:01:01
Recipient:
Theme:Re: [ANNOUNCE] New Flink Committer: Benchao Li

Congrats Benchao!

Best,
Danny Chan
On 2020-06-10 at 11:57 AM +0800, dev@flink.apache.org wrote:
>
> Congrats Benchao!



Re: Re: [ANNOUNCE] New Flink Committer: Benchao Li

2020-06-10 Thread godfrey he
Congratulations, Benchao!

Best,
Godfrey

On Thu, Jun 11, 2020 at 8:57 AM, Yun Gao wrote:

> Congratulations, Benchao!
>
> Best,
>  Yun
>
> --
> Sender:Danny Chan
> Date:2020/06/10 20:01:01
> Recipient:
> Theme:Re: [ANNOUNCE] New Flink Committer: Benchao Li
>
> Congrats Benchao!
>
> Best,
> Danny Chan
> On 2020-06-10 at 11:57 AM +0800, dev@flink.apache.org wrote:
> >
> > Congrats Benchao!
>
>


Re: [DISCUSS] Add Japanese translation of the flink.apache.org website

2020-06-10 Thread Congxian Qiu
Sharing some experience with the current translation (and review) process from
my side: for documentation review, we can pull the patch locally and set up
a local server [1][2]; after that, we can view the rendered
documentation at `localhost:4000`.

[1]
https://flink.apache.org/contributing/contribute-documentation.html#update-or-extend-the-documentation
[2]
https://flink.apache.org/contributing/improve-website.html#update-or-extend-the-documentation

Best,
Congxian


On Tue, Jun 9, 2020 at 10:34 PM, Yun Tang wrote:

> Hi
>
> I think supporting the website in another language should be okay. However,
> I'm afraid the Flink community currently lacks the resources
> to maintain technical documentation in other languages.
>
> From my experience of translating documentation to Chinese over the past
> year, I found that
> the Chinese documentation easily becomes out-of-date and contains broken
> links. Some developers
> might only update the English version and forget to update the related Chinese
> version.
>
> Since Jark talked about re-investigating tools for translating, I just
> want to share some painful experiences
> from translating documentation.
> First of all, I think doc review should be treated totally differently from
> code review. GitHub lacks the power
> to make markdown reviews readable; we often need to read a long
> line without wrapping and give advice on
> just some of the characters.
> Secondly, users who want to contribute cannot leverage any powerful
> tooling.
>
> I think Crowdin might be a good choice.
>
>   1.  Many popular projects already use this: node.js [1], gitlab [2],
> Minecraft [3].
>   2.  This tool is free for open-source project [4]
>
> [1] https://crowdin.com/project/nodejs-website
> [2] https://crowdin.com/project/gitlab-ee
> [3] https://crowdin.com/project/minecraft
> [4] https://crowdin.com/page/open-source-project-setup-request
>
> Best
> Yun Tang
>
> 
> From: Marta Paes Moreira 
> Sent: Tuesday, June 9, 2020 22:13
> To: dev 
> Subject: Re: [DISCUSS] Add Japanese translation of the flink.apache.org
> website
>
> I had a second look at the PR and maybe it'd be important to clarify the
> scope.
>
> It seems to only target the website and, if that's the case, then
> synchronization might be less of an issue as the core of the website is
> pretty static. I'm not sure how useful it is to have a translated website
> without any other kind of support (e.g. mailing list, documentation) in the
> same language, but the contributor might be able to shed some light on
> this.
>
> Just wanted to amend my original reply as it was pretty blindsided by the
> previous comments. It'd be worth understanding the end goal of the
> contribution first.
>
> Thanks,
>
> Marta
>
> On Tue, Jun 9, 2020 at 2:22 PM Congxian Qiu 
> wrote:
>
> > Hi
> >
> > Thanks for bringing this up, Robert.
> >
> > I agree we may need to investigate better translation/synchronization
> tools
> > again.
> >
> > I'll share some experience when translating the documentation or
> reviewing
> > the translation.
> > 1. Translating documentation using the current procedure needs a lot of
> > resources to keep the translated version fresh; I'm glad that we have so
> > many contributors who want to translate the documentation to Chinese.
> > 2. The documentation may get out of sync between the English version and the
> > translated version; this requires the reviewer to know Flink and the
> > English version of the doc very well (especially when the documentation has
> > been restructured).
> >
> > Best,
> > Congxian
> >
> >
> On Tue, Jun 9, 2020 at 6:02 PM, Jark Wu wrote:
> >
> > > I agree with Dawid and others' opinions.
> > >
> > > We may not have enough resources to maintain more languages.
> > > Maybe it's time to investigate better translation/synchronization tools
> > > again.
> > >
> > > I want to share some background about the current translation process.
> In
> > > the initial proposal of Chinese translation FLIP-35 [1],
> > > we have considered Docusaurus/Crowdin as the localization tool, but it
> > > seems that Crowdin doesn't fit well with Jekyll (Liquid codes).
> > > But it's been a year and a half, maybe it's time to re-investigate them
> > or
> > > other tools.
> > >
> > > Here is the list of how other ASF projects are dealing translation of
> > what
> > > I know:
> > > - Apache Pulsar uses Crowdin:
> > https://github.com/apache/pulsar-translation
> > > - Apache Kylin: the similar way of Flink:
> > > https://github.com/apache/kylin/tree/document/website
> > > - Apache RocketMQ: a separate repository, synchronize manually and
> > > periodically: https://github.com/apache/rocketmq/tree/master/docs/cn
> > >
> > > Here is the list of localization tool of what I know:
> > > - Docusaurus: https://docusaurus.io/
> > > - Crowdin: https://crowdin.com/
> > > - GitLocalize: https://gitlocalize.com/
> > >
> > > Best,
> > > Jark
> > >
> > > On Tue, 9 Jun 2020 at 16:24, Marta Paes Moreira 
> > > wrote:
> > >
> > > > Thanks for 

Re: [DISCUSS] Update our EditorConfig file

2020-06-10 Thread Jingsong Li
+1 looks more friendly to Flink newbies.

Best,
Jingsong Lee

On Wed, Jun 10, 2020 at 8:38 PM Aljoscha Krettek 
wrote:

> Hi,
>
> is anyone actually using our .editorconfig file? IntelliJ has a plugin
> for this that is actually quite powerful.
>
> I managed to write a .editorconfig file that I quite like:
> https://github.com/aljoscha/flink/commits/new-editorconfig. For me to
> use that, we would either need to update our Flink file to what I did
> there or remove the "root = true" part from the file to allow me to
> place my custom .editorconfig in the directory above.
>
> It's probably a lost cause to find consensus on what settings we should
> have in that file but it could be helpful if we all used the same
> settings. For what it's worth, this will format code in such a way that
> it pleases our (very lenient) checkstyle rules.
>
> What do you think?
>
> Best,
> Aljoscha
>


-- 
Best, Jingsong Lee


Re: [DISCUSS] Update our EditorConfig file

2020-06-10 Thread tison
> is anyone actually using our .editorconfig file?

I think IDEA already takes this file into consideration. So far it works
well for me.

Best,
tison.


On Thu, Jun 11, 2020 at 10:26 AM, Jingsong Li wrote:

> +1 looks more friendly to Flink newbies.
>
> Best,
> Jingsong Lee
>
> On Wed, Jun 10, 2020 at 8:38 PM Aljoscha Krettek 
> wrote:
>
> > Hi,
> >
> > is anyone actually using our .editorconfig file? IntelliJ has a plugin
> > for this that is actually quite powerful.
> >
> > I managed to write a .editorconfig file that I quite like:
> > https://github.com/aljoscha/flink/commits/new-editorconfig. For me to
> > use that, we would either need to update our Flink file to what I did
> > there or remove the "root = true" part from the file to allow me to
> > place my custom .editorconfig in the directory above.
> >
> > It's probably a lost cause to find consensus on what settings we should
> > have in that file but it could be helpful if we all used the same
> > settings. For what it's worth, this will format code in such a way that
> > it pleases our (very lenient) checkstyle rules.
> >
> > What do you think?
> >
> > Best,
> > Aljoscha
> >
>
>
> --
> Best, Jingsong Lee
>


[jira] [Created] (FLINK-18244) Support setup customized system environment before submitting test job

2020-06-10 Thread ShenDa (Jira)
ShenDa created FLINK-18244:
--

 Summary: Support setup customized system environment before 
submitting test job
 Key: FLINK-18244
 URL: https://issues.apache.org/jira/browse/FLINK-18244
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: ShenDa


The new approach to implementing e2e tests suggests that developers use 
FlinkDistribution to submit test jobs. But at present, we can't specify system 
environment variables when invoking submitSqlJob() or submitJob(). As a result, 
some connectors cannot work if the required system environment is not set up; 
for example, the HBase connector needs HADOOP_CLASSPATH.
So I think we can do the work below (see the sketch after this list):
1) Add a new method in AutoClosableProcess and its builder class for setting 
specified environment variables.
2) Add a new interface that is only used to configure the system environment, 
and let the SQLJobSubmission and JobSubmission classes extend this interface.
3) Modify the submitJob() and submitSQLJob() methods in FlinkDistribution to 
set up the system environment before invoking runBlocking() or runNonBlocking().
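
A minimal sketch of step 1, assuming the submission path ultimately wraps a 
java.lang.ProcessBuilder (all names below are illustrative, not the actual 
test classes):

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative builder: lets a test inject environment variables such as
// HADOOP_CLASSPATH before the job-submission process is started.
final class EnvAwareProcessBuilder {

    private final String[] commands;
    private final Map<String, String> extraEnv = new HashMap<>();

    EnvAwareProcessBuilder(String... commands) {
        this.commands = commands;
    }

    EnvAwareProcessBuilder setEnv(String key, String value) {
        extraEnv.put(key, value);
        return this;
    }

    Process runNonBlocking() throws IOException {
        ProcessBuilder pb = new ProcessBuilder(commands);
        pb.environment().putAll(extraEnv); // inherited environment plus extras
        return pb.start();
    }
}
{code}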



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18245) Support to parse -1 for MemorySize and Duration ConfigOption

2020-06-10 Thread Jark Wu (Jira)
Jark Wu created FLINK-18245:
---

 Summary: Support to parse -1 for MemorySize and Duration 
ConfigOption
 Key: FLINK-18245
 URL: https://issues.apache.org/jira/browse/FLINK-18245
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Reporter: Jark Wu


Currently, MemorySize and Duration ConfigOptions don't support parsing {{-1}} 
or {{-1s}}. That means we can't use {{-1}} as a disabled value and have to use 
{{0}}, which may confuse users in some scenarios.
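
The limitation in one line (behavior as described above; {{MemorySize.parse}} 
is the existing parsing entry point):

{code:java}
import org.apache.flink.configuration.MemorySize;

public class NegativeMemorySizeDemo {
    public static void main(String[] args) {
        // Fails today, so -1 cannot serve as a "disabled" sentinel value:
        MemorySize disabled = MemorySize.parse("-1");
        System.out.println(disabled);
    }
}
{code}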

There is some discussion around this topic in 
https://github.com/apache/flink/pull/12536#discussion_r438019632



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Re: [ANNOUNCE] New Flink Committer: Benchao Li

2020-06-10 Thread Rui Li
Congrats!

On Thu, Jun 11, 2020 at 9:41 AM godfrey he  wrote:

> Congratulations, Benchao!
>
> Best,
> Godfrey
>
> On Thu, Jun 11, 2020 at 8:57 AM, Yun Gao wrote:
>
> > Congratulations, Benchao!
> >
> > Best,
> >  Yun
> >
> > --
> > Sender:Danny Chan
> > Date:2020/06/10 20:01:01
> > Recipient:
> > Theme:Re: [ANNOUNCE] New Flink Committer: Benchao Li
> >
> > Congrats Benchao!
> >
> > Best,
> > Danny Chan
> > On 2020-06-10 at 11:57 AM +0800, dev@flink.apache.org wrote:
> > >
> > > Congrats Benchao!
> >
> >
>


-- 
Best regards!
Rui Li


[jira] [Created] (FLINK-18246) PyFlink e2e fails with java version mismatch on JDK11 nightly build

2020-06-10 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-18246:
--

 Summary: PyFlink e2e fails with java version mismatch on JDK11 
nightly build
 Key: FLINK-18246
 URL: https://issues.apache.org/jira/browse/FLINK-18246
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Build System, Tests
Affects Versions: 1.12.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3213&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6&t=679407b1-ea2c-5965-2c8d-146fff88

{code}
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: 
org/apache/flink/client/cli/CliFrontend has been compiled by a more recent 
version of the Java Runtime (class file version 55.0), this version of the Java 
Runtime only recognizes class file versions up to 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
No taskexecutor daemon (pid: 123813) is running anymore on fv-az670.
No standalonesession daemon to stop on host fv-az670.

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18247) Unstable test: TableITCase.testCollectWithClose:122 expected: but was:

2020-06-10 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-18247:
--

 Summary: Unstable test: TableITCase.testCollectWithClose:122 
expected: but was:
 Key: FLINK-18247
 URL: https://issues.apache.org/jira/browse/FLINK-18247
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Tests
Affects Versions: 1.12.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3202&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=a6e0f756-5bb9-5ea8-a468-5f60db442a29

{code}
[ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 14.159 
s <<< FAILURE! - in org.apache.flink.table.api.TableITCase
[ERROR] 
testCollectWithClose[TableEnvironment:isStream=false](org.apache.flink.table.api.TableITCase)
  Time elapsed: 0.567 s  <<< FAILURE!
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.flink.table.api.TableITCase.testCollectWithClose(TableITCase.scala:122)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)