[GitHub] [flink] flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] Support precision of TimeType

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] 
Support precision of TimeType
URL: https://github.com/apache/flink/pull/10654#issuecomment-568150600
 
 
   
   ## CI report:
   
   * 210649f07986a5c815bd39bb532a86f06ee412b8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141996849) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3838)
 
   * 817a1efa635ff3201c943fbf579432b4ef5b8c7b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142075142) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3849)
 
   * a942e8e4a3b9f458b4f903280ccef3ccf2364770 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142375460) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3937)
 
   * 383d6c9e55ecc74580f998b20ac60cf7990779d1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143376910) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4161)
 
   * a04621c54d57075fd9f747642b071c1d6a68767b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143499280) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4173)
 
   * cfa0072c7b98c2ffcc1aced9fbc148cc12755bad Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143509092) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4177)
 
   * 3bf4ff084432ced2e7da115aea1eb17cf925096b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13554) ResourceManager should have a timeout on starting new TaskExecutors.

2020-01-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010451#comment-17010451
 ] 

Xintong Song commented on FLINK-13554:
--

We have confirmed that the release-1.10 blocker FLINK-15456 is actually caused 
by the problem described in this ticket.
Since this problem was not introduced in 1.10, I believe it should not be a 
blocker. However, how we fix the problem, and whether it needs to be fixed in 
1.10, still need to be discussed.
I'm setting this ticket to release-1.10 critical for now, to avoid 
overlooking it before a decision is made.

> ResourceManager should have a timeout on starting new TaskExecutors.
> 
>
> Key: FLINK-13554
> URL: https://issues.apache.org/jira/browse/FLINK-13554
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Priority: Critical
> Fix For: 1.10.0
>
>
> Recently, we encountered a case where one TaskExecutor got stuck during 
> launch on Yarn (without failing), so the job could not recover from 
> continuous failovers.
> The TaskExecutor got stuck because of a problem in our environment: it hung 
> somewhere after the ResourceManager had started it, while the ResourceManager 
> was waiting for it to come up and register. Later, when the slot request 
> timed out, the job failed over and requested slots from the ResourceManager 
> again; the ResourceManager still saw a TaskExecutor (the stuck one) as being 
> started and did not request a new container from Yarn. Therefore, the job 
> could not recover from the failure.
> I think that, to avoid such an unrecoverable state, the ResourceManager needs 
> a timeout on starting new TaskExecutors. If starting a TaskExecutor takes 
> too long, it should fail that TaskExecutor and start a new one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-13554) ResourceManager should have a timeout on starting new TaskExecutors.

2020-01-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010451#comment-17010451
 ] 

Xintong Song edited comment on FLINK-13554 at 1/8/20 8:08 AM:
--

We have confirmed that the release-1.10 blocker FLINK-15456 is actually caused 
by the problem described in this ticket.
Since this problem was not introduced in 1.10, I believe it should not be a 
blocker. However, how we fix the problem, and whether it needs to be fixed in 
1.10, still need to be discussed.
I'm setting this ticket to release-1.10 critical for now, to avoid 
overlooking it before a decision is made.
cc [~gjy] [~liyu] [~zhuzh] [~chesnay] [~trohrmann] [~karmagyz]


was (Author: xintongsong):
We have confirmed that the release-1.10 blocker FLINK-15456 is actually caused 
by the problem described in this ticket.
Since this problem was not introduced in 1.10, I believe it should not be a 
blocker. However, how we fix the problem, and whether it needs to be fixed in 
1.10, still need to be discussed.
I'm setting this ticket to release-1.10 critical for now, to avoid 
overlooking it before a decision is made.



[GitHub] [flink] AHeise commented on issue #10779: [FLINK-15327][runtime] No warning of InterruptedException during cancel.

2020-01-08 Thread GitBox
AHeise commented on issue #10779: [FLINK-15327][runtime] No warning of 
InterruptedException during cancel.
URL: https://github.com/apache/flink/pull/10779#issuecomment-571937160
 
 
   @flinkbot run travis




[GitHub] [flink] WeiZhong94 commented on issue #10772: [FLINK-15338][python] Cherry-pick NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem when submitting PyFlink UDF jobs m

2020-01-08 Thread GitBox
WeiZhong94 commented on issue #10772: [FLINK-15338][python] Cherry-pick 
NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem 
when submitting PyFlink UDF jobs multiple times.
URL: https://github.com/apache/flink/pull/10772#issuecomment-571937340
 
 
   @dianfu I have added the comments to the copied files.




[jira] [Commented] (FLINK-15512) Refactor the mechanism of how to construct the cache and write buffer manager shared across RocksDB instances

2020-01-08 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010457#comment-17010457
 ] 

Yu Li commented on FLINK-15512:
---

For more details, please refer to the 
[discussion|https://issues.apache.org/jira/browse/FLINK-15368?focusedCommentId=17007221&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17007221]
 in FLINK-15368.

> Refactor the mechanism of how to construct the cache and write buffer 
> manager shared across RocksDB instances 
> -
>
> Key: FLINK-15512
> URL: https://issues.apache.org/jira/browse/FLINK-15512
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Priority: Critical
> Fix For: 1.10.0
>
>
> FLINK-14484 introduced an {{LRUCache}} shared among RocksDB instances, so 
> that the memory used by RocksDB can be controlled well. However, due to 
> the implementation and some bugs in RocksDB 
> ([issue-6247|https://github.com/facebook/rocksdb/issues/6247]), we cannot 
> limit the memory strictly.
> The way to work around this issue is to account for the buffer that the 
> memtables may overuse (1/2 of the write buffer manager size). As a result, 
> the actual cache size available for users to share is not the same as the 
> managed off-heap memory or the user-configured memory.
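
As a back-of-the-envelope illustration of the sizing described above (a hedged 
sketch, not Flink's actual code; the {{writeBufferRatio}} name and the 512 MiB 
budget are assumptions):

{code:java}
public final class CacheSizingSketch {

    // Memtables may overuse up to half of the write buffer manager size, so
    // pick the cache capacity such that cache + overuse still fits into the
    // managed memory budget: cache * (1 + writeBufferRatio / 2) = budget.
    static long cacheCapacity(long managedMemoryBytes, double writeBufferRatio) {
        return (long) (managedMemoryBytes / (1 + writeBufferRatio / 2));
    }

    public static void main(String[] args) {
        long budget = 512L << 20;       // assume a 512 MiB managed memory budget
        double writeBufferRatio = 0.5;  // assumed share given to the write buffer manager
        long cache = cacheCapacity(budget, writeBufferRatio);
        long writeBufferManager = (long) (cache * writeBufferRatio);
        // The cache users can actually share is smaller than the raw budget.
        System.out.printf("cache=%d, writeBufferManager=%d%n", cache, writeBufferManager);
    }
}
{code}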





[GitHub] [flink] flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem when submitting PyFlink UDF j

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick 
NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem 
when submitting PyFlink UDF jobs multiple times.
URL: https://github.com/apache/flink/pull/10772#issuecomment-571035348
 
 
   
   ## CI report:
   
   * ae56dd2d9b0d281ee691f3a7e7667dd960eda232 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143204260) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4112)
 
   * 9e73df7bdf3ce66c243004ad8fb0b78eacc3776f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143212537) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4118)
 
   * 58ef3c7a98a47f98cf15d69b3c373f8e6928175b Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143519857) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4182)
 
   * 41b9d57a5a85793eb5036e8943605d0c1e089807 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10779: [FLINK-15327][runtime] No warning of InterruptedException during cancel.

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10779: [FLINK-15327][runtime] No warning of 
InterruptedException during cancel.
URL: https://github.com/apache/flink/pull/10779#issuecomment-571250275
 
 
   
   ## CI report:
   
   * b30a4d297496a904582cb24d036cbb4c5b647149 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143281199) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4137)
 
   * e48e4c08207d6e135026bf683c1af8a9d1310a76 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143356039) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4152)
 
   * 19d4be405c97cf220c291475d6804e4183bedf68 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143466456) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4172)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10796: [FLINK-15424][StateBackend] Make AppendintState#add refuse to add null element

2020-01-08 Thread GitBox
flinkbot commented on issue #10796: [FLINK-15424][StateBackend] Make 
AppendintState#add refuse to add null element
URL: https://github.com/apache/flink/pull/10796#issuecomment-571939777
 
 
   
   ## CI report:
   
   * 3c4dece4d584ae396c69247e3bcf978807b9f232 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-13554) ResourceManager should have a timeout on starting new TaskExecutors.

2020-01-08 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010460#comment-17010460
 ] 

Xintong Song commented on FLINK-13554:
--

IMO, a clean solution would be for the RM to monitor a timeout on starting new 
TMs. However, this approach involves introducing config options for the 
timeout, monitoring the timeout asynchronously, and properly un-monitoring on 
TM registration, which may not be suitable to add after the feature freeze.

Also, this does not seem to be a common case. We have not seen any report of 
this bug from users; we ran into this problem (both this ticket and 
FLINK-15456) only when testing the stability of Flink with ChaosMonkey 
intentionally breaking the network connections.

Therefore, I'm in favor of not fixing this problem in release 1.10.0.
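
As a rough illustration of the monitor/un-monitor idea above (a hedged sketch 
only, not Flink's implementation; the helpers {{releaseContainer}} and 
{{requestNewTaskExecutor}} are hypothetical):

{code:java}
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch: track a start deadline per pending TaskExecutor, cancel it when the
// TaskExecutor registers, and replace the instance when the deadline fires.
public final class TaskExecutorStartMonitor {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pendingStarts = new ConcurrentHashMap<>();

    public void onTaskExecutorRequested(String resourceId, Duration startTimeout) {
        ScheduledFuture<?> deadline = scheduler.schedule(() -> {
            pendingStarts.remove(resourceId);
            releaseContainer(resourceId);   // hypothetical helper: give up the stuck container
            requestNewTaskExecutor();       // hypothetical helper: ask for a replacement
        }, startTimeout.toMillis(), TimeUnit.MILLISECONDS);
        pendingStarts.put(resourceId, deadline);
    }

    public void onTaskExecutorRegistered(String resourceId) {
        ScheduledFuture<?> deadline = pendingStarts.remove(resourceId);
        if (deadline != null) {
            deadline.cancel(false);         // un-monitor on successful registration
        }
    }

    private void releaseContainer(String resourceId) { /* placeholder */ }

    private void requestNewTaskExecutor() { /* placeholder */ }
}
{code}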



[GitHub] [flink] flinkbot edited a comment on issue #10795: [FLINK-15510] Pretty Print StreamGraph JSON Plan

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10795: [FLINK-15510] Pretty Print 
StreamGraph JSON Plan
URL: https://github.com/apache/flink/pull/10795#issuecomment-571921686
 
 
   
   ## CI report:
   
   * 989a1cfecbe74568d9fc4508a884eddfb4f990a2 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143517107) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4180)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15511) export org.apache.flink.table.api.TableException when flink 1.10 connect hive

2020-01-08 Thread chenchencc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenchencc updated FLINK-15511:
---
Labels: flink hive  (was: )

> export org.apache.flink.table.api.TableException when flink 1.10 connect hive 
> --
>
> Key: FLINK-15511
> URL: https://issues.apache.org/jira/browse/FLINK-15511
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: flink master
> hive 1.2.1
>  
>Reporter: chenchencc
>Priority: Major
>  Labels: flink, hive
>
> *run scripts:*
> bin/start-scala-shell.sh yarn -qu bi -jm 1024m -tm 2048m
> import org.apache.flink.table.catalog.hive.HiveCatalog
>  val name = "myhive"
>  val defaultDatabase = "test"
>  val hiveConfDir = "/etc/hive/conf"
>  val version = "1.2.1" // or 1.2.1 2.3.4
>  val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
>  stenv.registerCatalog("myhive", hive)
>  stenv.useCatalog("myhive")
>  stenv.listTables
>  stenv.sqlQuery("select * from gsp_test3").toAppendStream[Row].print
>  *gsp_test3 table columns:*
> id int 
> name string
> *gsp_test3  table  storage:*
> txt file
>  
> *scripts run message*
> scala> import org.apache.flink.table.catalog.hive.HiveCatalog
>  import org.apache.flink.table.catalog.hive.HiveCatalog
> scala> val name = "myhive"
>  name: String = myhive
> scala> val defaultDatabase = "test"
>  defaultDatabase: String = test
> scala> val hiveConfDir = "/etc/hive/conf"
>  hiveConfDir: String = /etc/hive/conf
> scala> val version = "1.2.1" // or 1.2.1 2.3.4
>  version: String = 1.2.1
> scala> val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Setting hive conf dir as 
> /etc/hive/conf
>  20/01/08 14:36:10 WARN conf.HiveConf: HiveConf of name 
> hive.server2.enable.impersonation does not exist
>  20/01/08 14:36:10 WARN conf.HiveConf: HiveConf of name 
> hive.mapred.supports.subdirectories does not exist
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Created HiveCatalog 'myhive'
>  hive: org.apache.flink.table.catalog.hive.HiveCatalog = 
> org.apache.flink.table.catalog.hive.HiveCatalog@60729135
> scala> stenv.registerCatalog("myhive", hive)
>  20/01/08 14:36:10 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://bgnode4:9083
>  20/01/08 14:36:10 INFO hive.metastore: Connected to metastore.
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Connected to Hive metastore
> scala> stenv.useCatalog("myhive")
>  20/01/08 14:36:10 INFO catalog.CatalogManager: Set the current default 
> catalog as [myhive] and the current default database as [test].
> scala> stenv.listTables
>  res6: Array[String] = Array(amazonproductscore_test, 
> amazonproductscore_test_tmp, amazonshopmanagerkpi, bucketed_user, 
> bulkload_spark_gross_profit_items_zcm, dim_date_test, 
> dw_gross_profit_items_phoenix_test, dw_gross_profit_items_phoenix_test2, 
> dw_gross_profit_items_phoenix_test3, dw_gross_profit_items_phoenix_test4, 
> dw_gross_profit_items_phoenix_test5, gsp_test12, gsp_test2, gsp_test3, 
> hive_phoenix, ni, orderparent_test, orderparent_test2, 
> phoenix_orderparent_id_put_tb, phoenix_orderparent_id_put_tb2, 
> phoenix_orderparent_id_tb, productdailysales, result20190404, 
> result20190404_2, result20190404_3, result20190404_4_5_9, result20190404_5, 
> result20190404vat, result20190404vat11, result20190404vat12, 
> result20190404vat13, result20190404vat5, result20190404vat6_2, ...
>  scala> stenv.sqlQuery("select * from gsp_test3").toAppendStream[Row].print
>  20/01/08 14:36:13 INFO typeutils.TypeExtractor: class 
> org.apache.flink.types.Row does not contain a getter for field fields
>  20/01/08 14:36:13 INFO typeutils.TypeExtractor: class 
> org.apache.flink.types.Row does not contain a setter for field fields
>  20/01/08 14:36:13 INFO typeutils.TypeExtractor: Class class 
> org.apache.flink.types.Row cannot be used as a POJO type because not all 
> fields are valid POJO fields, and must be processed as GenericType. Please 
> read the Flink documentation on "Data Types & Serialization" for details of 
> the effect on performance.
>  20/01/08 14:36:13 WARN conf.HiveConf: HiveConf of name 
> hive.server2.enable.impersonation does not exist
>  20/01/08 14:36:13 WARN conf.HiveConf: HiveConf of name 
> hive.mapred.supports.subdirectories does not exist
>  20/01/08 14:36:13 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://bgnode3:9083
>  20/01/08 14:36:13 INFO hive.metastore: Connected to metastore.
>  20/01/08 14:36:13 INFO configuration.GlobalConfiguration: Loading 
> configuration property: jobmanager.rpc.address, localhost
>  20/01/08 14:36:13 INFO configuration.GlobalConfiguration: Loading 
> configuration property: jobmanage

[jira] [Commented] (FLINK-15511) export org.apache.flink.table.api.TableException when flink 1.10 connect hive

2020-01-08 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010463#comment-17010463
 ] 

Rui Li commented on FLINK-15511:


Hi [~chenchencc], could you try using the blink planner and see if that solves 
your problem? The Hive connector doesn't support the old planner in 1.10. You 
can create a {{TableEnvironment}} with the blink planner like this:
{code}
EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
TableEnvironment tableEnv = TableEnvironment.create(settings);
{code}


[jira] [Comment Edited] (FLINK-15511) export org.apache.flink.table.api.TableException when flink 1.10 connect hive

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010468#comment-17010468
 ] 

chenchencc edited comment on FLINK-15511 at 1/8/20 8:29 AM:


Hi [~lirui]

I did that, then ran into this problem:

scala> val tableEnv = TableEnvironment.create(settings)
 :72: error: Static methods in interface require -target:jvm-1.8
 val tableEnv = TableEnvironment.create(settings)


was (Author: chenchencc):
Hi Rui Li

I did that, then ran into this problem:

scala> val tableEnv = TableEnvironment.create(settings)
 :72: error: Static methods in interface require -target:jvm-1.8
 val tableEnv = TableEnvironment.create(settings)
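
For reference, one possible workaround for this shell error (an assumption 
based on the Scala 2.11 REPL's {{:settings}} command, not something confirmed 
in this thread) is to raise the REPL's bytecode target before the call:

{code}
scala> :settings -target:jvm-1.8
scala> val tableEnv = TableEnvironment.create(settings)
{code}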


[jira] [Commented] (FLINK-15511) export org.apache.flink.table.api.TableException when flink 1.10 connect hive

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010468#comment-17010468
 ] 

chenchencc commented on FLINK-15511:


I did that, then ran into this problem:

scala> val tableEnv = TableEnvironment.create(settings)
:72: error: Static methods in interface require -target:jvm-1.8
 val tableEnv = TableEnvironment.create(settings)


[jira] [Comment Edited] (FLINK-15511) export org.apache.flink.table.api.TableException when flink 1.10 connect hive

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010468#comment-17010468
 ] 

chenchencc edited comment on FLINK-15511 at 1/8/20 8:29 AM:


Hi Rui Li

I did that, then ran into this problem:

scala> val tableEnv = TableEnvironment.create(settings)
 :72: error: Static methods in interface require -target:jvm-1.8
 val tableEnv = TableEnvironment.create(settings)


was (Author: chenchencc):
I did that, then ran into this problem:

scala> val tableEnv = TableEnvironment.create(settings)
:72: error: Static methods in interface require -target:jvm-1.8
 val tableEnv = TableEnvironment.create(settings)


[jira] [Commented] (FLINK-15500) "SQL Client end-to-end test for Kafka" failed in my local environment

2020-01-08 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010469#comment-17010469
 ] 

Yu Li commented on FLINK-15500:
---

[~jark] Just to confirm: was this problem observed in a master nightly run or 
in release-1.10? There has been no failure in the recent release-1.10 nightly 
runs, so it would be strange if the issue could be reproduced in a local 
environment but not in the release-1.10 nightly runs.

> "SQL Client end-to-end test for Kafka" failed in my local environment
> -
>
> Key: FLINK-15500
> URL: https://issues.apache.org/jira/browse/FLINK-15500
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Jark Wu
>Priority: Critical
> Fix For: 1.10.0
>
>
> The "SQL Client end-to-end test for modern Kafka" (aka. 
> {{test_sql_client_kafka.sh}}) test is failed in my local environment with 
> following exception:
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:190)
> Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could 
> not create execution context.
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:759)
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:228)
>   at org.apache.flink.table.client.SqlClient.start(SqlClient.java:98)
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:178)
> Caused by: java.lang.NoClassDefFoundError: org/apache/avro/io/DatumReader
>   at 
> org.apache.flink.formats.avro.AvroRowFormatFactory.createDeserializationSchema(AvroRowFormatFactory.java:64)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactoryBase.getDeserializationSchema(KafkaTableSourceSinkFactoryBase.java:281)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactoryBase.createStreamTableSource(KafkaTableSourceSinkFactoryBase.java:161)
>   at 
> org.apache.flink.table.factories.StreamTableSourceFactory.createTableSource(StreamTableSourceFactory.java:49)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:371)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$6(ExecutionContext.java:552)
>   at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:550)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:487)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.(ExecutionContext.java:159)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.(ExecutionContext.java:118)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:748)
>   ... 3 more
> Caused by: java.lang.ClassNotFoundException: org.apache.avro.io.DatumReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 15 more
> [FAIL] Test script contains errors.
> Checking of logs skipped.
> [FAIL] 'flink-end-to-end-tests/test-scripts/test_sql_client_kafka.sh' failed 
> after 0 minutes and 27 seconds! Test exited with exit code 1
> {code}
> I guess the reason the nightly Travis run didn't report this is that "e2e - 
> misc - hadoop 2.8" failed on the "Streaming File Sink s3 end-to-end test", 
> which resulted in all the following cases (including the SQL Client 
> end-to-end tests) not being triggered. For example: 
> https://api.travis-ci.org/v3/job/633275285/log.txt
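
Given the {{ClassNotFoundException}} for {{org.apache.avro.io.DatumReader}}, a 
plausible local workaround (an assumption, not verified in this thread) is to 
put the Avro classes on the SQL Client classpath, e.g.:

{code}
# Hypothetical jar names/versions; use the flink-avro artifact and the Avro
# dependency that match the build under test.
cp flink-avro-1.10-SNAPSHOT.jar avro-1.8.2.jar "$FLINK_DIR/lib/"
{code}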





[GitHub] [flink] flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] Support precision of TimeType

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] 
Support precision of TimeType
URL: https://github.com/apache/flink/pull/10654#issuecomment-568150600
 
 
   
   ## CI report:
   
   * 210649f07986a5c815bd39bb532a86f06ee412b8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141996849) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3838)
 
   * 817a1efa635ff3201c943fbf579432b4ef5b8c7b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142075142) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3849)
 
   * a942e8e4a3b9f458b4f903280ccef3ccf2364770 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142375460) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3937)
 
   * 383d6c9e55ecc74580f998b20ac60cf7990779d1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143376910) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4161)
 
   * a04621c54d57075fd9f747642b071c1d6a68767b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143499280) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4173)
 
   * cfa0072c7b98c2ffcc1aced9fbc148cc12755bad Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143509092) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4177)
 
   * 3bf4ff084432ced2e7da115aea1eb17cf925096b Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143519873) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4183)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10665: [FLINK-15354] Start and stop minikube only in kubernetes related e2e tests

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10665: [FLINK-15354] Start and stop 
minikube only in kubernetes related e2e tests
URL: https://github.com/apache/flink/pull/10665#issuecomment-568440738
 
 
   
   ## CI report:
   
   * 5a0d5d3d499347ca216e19175ff5f066a6d9b458 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142099952) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3861)
 
   * 255d89be8069b36be2b980ea6dba4798568160bb Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143507134) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4176)
 
   * 9168102e928bacaa8026407f77a33b80a8ddeae4 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143514708) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4179)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] pnowojski commented on issue #10778: [Flink-15355][plugins] Added parent first patterns for plugins.

2020-01-08 Thread GitBox
pnowojski commented on issue #10778: [Flink-15355][plugins] Added parent first 
patterns for plugins.
URL: https://github.com/apache/flink/pull/10778#issuecomment-571946609
 
 
Travis looks green. Can you remove the first commit so I can merge it?




[jira] [Assigned] (FLINK-15172) Optimize the operator algorithm to lazily allocate memory

2020-01-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15172:
---

Assignee: Jingsong Lee

> Optimize the operator algorithm to lazily allocate memory
> -
>
> Key: FLINK-15172
> URL: https://issues.apache.org/jira/browse/FLINK-15172
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> Now, after FLINK-14063, operators will get all of the TaskManager's managed 
> memory. The cost of over-allocating memory is very high and leads to 
> performance regressions in small batch SQL jobs:
>  * Allocating memory incurs the cost of the memory management algorithm.
>  * Allocating memory incurs the cost of memory initialization, which sets 
> all memory to zero. This initialization also requires the operating system 
> to actually allocate physical memory.
>  * Over-allocating memory squashes the file cache too.
> We can optimize the operator algorithms to apply lazy allocation and avoid 
> meaningless memory allocation.
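
To illustrate the lazy-allocation idea (a minimal sketch under the description 
above, not the actual {{LazyMemorySegmentPool}} from the corresponding PR), 
segments can be allocated on first use, up to a fixed budget, and recycled 
instead of freed:

{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: nothing is allocated at construction time; segments are created on
// demand up to a fixed budget and recycled for reuse.
public final class LazySegmentPoolSketch {

    private final int segmentSize;
    private final int maxSegments;
    private final Deque<ByteBuffer> recycled = new ArrayDeque<>();
    private int allocated;

    public LazySegmentPoolSketch(int segmentSize, int maxSegments) {
        this.segmentSize = segmentSize;
        this.maxSegments = maxSegments;
    }

    /** Hands out a segment, allocating one only when the free list is empty. */
    public ByteBuffer nextSegment() {
        ByteBuffer segment = recycled.pollFirst();
        if (segment != null) {
            return segment;
        }
        if (allocated >= maxSegments) {
            throw new IllegalStateException("memory budget exhausted");
        }
        allocated++;
        // Pay the allocation/zeroing cost only when the operator really needs it.
        return ByteBuffer.allocateDirect(segmentSize);
    }

    /** Returns a segment to the pool for later reuse. */
    public void returnSegment(ByteBuffer segment) {
        segment.clear();
        recycled.addFirst(segment);
    }
}
{code}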







[GitHub] [flink] flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem when submitting PyFlink UDF j

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick 
NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem 
when submitting PyFlink UDF jobs multiple times.
URL: https://github.com/apache/flink/pull/10772#issuecomment-571035348
 
 
   
   ## CI report:
   
   * ae56dd2d9b0d281ee691f3a7e7667dd960eda232 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143204260) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4112)
 
   * 9e73df7bdf3ce66c243004ad8fb0b78eacc3776f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143212537) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4118)
 
   * 58ef3c7a98a47f98cf15d69b3c373f8e6928175b Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/143519857) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4182)
 
   * 41b9d57a5a85793eb5036e8943605d0c1e089807 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143523160) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4184)
 
   * 5dcfc4d530298996cf35ca64e4494572abad4106 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] JingsongLi opened a new pull request #10797: [FLINK-15172][table-blink] Optimize the operator algorithm to lazily allocate memory

2020-01-08 Thread GitBox
JingsongLi opened a new pull request #10797: [FLINK-15172][table-blink] 
Optimize the operator algorithm to lazily allocate memory
URL: https://github.com/apache/flink/pull/10797
 
 
   
   ## What is the purpose of the change
   
   After FLINK-14063, operators receive all of the TaskManager's managed 
memory. The cost of over-allocating memory is very high and leads to 
performance regressions in small batch SQL jobs:
   
   - Allocating memory incurs the cost of the memory management algorithm.
   - Allocating memory incurs the cost of memory initialization, which zeroes 
all memory; this initialization forces the operating system to actually 
allocate physical memory.
   - Over-allocating memory also squeezes out the file cache.
   
   We can optimize the operator algorithms to allocate memory lazily, avoiding 
meaningless memory allocation, as sketched below.
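   As a rough illustration only (the class and method names here are 
hypothetical, not the actual Flink API), a lazily allocating segment pool 
could look like this:
   
   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;
   
   /** Hypothetical sketch: segments are created on first demand, not up front. */
   public class LazySegmentPool {
   
       private static final int SEGMENT_SIZE = 32 * 1024;
   
       private final int maxSegments;          // upper bound from the memory budget
       private final Deque<byte[]> freeList = new ArrayDeque<>();
       private int allocated;                  // segments created so far
   
       public LazySegmentPool(int maxSegments) {
           this.maxSegments = maxSegments;
       }
   
       /** Hands out a recycled segment if available; allocates lazily otherwise. */
       public byte[] nextSegment() {
           byte[] recycled = freeList.poll();
           if (recycled != null) {
               return recycled;
           }
           if (allocated >= maxSegments) {
               throw new IllegalStateException(
                   "Budget of " + maxSegments + " segments exhausted");
           }
           allocated++;
           return new byte[SEGMENT_SIZE]; // pay the allocation/zeroing cost only when needed
       }
   
       /** Returns a segment to the pool for later reuse. */
       public void release(byte[] segment) {
           freeList.push(segment);
       }
   }
   ```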
   
   ## Brief change log
   
   - Introduce LazyMemorySegmentPool
   - Improve operators to lazily allocate memory
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? JavaDocs
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] AHeise opened a new pull request #10798: [FLINK-15355][plugins] Added parent first patterns for plugins.

2020-01-08 Thread GitBox
AHeise opened a new pull request #10798: [FLINK-15355][plugins] Added parent 
first patterns for plugins.
URL: https://github.com/apache/flink/pull/10798
 
 
   Forward port of https://github.com/apache/flink/pull/10778 . The main 
commit is untouched; only the commit that re-enables tests is dropped, as 
those tests were never disabled on master.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10794: [FLINK-15490][kafka][test-stability] Enable idempotence producing in …

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10794: [FLINK-15490][kafka][test-stability] 
Enable idempotence producing in …
URL: https://github.com/apache/flink/pull/10794#issuecomment-571913385
 
 
   
   ## CI report:
   
   * df73f505165e0f64756c396403d8738b3ae07bcc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143514690) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4178)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10796: [FLINK-15424][StateBackend] Make AppendintState#add refuse to add null element

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10796: [FLINK-15424][StateBackend] Make 
AppendintState#add refuse to add null element
URL: https://github.com/apache/flink/pull/10796#issuecomment-571939777
 
 
   
   ## CI report:
   
   * 3c4dece4d584ae396c69247e3bcf978807b9f232 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143523185) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4185)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10795: [FLINK-15510] Pretty Print StreamGraph JSON Plan

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10795: [FLINK-15510] Pretty Print 
StreamGraph JSON Plan
URL: https://github.com/apache/flink/pull/10795#issuecomment-571921686
 
 
   
   ## CI report:
   
   * 989a1cfecbe74568d9fc4508a884eddfb4f990a2 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143517107) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4180)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15448) Log host informations for TaskManager failures.

2020-01-08 Thread Victor Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010485#comment-17010485
 ] 

Victor Wong commented on FLINK-15448:
-

I agree that using an extended ResourceID would be a great help, but not via 
TaskManagerID/JobManagerID. Maybe we should distinguish different resources by 
their underlying deployment service, e.g. 
YarnResourceID/KubernetesResourceID/MesosResourceID.

```java
class YarnResourceID extends ResourceID {
  public YarnResourceID(Container container) {
    // Sketch: encode the container id plus its host, so that logs show
    // where the TaskManager was running (exact format to be discussed).
    super(container.getId() + "@" + container.getNodeId());
  }
}
```


> Log host informations for TaskManager failures.
> ---
>
> Key: FLINK-15448
> URL: https://issues.apache.org/jira/browse/FLINK-15448
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Assignee: Victor Wong
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> With Flink on Yarn, sometimes we ran into an exception like this:
> {code:java}
> java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id 
> container_  timed out.
> {code}
> We'd like to find out the host of the lost TaskManager to log into it for 
> more details, we have to check the previous logs for the host information, 
> which is a little time-consuming.
> Maybe we can add more descriptive information to ResourceID of Yarn 
> containers, e.g. "container_xxx@host_name:port_number".
> Here's the demo:
> {code:java}
> class ResourceID {
>   final String resourceId;
>   final String details;
>   public ResourceID(String resourceId) {
> this.resourceId = resourceId;
> this.details = resourceId;
>   }
>   public ResourceID(String resourceId, String details) {
> this.resourceId = resourceId;
> this.details = details;
>   }
>   public String toString() {
> return details;
>   } 
> }
> // in flink-yarn
> private void startTaskExecutorInContainer(Container container) {
>   final String containerIdStr = container.getId().toString();
>   final String containerDetail = container.getId() + "@" + 
> container.getNodeId();  
>   final ResourceID resourceId = new ResourceID(containerIdStr, 
> containerDetail);
>   ...
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15172) Optimize the operator algorithm to lazily allocate memory

2020-01-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15172:
---
Labels: pull-request-available  (was: )

> Optimize the operator algorithm to lazily allocate memory
> -
>
> Key: FLINK-15172
> URL: https://issues.apache.org/jira/browse/FLINK-15172
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> After FLINK-14063, operators receive all of the TaskManager's managed memory. 
> The cost of over-allocating memory is very high and leads to performance 
> regressions in small batch SQL jobs:
>  * Allocating memory incurs the cost of the memory management algorithm.
>  * Allocating memory incurs the cost of memory initialization, which zeroes 
> all memory; this initialization forces the operating system to actually 
> allocate physical memory.
>  * Over-allocating memory also squeezes out the file cache.
> We can optimize the operator algorithms to allocate memory lazily and avoid 
> meaningless memory allocation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * d48b95539070679639d5e8c4e640b9a710d7 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/129403938) 
   * bc7ff380b3c3deb9751c0a596c8fef46c3b48ef3 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/129413892) 
   * 58fe983f436f82e015d7c3635708d60235b9f078 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131251050) 
   * c568fa423b04cccfd439fa5aa0a9bd9d032806f7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131302469) 
   * aa2b0e844d60c92dfcacb6e226034a9d1457298a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131417190) 
   * 05b11cacd5473e9a0248d4eb7d02761d5e3f427e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/138967122) 
   * 938ebf9eb90ebbd2ec75dd8d709d3e2990c68319 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139079022) 
   * 8c72f98dee4218e65841d5f8850dc8063b16439a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139083440) 
   * 0739a623232a0c45e761c2db3f7fd4eba8d84bce Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141526489) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3703)
 
   * 515f012f8565b4a86959e77a6d19b94ce78d3cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142068529) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3846)
 
   * 408305ba95416d7da6fb0f7143927d3f4123633b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143517119) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4181)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10798: [FLINK-15355][plugins] Added parent first patterns for plugins.

2020-01-08 Thread GitBox
flinkbot commented on issue #10798: [FLINK-15355][plugins] Added parent first 
patterns for plugins.
URL: https://github.com/apache/flink/pull/10798#issuecomment-571951464
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b3e2049026c1f90b8feb4ab3e86e470e1ca6 (Wed Jan 08 
08:54:25 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10797: [FLINK-15172][table-blink] Optimize the operator algorithm to lazily allocate memory

2020-01-08 Thread GitBox
flinkbot commented on issue #10797: [FLINK-15172][table-blink] Optimize the 
operator algorithm to lazily allocate memory
URL: https://github.com/apache/flink/pull/10797#issuecomment-571951479
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e904ab518c4d3b26bf27a89c61b9df30af54b6ed (Wed Jan 08 
08:54:28 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangyang0918 commented on a change in pull request #10746: [FLINK-15417] Remove the docker volume or mount when starting Mesos e…

2020-01-08 Thread GitBox
wangyang0918 commented on a change in pull request #10746: [FLINK-15417] Remove 
the docker volume or mount when starting Mesos e…
URL: https://github.com/apache/flink/pull/10746#discussion_r364120873
 
 

 ##
 File path: flink-end-to-end-tests/test-scripts/common_mesos_docker.sh
 ##
 @@ -52,11 +54,19 @@ function start_flink_cluster_with_mesos() {
 set_config_key "jobmanager.rpc.address" "mesos-master"
 set_config_key "rest.address" "mesos-master"
 
-docker exec -itd mesos-master bash -c "${FLINK_DIR}/bin/mesos-appmaster.sh 
-Dmesos.master=mesos-master:5050"
+docker cp ${FLINK_DIR} mesos-master:$MESOS_FLINK_DIR
+docker cp ${END_TO_END_DIR}/test-scripts mesos-master:$MESOS_END_TO_END_DIR
+docker cp ${END_TO_END_DIR}/test-scripts mesos-slave:$MESOS_END_TO_END_DIR
+
+docker exec -itd mesos-master bash -c 
"${MESOS_FLINK_DIR}/bin/mesos-appmaster.sh -Dmesos.master=mesos-master:5050"
 wait_rest_endpoint_up "http://${NODENAME}:8081/taskmanagers"; "Dispatcher" 
"\{\"taskmanagers\":\[.*\]\}"
 return 0
 }
 
+function copy_logs_from_container () {
 
 Review comment:
   If the logs show up when the test fails and are cleaned up when it 
succeeds, that makes sense to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
sjwiesman commented on a change in pull request #10777: [FLINK-10939][doc] Add 
documents for running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#discussion_r364122086
 
 

 ##
 File path: docs/ops/deployment/native_kubernetes.zh.md
 ##
 @@ -0,0 +1,199 @@
+---
+title:  "Native Kubernetes 安装"
+nav-title: Native Kubernetes
+nav-parent_id: deployment
+is_beta: true
+nav-pos: 7
+---
+
+
+This page describes how to deploy a Flink session cluster natively on 
[Kubernetes](https://kubernetes.io).
+
+* This will be replaced by the TOC
+{:toc}
+
+
+Flink's native Kubernetes integration is still experimental. There may be 
changes in the configuration and CLI flags in later versions. Job clusters are 
not yet supported.
+
+
+## Requirements
+
+- Kubernetes 1.9 or above.
+- KubeConfig, which has access to list, create, delete pods and services, 
configurable via `~/.kube/config`. You can verify permissions by running 
`kubectl auth can-i  pods`.
+- Kubernetes DNS enabled.
+- A service account with [RBAC](#rbac) permissions to create and delete pods.
+
+## Flink Kubernetes Session
+
+### Start Flink Session
+
+Follow these instructions to start a Flink Session within your Kubernetes 
cluster.
+
+A session will start all required Flink services (JobManager and TaskManagers) 
so that you can submit programs to the cluster.
+Note that you can run multiple programs per session.
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh
+{% endhighlight %}
+
+All the Kubernetes configuration options can be found in our [configuration 
guide]({{ site.baseurl }}/zh/ops/config.html#kubernetes).
+
+**Example**: Issue the following command to start a session cluster with 4 GB 
of memory and 2 CPUs with 4 slots per TaskManager:
+
+{% highlight bash %}
+./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id= \
+  -Dtaskmanager.memory.process.size=4096m \
+  -Dkubernetes.taskmanager.cpu=2 \
+  -Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}
+
+The system will use the configuration in `conf/flink-conf.yaml`.
+Please follow our [configuration guide]({{ site.baseurl }}/zh/ops/config.html) 
if you want to change something.
 
 Review comment:
   ```suggestion
   Please follow our [configuration guide]({{ site.baseurl }}/ops/config.html) 
if you want to change something.
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
sjwiesman commented on a change in pull request #10777: [FLINK-10939][doc] Add 
documents for running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#discussion_r364122253
 
 

 ##
 File path: docs/ops/deployment/kubernetes.zh.md
 ##
 @@ -28,6 +28,8 @@ This page describes how to deploy a Flink job and session 
cluster on [Kubernetes
 * This will be replaced by the TOC
 {:toc}
 
+{% info %} This page describes deploying a [standalone](#cluster_setup.html) 
Flink session on top of Kubernetes. For information on native Kubernetes 
deployments read [here]({{ site.baseurl 
}}/zh/ops/deployment/native_kubernetes.html).
 
 Review comment:
   ```suggestion
   {% info %} This page describes deploying a [standalone](#cluster_setup.html) 
Flink session on top of Kubernetes. For information on native Kubernetes 
deployments read [here]({{ site.baseurl 
}}/ops/deployment/native_kubernetes.html).
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
sjwiesman commented on a change in pull request #10777: [FLINK-10939][doc] Add 
documents for running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#discussion_r364122364
 
 

 ##
 File path: docs/ops/deployment/native_kubernetes.zh.md
 ##
 @@ -0,0 +1,199 @@
+---
+title:  "Native Kubernetes 安装"
+nav-title: Native Kubernetes
+nav-parent_id: deployment
+is_beta: true
+nav-pos: 7
+---
+
+
+This page describes how to deploy a Flink session cluster natively on 
[Kubernetes](https://kubernetes.io).
+
+* This will be replaced by the TOC
+{:toc}
+
+
+Flink's native Kubernetes integration is still experimental. There may be 
changes in the configuration and CLI flags in later versions. Job clusters are 
not yet supported.
+
+
+## Requirements
+
+- Kubernetes 1.9 or above.
+- KubeConfig, which has access to list, create, delete pods and services, 
configurable via `~/.kube/config`. You can verify permissions by running 
`kubectl auth can-i  pods`.
+- Kubernetes DNS enabled.
+- A service account with [RBAC](#rbac) permissions to create and delete pods.
+
+## Flink Kubernetes Session
+
+### Start Flink Session
+
+Follow these instructions to start a Flink Session within your Kubernetes 
cluster.
+
+A session will start all required Flink services (JobManager and TaskManagers) 
so that you can submit programs to the cluster.
+Note that you can run multiple programs per session.
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh
+{% endhighlight %}
+
+All the Kubernetes configuration options can be found in our [configuration 
guide]({{ site.baseurl }}/zh/ops/config.html#kubernetes).
 
 Review comment:
   ```suggestion
   All the Kubernetes configuration options can be found in our [configuration 
guide]({{ site.baseurl }}/ops/config.html#kubernetes).
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] azagrebin closed pull request #10608: [FLINK-15300][Runtime] Fix sanity check to not fail if shuffle memory fraction is out of min/max range

2020-01-08 Thread GitBox
azagrebin closed pull request #10608: [FLINK-15300][Runtime] Fix sanity check 
to not fail if shuffle memory fraction is out of min/max range
URL: https://github.com/apache/flink/pull/10608
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] azagrebin commented on issue #10608: [FLINK-15300][Runtime] Fix sanity check to not fail if shuffle memory fraction is out of min/max range

2020-01-08 Thread GitBox
azagrebin commented on issue #10608: [FLINK-15300][Runtime] Fix sanity check to 
not fail if shuffle memory fraction is out of min/max range
URL: https://github.com/apache/flink/pull/10608#issuecomment-571954210
 
 
   Thanks again for the reviews @xintongsong @tillrohrmann 
   The test failure is a [known 
instability](https://issues.apache.org/jira/browse/FLINK-15247); the build passed in my 
[CI](https://travis-ci.org/azagrebin/flink/builds/629296531)
   merged into master by d22fdc39a86496ebfc74914a72916d8a0ea7ab89
   merged into 1.10 by a342e418a2d8df52645dd75588f8b9f74a07ad63


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 opened a new pull request #10799: [hotfix] Change the text of PyPI Author from `Flink Developers` to `A…

2020-01-08 Thread GitBox
sunjincheng121 opened a new pull request #10799: [hotfix] Change the text of 
PyPI Author from `Flink Developers` to `A…
URL: https://github.com/apache/flink/pull/10799
 
 
   
   
   ## What is the purpose of the change
   Change the text of PyPI Author from `Flink Developers` to `Apache Software 
Foundation`.
   
   ## Brief change log
   
 - Change the text of PyPI Author from `Flink Developers` to `Apache 
Software Foundation` in `setup.py`.
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change does not need any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15300) Shuffle memory fraction sanity check does not account for its min/max limit

2020-01-08 Thread Andrey Zagrebin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Zagrebin closed FLINK-15300.
---
Resolution: Fixed

merged into master by d22fdc39a86496ebfc74914a72916d8a0ea7ab89
merged into 1.10 by a342e418a2d8df52645dd75588f8b9f74a07ad63

> Shuffle memory fraction sanity check does not account for its min/max limit
> ---
>
> Key: FLINK-15300
> URL: https://issues.apache.org/jira/browse/FLINK-15300
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If a configuration results in the shuffle memory size being set to its min 
> or max rather than to the fraction, then during startup the starting TM 
> parses the generated dynamic properties, and the sanity check 
> (TaskExecutorResourceUtils#sanityCheckShuffleMemory) fails because it checks 
> the exact fraction against the min/max value.
> For example, start a TM with the following Flink config:
> {code:java}
> taskmanager.memory.total-flink.size: 350m
> taskmanager.memory.framework.heap.size: 16m
> taskmanager.memory.shuffle.fraction: 0.1{code}
> The calculation will happen for total Flink memory and will result in the 
> following extra program args:
> {code:java}
> taskmanager.memory.shuffle.max: 67108864b
> taskmanager.memory.framework.off-heap.size: 134217728b
> taskmanager.memory.managed.size: 146800642b
> taskmanager.cpu.cores: 1.0
> taskmanager.memory.task.heap.size: 2097150b
> taskmanager.memory.task.off-heap.size: 0b
> taskmanager.memory.shuffle.min: 67108864b{code}
> where the size derived from the fraction is less than the shuffle memory min 
> size (64mb), so it was set to the min value: 64mb.
> When the TM starts, the calculation now happens for the explicit task heap 
> and managed memory, but also with the explicit total Flink memory, and 
> TaskExecutorResourceUtils#sanityCheckShuffleMemory throws the following 
> exception:
> {code:java}
> org.apache.flink.configuration.IllegalConfigurationException:
> Derived Shuffle Memory size(64 Mb (67108864 bytes)) does not match configured 
> Shuffle Memory fraction (0.1000149011612).
> at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.sanityCheckShuffleMemory(TaskExecutorResourceUtils.java:552)
> at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.deriveResourceSpecWithExplicitTaskAndManagedMemory(TaskExecutorResourceUtils.java:183)
> at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:135)
> {code}
> This can be fixed by checking whether the fraction to assert is within the 
> min/max range.
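
For illustration, a min/max-aware version of the check could look like the 
following sketch (the method and argument names are assumptions, not the 
actual Flink code):

{code:java}
// Hypothetical sketch, not the actual Flink implementation.
static void sanityCheckShuffleMemoryFraction(
        long derivedShuffleBytes,
        long minBytes,
        long maxBytes,
        double configuredFraction,
        long totalFlinkMemoryBytes) {

    long fractionBytes = (long) (totalFlinkMemoryBytes * configuredFraction);

    // If the derived size was clamped to the min or max limit, the exact
    // fraction cannot match, so only enforce it strictly inside the range.
    boolean clampedToLimit =
            derivedShuffleBytes == minBytes || derivedShuffleBytes == maxBytes;

    if (!clampedToLimit && derivedShuffleBytes != fractionBytes) {
        throw new IllegalArgumentException(
                "Derived shuffle memory size (" + derivedShuffleBytes
                        + " bytes) does not match the configured fraction "
                        + configuredFraction);
    }
}
{code}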



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] Support precision of TimeType

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] 
Support precision of TimeType
URL: https://github.com/apache/flink/pull/10654#issuecomment-568150600
 
 
   
   ## CI report:
   
   * 210649f07986a5c815bd39bb532a86f06ee412b8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141996849) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3838)
 
   * 817a1efa635ff3201c943fbf579432b4ef5b8c7b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142075142) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3849)
 
   * a942e8e4a3b9f458b4f903280ccef3ccf2364770 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142375460) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3937)
 
   * 383d6c9e55ecc74580f998b20ac60cf7990779d1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143376910) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4161)
 
   * a04621c54d57075fd9f747642b071c1d6a68767b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143499280) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4173)
 
   * cfa0072c7b98c2ffcc1aced9fbc148cc12755bad Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143509092) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4177)
 
   * 3bf4ff084432ced2e7da115aea1eb17cf925096b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143519873) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4183)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10799: [hotfix] Change the text of PyPI Author from `Flink Developers` to `A…

2020-01-08 Thread GitBox
flinkbot commented on issue #10799: [hotfix] Change the text of PyPI Author 
from `Flink Developers` to `A…
URL: https://github.com/apache/flink/pull/10799#issuecomment-571955452
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6953dc4f81781e23f750b9774a4dda17e4de833d (Wed Jan 08 
09:05:36 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangyang0918 commented on issue #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
wangyang0918 commented on issue #10777: [FLINK-10939][doc] Add documents for 
running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#issuecomment-571955880
 
 
   @sjwiesman I am not sure whether we could remove `/zh`, since I find lots 
of usages of `{{ site.baseurl }}/zh` in the doc module and they work well. 
   
   
https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/api_concepts.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 merged pull request #10799: [hotfix] [python]Change the text of PyPI Author from `Flink Developers` to `A…

2020-01-08 Thread GitBox
sunjincheng121 merged pull request #10799: [hotfix] [python]Change the text of 
PyPI Author from `Flink Developers` to `A…
URL: https://github.com/apache/flink/pull/10799
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15174) FLINK security using PKI mutual auth needs certificate pinning or Private CA

2020-01-08 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010492#comment-17010492
 ] 

Stephan Ewen commented on FLINK-15174:
--

[~dasbh] Thank you for confirming.

Sounds like a strange restriction in the JDK (unless there is a deeper reason I 
am not immediately seeing).

In that case, your proposal sounds like a good workaround. Will try to 
review/merge this soon.

> FLINK security using PKI mutual auth needs certificate pinning or Private CA
> 
>
> Key: FLINK-15174
> URL: https://issues.apache.org/jira/browse/FLINK-15174
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration, Runtime / REST
>Affects Versions: 1.9.0, 1.9.1, 1.10.0
>Reporter: Bhagavan
>Assignee: Bhagavan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The current design for Flink security for internal/REST relies on PKI mutual 
> authentication. However, the design is not robust if the CA used for 
> generating certificates is a public CA or a firmwide internal CA. This is 
> due to how the chain of trust works whilst validating the client 
> certificate, i.e. any certificate signed by the same CA would be able to 
> make a connection to the internal Flink network.
> Proposed improvement:
> In an environment where operators are constrained to use a firmwide internal 
> CA, allow the operator to specify the certificate fingerprint to further 
> protect the cluster, allowing only a specific certificate.
> This change should be backward compatible, so that one can still use just a 
> certificate with a private CA.
> Changes are easy to implement as all network communications are done using 
> netty and netty provides FingerprintTrustManagerFactory.
> Happy to send PR if we agree on the change.
> Document corrections.
> From security documentation.
> [https://ci.apache.org/projects/flink/flink-docs-stable/ops/security-ssl.html]
> _"All internal connections are SSL authenticated and encrypted. The 
> connections use *mutual authentication*, meaning both server and client-side 
> of each connection need to present the certificate to each other. The 
> certificate acts effectively as a shared secret."_
> _-_ This is not exactly true. Any party who obtains a client certificate 
> from the same CA would be able to form the connection even though the 
> certificate public/private keys are different. So it's not *a* shared secret 
> (merely a common signature).
> _Further doc says - "A common setup is to generate a dedicated certificate 
> (maybe self-signed) for a Flink deployment._
> - I think this is the only way to make the cluster secure. i.e. create 
> private CA just for the cluster.
>  
>  
>  
>  
>  
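
For reference, a minimal sketch of certificate pinning with Netty's 
FingerprintTrustManagerFactory could look like this (the surrounding class 
and the way the fingerprint is obtained are illustrative assumptions, not 
Flink code):

{code:java}
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.FingerprintTrustManagerFactory;

public class PinnedSslContexts {

    /**
     * Builds a client SslContext that trusts only the certificate with the
     * given fingerprint, regardless of which CA signed it.
     */
    public static SslContext forPinnedPeer(String hexFingerprint) throws Exception {
        FingerprintTrustManagerFactory pinning =
                new FingerprintTrustManagerFactory(hexFingerprint);
        return SslContextBuilder.forClient()
                .trustManager(pinning)
                .build();
    }
}
{code}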



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pnowojski merged pull request #10778: [Flink-15355][plugins] Added parent first patterns for plugins.

2020-01-08 Thread GitBox
pnowojski merged pull request #10778: [Flink-15355][plugins] Added parent first 
patterns for plugins.
URL: https://github.com/apache/flink/pull/10778
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2020-01-08 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-15355.
--
Fix Version/s: 1.11.0
   Resolution: Fixed

Merged to master as d46b07bc6ef44dde058f4a15710938d29cdc1798
Merged to release-1.10 as 
4c7562bf84c865859710dc79f05ae09541fa3335..14c26e489d1e8f7ae02080b3b69810b59e3ebe32

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>  ... 11 more
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1746)
>  ... 20 more
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>  at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:326)
>  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:274)
>  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFutu

[GitHub] [flink] pnowojski commented on issue #10798: [FLINK-15355][plugins] Added parent first patterns for plugins.

2020-01-08 Thread GitBox
pnowojski commented on issue #10798: [FLINK-15355][plugins] Added parent first 
patterns for plugins.
URL: https://github.com/apache/flink/pull/10798#issuecomment-571962981
 
 
   Backported manually while merging #10778 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem when submitting PyFlink UDF j

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick 
NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem 
when submitting PyFlink UDF jobs multiple times.
URL: https://github.com/apache/flink/pull/10772#issuecomment-571035348
 
 
   
   ## CI report:
   
   * ae56dd2d9b0d281ee691f3a7e7667dd960eda232 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143204260) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4112)
 
   * 9e73df7bdf3ce66c243004ad8fb0b78eacc3776f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143212537) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4118)
 
   * 58ef3c7a98a47f98cf15d69b3c373f8e6928175b Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/143519857) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4182)
 
   * 41b9d57a5a85793eb5036e8943605d0c1e089807 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/143523160) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4184)
 
   * 5dcfc4d530298996cf35ca64e4494572abad4106 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143527712) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4186)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2020-01-08 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010505#comment-17010505
 ] 

Piotr Nowojski edited comment on FLINK-15355 at 1/8/20 9:24 AM:


We have introduced separate config options for the parent-first patterns in 
plugins.

Merged to master as d46b07bc6ef44dde058f4a15710938d29cdc1798
Merged to release-1.10 as 
4c7562bf84c865859710dc79f05ae09541fa3335..14c26e489d1e8f7ae02080b3b69810b59e3ebe32


was (Author: pnowojski):
Merged to master as d46b07bc6ef44dde058f4a15710938d29cdc1798
Merged to release-1.10 as 
4c7562bf84c865859710dc79f05ae09541fa3335..14c26e489d1e8f7ae02080b3b69810b59e3ebe32

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>  ... 11 more
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1746)
>  ... 20 more
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>  at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:326)
>  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils

[GitHub] [flink] pnowojski closed pull request #10798: [FLINK-15355][plugins] Added parent first patterns for plugins.

2020-01-08 Thread GitBox
pnowojski closed pull request #10798: [FLINK-15355][plugins] Added parent first 
patterns for plugins.
URL: https://github.com/apache/flink/pull/10798
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on issue #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
sjwiesman commented on issue #10777: [FLINK-10939][doc] Add documents for 
running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#issuecomment-571963755
 
 
   You are correct, please disregard my comment. +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10797: [FLINK-15172][table-blink] Optimize the operator algorithm to lazily allocate memory

2020-01-08 Thread GitBox
flinkbot commented on issue #10797: [FLINK-15172][table-blink] Optimize the 
operator algorithm to lazily allocate memory
URL: https://github.com/apache/flink/pull/10797#issuecomment-571963932
 
 
   
   ## CI report:
   
   * e904ab518c4d3b26bf27a89c61b9df30af54b6ed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10798: [FLINK-15355][plugins] Added parent first patterns for plugins.

2020-01-08 Thread GitBox
flinkbot commented on issue #10798: [FLINK-15355][plugins] Added parent first 
patterns for plugins.
URL: https://github.com/apache/flink/pull/10798#issuecomment-571964018
 
 
   
   ## CI report:
   
   * b3e2049026c1f90b8feb4ab3e86e470e1ca6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14925) the return type of TO_TIMESTAMP should be Timestamp(9) instead of Timestamp(3)

2020-01-08 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010512#comment-17010512
 ] 

Zhenghua Gao commented on FLINK-14925:
--

[~jark] Makes sense. Is there any ticket to track supporting TIMESTAMP(9) as 
a rowtime attribute?

> the return type of TO_TIMESTAMP should be Timestamp(9) instead of Timestamp(3)
> --
>
> Key: FLINK-14925
> URL: https://issues.apache.org/jira/browse/FLINK-14925
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 commented on issue #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
wangyang0918 commented on issue #10777: [FLINK-10939][doc] Add documents for 
running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#issuecomment-571966712
 
 
   @sjwiesman Aha, thanks for your quick response. I have checked the zh 
document in the PR; it works well. 
   `./build_docs.sh -i -z`
   
   @aljoscha Please help to squash the first three commits and then merge. Many 
thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-14058.
---
Release Note: 
The memory configs for table operators, including 
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory, 
 are now weight hints rather than absolute memory requirements. In this way, 
operators are able to make use of all the managed memory of a slot. This helps 
the task run more stably if the slot managed memory is limited, or more 
efficiently if there is adequate slot managed memory (see the config sketch 
below).
  Resolution: Fixed
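
A minimal sketch of how these weight hints can be tuned, assuming a Blink 
planner TableEnvironment named tableEnv; the option keys are from the release 
note, while the values and the setter style are illustrative only:
{code:java}
// Hypothetical example: adjust the weight hints via the TableConfig.
// Configuration is org.apache.flink.configuration.Configuration; the
// keys come from the release note above, the values are arbitrary.
Configuration conf = tableEnv.getConfig().getConfiguration();
conf.setString("table.exec.resource.hash-agg.memory", "128 mb");
conf.setString("table.exec.resource.hash-join.memory", "128 mb");
conf.setString("table.exec.resource.sort.memory", "128 mb");
// With FLIP-53 these act as relative weights: the memory an operator
// actually gets is derived from the slot's managed memory, not taken
// literally from these values.
{code}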

> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010516#comment-17010516
 ] 

Zhu Zhu commented on FLINK-14058:
-

Hi [~lzljs3620320], would you help to verify the release note of FLIP-53?
It looks to me that the only user-visible part is the table operator config 
changes.

> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-14058:

Release Note: 
The memory configs for table operators are now weight hints rather than 
absolute memory requirements. The configs include
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory.
In this way, operators are able to make use of all the managed memory of a 
slot. It helps the task run more stably if the slot managed memory is limited, 
or more efficiently if there is adequate slot managed memory.

  was:
The memory configs for table operators, including 
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory, 
 are now weight hints rather than absolute memory requirements. In this way, 
operators are able to make use of all the managed memory of a slot. This helps 
the task run more stably if the slot managed memory is limited, or more 
efficiently if there is adequate slot managed memory.


> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15511) export org.apache.flink.table.api.TableException when flink 1.10 connect hive

2020-01-08 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010518#comment-17010518
 ] 

Rui Li commented on FLINK-15511:


[~chenchencc] Please try the following. Note that I changed to streaming mode, 
which means you can't write to Hive tables. Due to the issues with the Scala 
shell, I think the SQL CLI and custom programs are better options for trying 
out the Hive connector.
{noformat}
scala> import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.EnvironmentSettings

scala> val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
settings: org.apache.flink.table.api.EnvironmentSettings = 
org.apache.flink.table.api.EnvironmentSettings@55d8f6bb

scala> import org.apache.flink.table.api.TableConfig
import org.apache.flink.table.api.TableConfig

scala> val config = new TableConfig()
config: org.apache.flink.table.api.TableConfig = 
org.apache.flink.table.api.TableConfig@51a69e8f

scala> import 
org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl
import org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl

scala> val tableEnv = StreamTableEnvironmentImpl.create(senv, settings, config)
tableEnv: org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl 
= org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl@2220c5f7
{noformat}

> export org.apache.flink.table.api.TableException when flink 1.10 connect hive 
> --
>
> Key: FLINK-15511
> URL: https://issues.apache.org/jira/browse/FLINK-15511
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: flink master
> hive 1.2.1
>  
>Reporter: chenchencc
>Priority: Major
>  Labels: flink, hive
>
> *run scripts:*
> bin/start-scala-shell.sh yarn -qu bi -jm 1024m -tm 2048m
> import org.apache.flink.table.catalog.hive.HiveCatalog
>  val name = "myhive"
>  val defaultDatabase = "test"
>  val hiveConfDir = "/etc/hive/conf"
>  val version = "1.2.1" // or 1.2.1 2.3.4
>  val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
>  stenv.registerCatalog("myhive", hive)
>  stenv.useCatalog("myhive")
>  stenv.listTables
>  stenv.sqlQuery("select * from gsp_test3").toAppendStream[Row].print
>  *gsp_test3 table columns:*
> id int 
> name string
> *gsp_test3  table  storage:*
> txt file
>  
> *scripts run message*
> scala> import org.apache.flink.table.catalog.hive.HiveCatalog
>  import org.apache.flink.table.catalog.hive.HiveCatalog
> scala> val name = "myhive"
>  name: String = myhive
> scala> val defaultDatabase = "test"
>  defaultDatabase: String = test
> scala> val hiveConfDir = "/etc/hive/conf"
>  hiveConfDir: String = /etc/hive/conf
> scala> val version = "1.2.1" // or 1.2.1 2.3.4
>  version: String = 1.2.1
> scala> val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Setting hive conf dir as 
> /etc/hive/conf
>  20/01/08 14:36:10 WARN conf.HiveConf: HiveConf of name 
> hive.server2.enable.impersonation does not exist
>  20/01/08 14:36:10 WARN conf.HiveConf: HiveConf of name 
> hive.mapred.supports.subdirectories does not exist
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Created HiveCatalog 'myhive'
>  hive: org.apache.flink.table.catalog.hive.HiveCatalog = 
> org.apache.flink.table.catalog.hive.HiveCatalog@60729135
> scala> stenv.registerCatalog("myhive", hive)
>  20/01/08 14:36:10 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://bgnode4:9083
>  20/01/08 14:36:10 INFO hive.metastore: Connected to metastore.
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Connected to Hive metastore
> scala> stenv.useCatalog("myhive")
>  20/01/08 14:36:10 INFO catalog.CatalogManager: Set the current default 
> catalog as [myhive] and the current default database as [test].
> scala> stenv.listTables
>  res6: Array[String] = Array(amazonproductscore_test, 
> amazonproductscore_test_tmp, amazonshopmanagerkpi, bucketed_user, 
> bulkload_spark_gross_profit_items_zcm, dim_date_test, 
> dw_gross_profit_items_phoenix_test, dw_gross_profit_items_phoenix_test2, 
> dw_gross_profit_items_phoenix_test3, dw_gross_profit_items_phoenix_test4, 
> dw_gross_profit_items_phoenix_test5, gsp_test12, gsp_test2, gsp_test3, 
> hive_phoenix, ni, orderparent_test, orderparent_test2, 
> phoenix_orderparent_id_put_tb, phoenix_orderparent_id_put_tb2, 
> phoenix_orderparent_id_tb, productdailysales, result20190404, 
> result20190404_2, result20190404_3, result20190404_4_5_9, result20190404_5, 
> result20190404vat, result20190404vat11, result20190404vat12, 
> result20190404vat13, result20190404vat5, result20190404vat6_2, ...
>  scala> stenv.sqlQuery("select 

[jira] [Commented] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010520#comment-17010520
 ] 

Jingsong Lee commented on FLINK-14058:
--

Hi [~zhuzh], could you clarify the impact? Something like:

"After the user configures these config options, the actual memory allocated 
may be smaller or larger than the config options, which depends on the actual 
running slot memory capacity."

> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010520#comment-17010520
 ] 

Jingsong Lee edited comment on FLINK-14058 at 1/8/20 9:43 AM:
--

Hi [~zhuzh], could you clarify the impact? Something like:

"If the user configures these config options, the actual memory allocated may 
be smaller or larger than the config options, which depends on the actual 
running slot memory capacity."


was (Author: lzljs3620320):
Hi [~zhuzh], could you clarify the impact? Something like:

"After the user configures these config options, the actual memory allocated 
may be smaller or larger than the config options, which depends on the actual 
running slot memory capacity."

> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13554) ResourceManager should have a timeout on starting new TaskExecutors.

2020-01-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010522#comment-17010522
 ] 

Zhu Zhu commented on FLINK-13554:
-

This issue is triggered only when a TM gets stuck in launching before 
registering with the RM. Currently we only see this case in our stability 
tests, which intentionally break ZooKeeper and network connections.
So I agree that we can postpone it as long as we do not encounter this issue 
in production.

> ResourceManager should have a timeout on starting new TaskExecutors.
> 
>
> Key: FLINK-13554
> URL: https://issues.apache.org/jira/browse/FLINK-13554
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Priority: Critical
> Fix For: 1.10.0
>
>
> Recently, we encountered a case where one TaskExecutor got stuck during 
> launching on Yarn (without failing), so that the job could not recover from 
> continuous failovers.
> The reason the TaskExecutor got stuck was a problem in our environment. The 
> TaskExecutor got stuck somewhere after the ResourceManager started it and 
> was waiting for it to be brought up and register. 
> Later, when the slot request timed out, the job failed over and requested 
> slots from the ResourceManager again; the ResourceManager still saw a 
> TaskExecutor (the stuck one) being started and did not request a new 
> container from Yarn. Therefore, the job could not recover from the failure.
> I think that to avoid such an unrecoverable state, the ResourceManager needs 
> a timeout on starting new TaskExecutors. If starting a TaskExecutor takes 
> too long, it should just fail that TaskExecutor and start a new one.
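
A minimal sketch of the proposed timeout (all names here are illustrative 
assumptions, not Flink's actual ResourceManager code):
{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the proposed behavior, not Flink's actual code.
public class WorkerStartupTimeout {
    private final Set<String> pendingWorkers = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long timeoutMillis = 300_000L; // assumed 5 minute timeout

    void onWorkerStarted(String resourceId) {
        pendingWorkers.add(resourceId);
        scheduler.schedule(() -> {
            // The worker never registered within the timeout: assume it is
            // stuck, release it, and ask the cluster for a replacement.
            if (pendingWorkers.remove(resourceId)) {
                releaseWorker(resourceId);
                requestNewWorker();
            }
        }, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void onWorkerRegistered(String resourceId) {
        pendingWorkers.remove(resourceId); // registration beat the timeout
    }

    void releaseWorker(String id) { /* e.g. release the Yarn container */ }

    void requestNewWorker() { /* e.g. request a new Yarn container */ }
}
{code}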



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on issue #10769: [FLINK-15479]Override explainSource method for JDBCTableSource

2020-01-08 Thread GitBox
JingsongLi commented on issue #10769: [FLINK-15479]Override explainSource 
method for JDBCTableSource
URL: https://github.com/apache/flink/pull/10769#issuecomment-571973022
 
 
   @wangxlong Yes, you can.
   LGTM, +1
   CC: @wuchong 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-14058:

Release Note: 
The memory configs for table operators are weight hints now rather than 
absolute memory requirements. This means that the actual managed memory 
allocated may be smaller or larger than the config values, which depends on the 
actual slot managed memory capacity.
The configs includes
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory.
This ensures that operators do not over allocate memory or leave available 
memory unallocated. It helps the task to run more stable if the slot managed 
memory is limited, or more efficient if their are adequate slot managed memory.


  was:
The memory configs for table operators are weight hints now rather than 
absolute memory requirements. The configs includes
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory.
In this way, operators would be able to make use of all the managed memory of a 
slot. It helps the task to run more stable if the slot managed memory is 
limited, or more efficient if their are adequate slot managed memory.


> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010529#comment-17010529
 ] 

Zhu Zhu commented on FLINK-14058:
-

[~lzljs3620320], Thanks a lot!
Updated the release note. Adjusted your suggestion a bit, since the configs 
have default values which take effect even if they are not configured.

> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-14058:

Release Note: 
The memory configs for table operators are weight hints now rather than 
absolute memory requirements. This means that the actual managed memory 
allocated may be smaller or larger than the config values, which depends on the 
actual slot managed memory capacity.
The configs include
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory.
This ensures that operators do not over allocate memory or leave available 
memory unallocated. It helps the task to run more stable if the slot managed 
memory is limited, or more efficient if their are adequate slot managed memory.


  was:
The memory configs for table operators are weight hints now rather than 
absolute memory requirements. This means that the actual managed memory 
allocated may be smaller or larger than the config values, which depends on the 
actual slot managed memory capacity.
The configs includes
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory.
This ensures that operators do not over allocate memory or leave available 
memory unallocated. It helps the task to run more stable if the slot managed 
memory is limited, or more efficient if their are adequate slot managed memory.



> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-14058:

Release Note: 
The memory configs for table operators are now weight hints rather than 
absolute memory requirements. This means that the actual managed memory 
allocated may be smaller or larger than the config values, which depends on 
the actual slot managed memory capacity.
The affected configs are {{table.exec.resource.external-buffer-memory}}, 
{{table.exec.resource.hash-agg.memory}}, 
{{table.exec.resource.hash-join.memory}}, and 
{{table.exec.resource.sort.memory}}.
This ensures that operators do not over-allocate memory or leave available 
memory unallocated. It helps the task run more stably if the slot managed 
memory is limited, or more efficiently if there is adequate slot managed 
memory.


  was:
The memory configs for table operators are now weight hints rather than 
absolute memory requirements. This means that the actual managed memory 
allocated may be smaller or larger than the config values, which depends on 
the actual slot managed memory capacity.
The configs include
 * table.exec.resource.external-buffer-memory, 
 * table.exec.resource.hash-agg.memory, 
 * table.exec.resource.hash-join.memory, and
 * table.exec.resource.sort.memory.
This ensures that operators do not over-allocate memory or leave available 
memory unallocated. It helps the task run more stably if the slot managed 
memory is limited, or more efficiently if there is adequate slot managed 
memory.



> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14058) FLIP-53 Fine Grained Operator Resource Management

2020-01-08 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-14058:

Release Note: 
The memory configs for table operators are now weight hints rather than 
absolute memory requirements. This means that the actual managed memory 
allocated may be smaller or larger than the config values, which depends on 
the actual slot managed memory capacity.
The affected configs are table.exec.resource.external-buffer-memory, 
table.exec.resource.hash-agg.memory, table.exec.resource.hash-join.memory, and 
table.exec.resource.sort.memory.
This ensures that operators do not over-allocate memory or leave available 
memory unallocated. It helps the task run more stably if the slot managed 
memory is limited, or more efficiently if there is adequate slot managed 
memory.


  was:
The memory configs for table operators are now weight hints rather than 
absolute memory requirements. This means that the actual managed memory 
allocated may be smaller or larger than the config values, which depends on 
the actual slot managed memory capacity.
The affected configs are {{table.exec.resource.external-buffer-memory}}, 
{{table.exec.resource.hash-agg.memory}}, 
{{table.exec.resource.hash-join.memory}}, and 
{{table.exec.resource.sort.memory}}.
This ensures that operators do not over-allocate memory or leave available 
memory unallocated. It helps the task run more stably if the slot managed 
memory is limited, or more efficiently if there is adequate slot managed 
memory.



> FLIP-53 Fine Grained Operator Resource Management
> -
>
> Key: FLINK-14058
> URL: https://issues.apache.org/jira/browse/FLINK-14058
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Xintong Song
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.10.0
>
>
> This is the umbrella issue of 'FLIP-53: Fine Grained Operator Resource 
> Management'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem when submitting PyFlink UDF j

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10772: [FLINK-15338][python] Cherry-pick 
NETTY#8955 and BEAM-9006(#10462) to fix the TM Metaspace memory leak problem 
when submitting PyFlink UDF jobs multiple times.
URL: https://github.com/apache/flink/pull/10772#issuecomment-571035348
 
 
   
   ## CI report:
   
   * ae56dd2d9b0d281ee691f3a7e7667dd960eda232 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143204260) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4112)
 
   * 9e73df7bdf3ce66c243004ad8fb0b78eacc3776f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143212537) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4118)
 
   * 58ef3c7a98a47f98cf15d69b3c373f8e6928175b Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/143519857) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4182)
 
   * 41b9d57a5a85793eb5036e8943605d0c1e089807 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/143523160) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4184)
 
   * 5dcfc4d530298996cf35ca64e4494572abad4106 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143527712) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4186)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10797: [FLINK-15172][table-blink] Optimize the operator algorithm to lazily allocate memory

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10797: [FLINK-15172][table-blink] Optimize 
the operator algorithm to lazily allocate memory
URL: https://github.com/apache/flink/pull/10797#issuecomment-571963932
 
 
   
   ## CI report:
   
   * e904ab518c4d3b26bf27a89c61b9df30af54b6ed Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143532669) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4187)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15448) Log host informations for TaskManager failures.

2020-01-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010537#comment-17010537
 ] 

Zhu Zhu commented on FLINK-15448:
-

[~victor-wong] I'm not quite sure what the benefit of having different 
ResourceID subclasses for different deployments would be?

> Log host informations for TaskManager failures.
> ---
>
> Key: FLINK-15448
> URL: https://issues.apache.org/jira/browse/FLINK-15448
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Assignee: Victor Wong
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> With Flink on Yarn, sometimes we ran into an exception like this:
> {code:java}
> java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id 
> container_  timed out.
> {code}
> We'd like to find out the host of the lost TaskManager so we can log into it 
> for more details, but we have to check the previous logs for the host 
> information, which is a little time-consuming.
> Maybe we can add more descriptive information to the ResourceID of Yarn 
> containers, e.g. "container_xxx@host_name:port_number".
> Here's the demo:
> {code:java}
> class ResourceID {
>   final String resourceId;
>   final String details;
> 
>   public ResourceID(String resourceId) {
>     this.resourceId = resourceId;
>     this.details = resourceId;
>   }
> 
>   public ResourceID(String resourceId, String details) {
>     this.resourceId = resourceId;
>     this.details = details;
>   }
> 
>   public String toString() {
>     return details;
>   }
> }
> 
> // in flink-yarn
> private void startTaskExecutorInContainer(Container container) {
>   final String containerIdStr = container.getId().toString();
>   final String containerDetail = container.getId() + "@" + 
> container.getNodeId();
>   final ResourceID resourceId = new ResourceID(containerIdStr, 
> containerDetail);
>   ...
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 commented on issue #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
wangyang0918 commented on issue #10777: [FLINK-10939][doc] Add documents for 
running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#issuecomment-571982116
 
 
   @aljoscha Yes, we need to add the `/zh` in `{{ site.baseurl }}`. The current 
behavior is right; I will not update it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] Support precision of TimeType

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] 
Support precision of TimeType
URL: https://github.com/apache/flink/pull/10654#issuecomment-568150600
 
 
   
   ## CI report:
   
   * 210649f07986a5c815bd39bb532a86f06ee412b8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141996849) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3838)
 
   * 817a1efa635ff3201c943fbf579432b4ef5b8c7b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142075142) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3849)
 
   * a942e8e4a3b9f458b4f903280ccef3ccf2364770 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142375460) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3937)
 
   * 383d6c9e55ecc74580f998b20ac60cf7990779d1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143376910) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4161)
 
   * a04621c54d57075fd9f747642b071c1d6a68767b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143499280) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4173)
 
   * cfa0072c7b98c2ffcc1aced9fbc148cc12755bad Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143509092) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4177)
 
   * 3bf4ff084432ced2e7da115aea1eb17cf925096b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143519873) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4183)
 
   * 140543d71dbc8456479ead209ef74fddb12bbd92 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2020-01-08 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-15355:
-
Fix Version/s: (was: 1.11.0)

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>  ... 11 more
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1746)
>  ... 20 more
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>  at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:326)
>  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:274)
>  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>  at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>  at 
> java.util.concurrent.CompletableFuture$Completion.run(Comp

[jira] [Assigned] (FLINK-14163) Execution#producedPartitions is possibly not assigned when used

2020-01-08 Thread Andrey Zagrebin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Zagrebin reassigned FLINK-14163:
---

Assignee: Yuan Mei

> Execution#producedPartitions is possibly not assigned when used
> ---
>
> Key: FLINK-14163
> URL: https://issues.apache.org/jira/browse/FLINK-14163
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhu Zhu
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.10.0
>
>
> Currently {{Execution#producedPartitions}} is assigned after the partitions 
> have completed their registration with the shuffle master in 
> {{Execution#registerProducedPartitions(...)}}.
> The partition registration is an async interface 
> ({{ShuffleMaster#registerPartitionWithProducer(...)}}), so 
> {{Execution#producedPartitions}} is possibly[1] not yet set when used. 
> Usages include:
> 1. deploying this task, so that the task may be deployed without its result 
> partitions assigned, and the job would hang (DefaultScheduler issue only, 
> since the legacy scheduler handled this case)
> 2. generating input descriptors for downstream tasks: 
> 3. retrieving the {{ResultPartitionID}} for partition releasing: 
> [1] If a user uses Flink's default shuffle master {{NettyShuffleMaster}}, it 
> is not problematic at the moment since it returns a completed future on 
> registration, so the process is effectively synchronous. However, if users 
> implement their own shuffle service in which 
> {{ShuffleMaster#registerPartitionWithProducer}} returns a pending future, it 
> can be a problem. This is possible since customizable shuffle services have 
> been open to users since 1.9 (via the config "shuffle-service-factory.class").
> To avoid such issues, we may either 
> 1. fix all the usages of {{Execution#producedPartitions}} with regard to the 
> async assignment, or 
> 2. change {{ShuffleMaster#registerPartitionWithProducer(...)}} to a sync 
> interface
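
A minimal, self-contained illustration of the hazard (simplified stand-ins, 
not Flink's classes): a callback on an already-completed future runs 
synchronously, while a callback on a pending future has not run yet, so a 
reader may observe an unset field:
{code:java}
import java.util.concurrent.CompletableFuture;

public class AsyncAssignDemo {
    static String producedPartitions; // assigned inside a future callback

    public static void main(String[] args) {
        // Completed future: the callback runs synchronously on this thread,
        // so the field is set before anyone reads it.
        CompletableFuture.completedFuture("partition-1")
                .thenAccept(p -> producedPartitions = p);
        System.out.println(producedPartitions); // prints partition-1

        // Pending future: the callback has not run yet, so a reader that
        // assumes the field is assigned sees a stale (or null) value.
        CompletableFuture<String> pending = new CompletableFuture<>();
        pending.thenAccept(p -> producedPartitions = p);
        System.out.println(producedPartitions); // still partition-1
        pending.complete("partition-2"); // the callback runs only now
    }
}
{code}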



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15497) Streaming TopN operator doesn't reduce outputs when rank number is not required

2020-01-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15497:
---

Assignee: Jing Zhang

> Streaming TopN operator doesn't reduce outputs when rank number is not 
> required 
> 
>
> Key: FLINK-15497
> URL: https://issues.apache.org/jira/browse/FLINK-15497
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.1
>Reporter: Kurt Young
>Assignee: Jing Zhang
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> As we described in the doc: 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/sql.html#top-n]
> when the rank number is not required, we can reduce some output, such as 
> unnecessary retract messages. 
> Here is an example which can re-produce:
> {code:java}
> val data = List(
>   ("aaa", 97.0, 200.0),
>   ("bbb", 67.0, 200.0),
>   ("bbb", 162.0, 200.0)
> )
> val ds = failingDataSource(data).toTable(tEnv, 'guid, 'a, 'b)
> tEnv.registerTable("T", ds)
> val aggregatedTable = tEnv.sqlQuery(
>   """
> |select guid,
> |sum(a) as reached_score,
> |sum(b) as max_score,
> |sum(a) / sum(b) as score
> |from T group by guid
> |""".stripMargin
> )
> tEnv.registerTable("T2", aggregatedTable)
> val sql =
>   """
> |SELECT guid, reached_score, max_score, score
> |FROM (
> |  SELECT *,
> |  ROW_NUMBER() OVER (ORDER BY score DESC) as rank_num
> |  FROM T2)
> |WHERE rank_num <= 5
>   """.stripMargin
> val sink = new TestingRetractSink
> tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
> env.execute()
> {code}
> In this case, the output is:
> {code:java}
> (true,aaa,97.0,200.0,0.485)
> (true,bbb,67.0,200.0,0.335) 
> (false,bbb,67.0,200.0,0.335) 
> (true,bbb,229.0,400.0,0.5725) 
> (false,aaa,97.0,200.0,0.485) 
> (true,aaa,97.0,200.0,0.485)
> {code}
> But the last 2 messages are unnecessary. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15406) RocksDB savepoints with heap timers cannot be restored by non-process functions

2020-01-08 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman updated FLINK-15406:
-
Affects Version/s: (was: 1.9.1)

> RocksDB savepoints with heap timers cannot be restored by non-process 
> functions
> ---
>
> Key: FLINK-15406
> URL: https://issues.apache.org/jira/browse/FLINK-15406
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Darcy Lin
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
> Attachments: CountWord.java
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A savepoint written by the State Processor API can't be restored by a map or 
> flatmap function, but it can be restored by a KeyedProcessFunction.  
>  The following is the error message:
> {code:java}
> java.lang.Exception: Could not write timer service of Flat Map -> Map -> 
> Sink: device_first_user_create (1/8) to checkpoint state 
> stream.java.lang.Exception: Could not write timer service of Flat Map -> Map 
> -> Sink: device_first_user_create (1/8) to checkpoint state stream. at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:466)
>  at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:89)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:399)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1282)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1216)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:872)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:777)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:708)
>  at 
> org.apache.flink.streaming.runtime.io.CheckpointBarrierHandler.notifyCheckpoint(CheckpointBarrierHandler.java:88)
>  at 
> org.apache.flink.streaming.runtime.io.CheckpointBarrierAligner.processBarrier(CheckpointBarrierAligner.java:177)
>  at 
> org.apache.flink.streaming.runtime.io.CheckpointedInputGate.pollNext(CheckpointedInputGate.java:155)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.pollNextNullable(StreamTaskNetworkInput.java:102)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.pollNextNullable(StreamTaskNetworkInput.java:47)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:135)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301) 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at 
> java.lang.Thread.run(Thread.java:748)Caused by: 
> java.lang.NullPointerException at 
> org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:58) at 
> org.apache.flink.streaming.api.operators.InternalTimersSnapshot.(InternalTimersSnapshot.java:52)
>  at 
> org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.snapshotTimersForKeyGroup(InternalTimerServiceImpl.java:291)
>  at 
> org.apache.flink.streaming.api.operators.InternalTimerServiceSerializationProxy.write(InternalTimerServiceSerializationProxy.java:98)
>  at 
> org.apache.flink.streaming.api.operators.InternalTimeServiceManager.snapshotStateForKeyGroup(InternalTimeServiceManager.java:139)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:462)
>  ... 19 more{code}
>   
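
A workaround sketch in line with the report (simplified types and state name; 
not the actual job from CountWord.java): route the keyed state through a 
KeyedProcessFunction instead of a plain FlatMapFunction, which per the report 
restores successfully:
{code:java}
// Hypothetical workaround sketch, not the actual CountWord.java job.
// Assumes `words` is a DataStream<String> and that the state name matches
// the one used when the savepoint was written.
DataStream<Tuple2<String, Long>> counted = words
    .keyBy(w -> w)
    .process(new KeyedProcessFunction<String, String, Tuple2<String, Long>>() {
        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            count = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("count", Long.class));
        }

        @Override
        public void processElement(String word, Context ctx,
                Collector<Tuple2<String, Long>> out) throws Exception {
            Long current = count.value();
            long updated = (current == null ? 0L : current) + 1L;
            count.update(updated);
            out.collect(Tuple2.of(word, updated));
        }
    });
{code}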



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sjwiesman commented on issue #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
sjwiesman commented on issue #10777: [FLINK-10939][doc] Add documents for 
running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#issuecomment-571987089
 
 
   @aljoscha I am tired, please ignore. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-10939) Add documents for natively running Flink session cluster on k8s

2020-01-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-10939.

Resolution: Fixed

Added on master in 0a1ac45d8b2d425c0c538b1e97729ab77024e7ad
Added on release-1.10 in fb32056e981388b0618f72796a839b6d3eba78b3

> Add documents for natively running Flink session cluster on k8s
> ---
>
> Key: FLINK-10939
> URL: https://issues.apache.org/jira/browse/FLINK-10939
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: JIN SUN
>Assignee: Yang Wang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] aljoscha closed pull request #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
aljoscha closed pull request #10777: [FLINK-10939][doc] Add documents for 
running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] aljoscha commented on issue #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
aljoscha commented on issue #10777: [FLINK-10939][doc] Add documents for 
running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#issuecomment-571987609
 
 
   😅:coffee: 
   
   I merged this now. Thanks a lot @wangyang0918! And also thanks @sjwiesman 
for the adjustments.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15315) Add test case for rest

2020-01-08 Thread Gary Yao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010555#comment-17010555
 ] 

Gary Yao commented on FLINK-15315:
--

[~lining] ping

> Add test case for rest
> --
>
> Key: FLINK-15315
> URL: https://issues.apache.org/jira/browse/FLINK-15315
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Tests
>Reporter: lining
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15424) Make all AppendingState#add respect the java doc

2020-01-08 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-15424:
-
Component/s: API / DataStream

> Make all AppendingState#add respect the java doc
> 
>
> Key: FLINK-15424
> URL: https://issues.apache.org/jira/browse/FLINK-15424
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / State Backends
>Affects Versions: 1.8.3, 1.9.1
>Reporter: Congxian Qiu(klion26)
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we have a javadoc note in 
> {{[AppendingState#add|https://github.com/apache/flink/blob/52fdee1d0c7af24d25c51caa073e29f11b07210b/flink-core/src/main/java/org/apache/flink/api/common/state/AppendingState.java#L63]}}
> {code:java}
>  If null is passed in, the state value will remain unchanged.{code}
> but currently the implementations do not respect this. Take 
> {{HeapReducingState}} as an example: it clears the state if the passed 
> parameter is null.
> {code:java}
> @Override
> public void add(V value) throws IOException {
>   if (value == null) {
>     clear();
>     return;
>   }
>   try {
>     stateTable.transform(currentNamespace, value, reduceTransformation);
>   } catch (Exception e) {
>     throw new IOException("Exception while applying ReduceFunction in 
> reducing state", e);
>   }
> }
> {code}
> But {{RocksDBReducingState}} does not clear the state, and puts the null 
> value into the state if the serializer can serialize null.
> {code:java}
> @Override
> public void add(V value) throws Exception {
>   byte[] key = getKeyBytes();
>   V oldValue = getInternal(key);
>   V newValue = oldValue == null ? value : reduceFunction.reduce(oldValue, 
> value);
>   updateInternal(key, newValue);
> }
> {code}
> This issue wants to make all {{AppendingState}} implementations respect the 
> javadoc of {{AppendingState}}, and return directly if the passed-in 
> parameter is null.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] aljoscha commented on issue #10796: [FLINK-15424][StateBackend] Make AppendintState#add refuse to add null element

2020-01-08 Thread GitBox
aljoscha commented on issue #10796: [FLINK-15424][StateBackend] Make 
AppendintState#add refuse to add null element
URL: https://github.com/apache/flink/pull/10796#issuecomment-571988186
 
 
   There's a CI failure.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15461) Add Stream SQL end2end test to cover connecting to external systems

2020-01-08 Thread Gary Yao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010558#comment-17010558
 ] 

Gary Yao commented on FLINK-15461:
--

[~Leonard Xu] Are you sure that fix version should be 1.11, and not 1.10?

> Add Stream SQL end2end test to cover connecting to external systems
> ---
>
> Key: FLINK-15461
> URL: https://issues.apache.org/jira/browse/FLINK-15461
> Project: Flink
>  Issue Type: Test
>  Components: Table SQL / API, Tests
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> We enhanced Flink SQL in release 1.10, but we lack tests/examples that cover 
> connecting to external systems, e.g. reading from Kafka, joining a 
> MySQL/HBase dimension table, and then sinking to Kafka/HBase via SQL.
> This issue aims at addressing the above scenarios.
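
One shape such a test could take (a sketch only; the table names and their 
registration DDL are assumptions, not part of this ticket):
{code:java}
// Sketch of the scenario under test: read from a Kafka-backed table, enrich
// it with a lookup (temporal) join against a MySQL dimension table, and
// write the result to an HBase-backed sink table. All table names are
// illustrative and assumed to be registered in the catalog beforehand.
tableEnv.sqlUpdate(
    "INSERT INTO hbase_sink " +
    "SELECT o.order_id, o.amount, d.region " +
    "FROM kafka_orders AS o " +
    "JOIN mysql_dim FOR SYSTEM_TIME AS OF o.proctime AS d " +
    "ON o.user_id = d.user_id");
{code}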



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pnowojski commented on a change in pull request #10794: [FLINK-15490][kafka][test-stability] Enable idempotence producing in …

2020-01-08 Thread GitBox
pnowojski commented on a change in pull request #10794: 
[FLINK-15490][kafka][test-stability] Enable idempotence producing in …
URL: https://github.com/apache/flink/pull/10794#discussion_r364164729
 
 

 ##
 File path: 
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/testutils/DataGenerators.java
 ##
 @@ -104,6 +104,8 @@ public void cancel() {
if (secureProps != null) {
props.putAll(testServer.getSecureProperties());
}
+   // Ensure the producer enables idempotence.
+   props.putAll(testServer.getIdempotentProducerConfig());
 
 Review comment:
   Is this the only place where we should be using `enable.idempotence`? 
   
   Do I remember correctly that `enable.idempotence` is 
automatically/effectively set to true when using a transactional Kafka 
producer?
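   
   For reference, a minimal sketch of enabling idempotence on a plain Kafka 
producer (standard Kafka client config keys; the broker address and 
serializers are assumptions):
   
   ```java
   import java.util.Properties;
   import org.apache.kafka.clients.producer.KafkaProducer;
   import org.apache.kafka.clients.producer.ProducerConfig;
   
   Properties props = new Properties();
   props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
   props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
           "org.apache.kafka.common.serialization.StringSerializer");
   props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
           "org.apache.kafka.common.serialization.StringSerializer");
   // Idempotence deduplicates broker-side retries per partition. Note that
   // configuring a transactional.id enables transactions and implicitly
   // turns idempotence on as well.
   props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
   KafkaProducer<String, String> producer = new KafkaProducer<>(props);
   ```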


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-playgrounds] patricklucas commented on issue #7: Add subnet setting for Proxy user

2020-01-08 Thread GitBox
patricklucas commented on issue #7: Add subnet setting for Proxy user
URL: https://github.com/apache/flink-playgrounds/pull/7#issuecomment-571991159
 
 
   @tsjsdbd I think this issue doesn't affect that many people, and picking 
another arbitrary IP range (like `10.103/16`) could possibly break things for 
other people who happen to already use it.
   
   Have you tried setting the `default-address-pools` setting on your Docker 
daemon? (see [this 
comment](https://github.com/docker/compose/issues/4336#issuecomment-457326123)) 
I just tried it on my machine (Docker Desktop 2.1.0.5 on Mac) and it worked 
fine.
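   
   For anyone who does hit the conflict, a `daemon.json` sketch (the 
`default-address-pools` key is the one referenced above; the address range 
here is an arbitrary assumption, so pick a block that is free on your 
network):
   
   ```json
   {
     "default-address-pools": [
       { "base": "10.200.0.0/16", "size": 24 }
     ]
   }
   ```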


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15150) ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange failed on Travis

2020-01-08 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-15150:
-
Priority: Critical  (was: Major)

> ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange 
> failed on Travis
> 
>
> Key: FLINK-15150
> URL: https://issues.apache.org/jira/browse/FLINK-15150
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Congxian Qiu(klion26)
>Priority: Critical
> Fix For: 1.10.0
>
>
>  
> 06:37:18.423 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 14.014 s <<< FAILURE! - in 
> org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase
> 06:37:18.423 [ERROR] 
> testJobExecutionOnClusterWithLeaderChange(org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase)
>  Time elapsed: 14.001 s <<< ERROR!
> java.util.concurrent.ExecutionException: 
> org.apache.flink.util.FlinkException: JobMaster has been shut down.
>  at 
> org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase.lambda$testJobExecutionOnClusterWithLeaderChange$1(ZooKeeperLeaderElectionITCase.java:131)
>  at 
> org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange(ZooKeeperLeaderElectionITCase.java:131)
> Caused by: org.apache.flink.util.FlinkException: JobMaster has been shut 
> down.
>  
> [https://travis-ci.com/flink-ci/flink/jobs/264972218]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15150) ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange failed on Travis

2020-01-08 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-15150:
-
Fix Version/s: 1.10.0

> ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange 
> failed on Travis
> 
>
> Key: FLINK-15150
> URL: https://issues.apache.org/jira/browse/FLINK-15150
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Congxian Qiu(klion26)
>Priority: Major
> Fix For: 1.10.0
>
>
>  
> 06:37:18.423 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 14.014 s <<< FAILURE! - in 
> org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase
> 06:37:18.423 [ERROR] 
> testJobExecutionOnClusterWithLeaderChange(org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase)
>  Time elapsed: 14.001 s <<< ERROR!
> java.util.concurrent.ExecutionException: 
> org.apache.flink.util.FlinkException: JobMaster has been shut down.
>  at 
> org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase.lambda$testJobExecutionOnClusterWithLeaderChange$1(ZooKeeperLeaderElectionITCase.java:131)
>  at 
> org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange(ZooKeeperLeaderElectionITCase.java:131)
> Caused by: org.apache.flink.util.FlinkException: JobMaster has been shut 
> down.
>  
> [https://travis-ci.com/flink-ci/flink/jobs/264972218]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14369) KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis

2020-01-08 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-14369:
-
Fix Version/s: 1.10.0

> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator
>  fails on Travis
> --
>
> Key: FLINK-14369
> URL: https://issues.apache.org/jira/browse/FLINK-14369
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The 
> {{KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator}}
>  fails on Travis with 
> {code}
> Test 
> testOneToOneAtLeastOnceCustomOperator(org.apache.flink.streaming.connectors.kafka.KafkaProducerAtLeastOnceITCase)
>  failed with:
> java.lang.AssertionError: Create test topic : oneToOneTopicCustomOperator 
> failed, org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
> for a node assignment.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:180)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:115)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:196)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:231)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator(KafkaProducerTestBase.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> https://api.travis-ci.com/v3/job/244297223/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14402) KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator:214->KafkaProducerTestBase.testOneToOneAtLeastOnce failed on Travis

2020-01-08 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao closed FLINK-14402.

Resolution: Duplicate

> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator:214->KafkaProducerTestBase.testOneToOneAtLeastOnce
>  failed on Travis
> --
>
> Key: FLINK-14402
> URL: https://issues.apache.org/jira/browse/FLINK-14402
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: vinoyang
>Priority: Major
>
> {code:java}
> 14:34:45.348 [ERROR]   
> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testCustomPartitioning:112->KafkaTestBase.createTestTopic:196
>  Create test topic : defaultTopic failed, 
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node 
> assignment.
> 14:34:45.348 [ERROR]   
> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator:214->KafkaProducerTestBase.testOneToOneAtLeastOnce:231->KafkaTestBase.createTestTopic:196
>  Create test topic : oneToOneTopicCustomOperator failed, 
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node 
> assignment.
> 14:34:45.348 [ERROR]   
> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink:206->KafkaProducerTestBase.testOneToOneAtLeastOnce:283
>  Job should fail!
> {code}
> more details: https://api.travis-ci.com/v3/job/245838697/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15511) export org.apache.flink.table.api.TableException when flink 1.10 connect hive

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010562#comment-17010562
 ] 

chenchencc commented on FLINK-15511:


Hi, [~lirui]

Can I avoid using stream mode?

I save the result table to a text file, but I hit this problem:

Caused by: org.apache.hadoop.ipc.RemoteException: /user/chenchao1/tmp/txt13 
already exists as a directory

scripts:

val t = tableEnv.sqlQuery("select id,importdatetime,hrowkey from 
orderparent_test2 where id = 'A21204170176'").toAppendStream[Row]
t.writeAsText("hdfs:///user/chenchao1/tmp/txt13")
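
A possible workaround, assuming it is acceptable to overwrite the target path: 
pass an explicit write mode to the sink (a minimal sketch):

{code:scala}
import org.apache.flink.core.fs.FileSystem.WriteMode

// Overwrite the existing file/directory instead of failing on it.
t.writeAsText("hdfs:///user/chenchao1/tmp/txt13", WriteMode.OVERWRITE)
{code}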

> export org.apache.flink.table.api.TableException when flink 1.10 connect hive 
> --
>
> Key: FLINK-15511
> URL: https://issues.apache.org/jira/browse/FLINK-15511
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: flink master
> hive 1.2.1
>  
>Reporter: chenchencc
>Priority: Major
>  Labels: flink, hive
>
> *run scripts:*
> bin/start-scala-shell.sh yarn -qu bi -jm 1024m -tm 2048m
> import org.apache.flink.table.catalog.hive.HiveCatalog
>  val name = "myhive"
>  val defaultDatabase = "test"
>  val hiveConfDir = "/etc/hive/conf"
>  val version = "1.2.1" // or 1.2.1 2.3.4
>  val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
>  stenv.registerCatalog("myhive", hive)
>  stenv.useCatalog("myhive")
>  stenv.listTables
>  stenv.sqlQuery("select * from gsp_test3").toAppendStream[Row].print
>  *gsp_test3 table columns:*
> id int 
> name string
> *gsp_test3  table  storage:*
> txt file
>  
> *scripts run message*
> scala> import org.apache.flink.table.catalog.hive.HiveCatalog
>  import org.apache.flink.table.catalog.hive.HiveCatalog
> scala> val name = "myhive"
>  name: String = myhive
> scala> val defaultDatabase = "test"
>  defaultDatabase: String = test
> scala> val hiveConfDir = "/etc/hive/conf"
>  hiveConfDir: String = /etc/hive/conf
> scala> val version = "1.2.1" // or 1.2.1 2.3.4
>  version: String = 1.2.1
> scala> val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Setting hive conf dir as 
> /etc/hive/conf
>  20/01/08 14:36:10 WARN conf.HiveConf: HiveConf of name 
> hive.server2.enable.impersonation does not exist
>  20/01/08 14:36:10 WARN conf.HiveConf: HiveConf of name 
> hive.mapred.supports.subdirectories does not exist
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Created HiveCatalog 'myhive'
>  hive: org.apache.flink.table.catalog.hive.HiveCatalog = 
> org.apache.flink.table.catalog.hive.HiveCatalog@60729135
> scala> stenv.registerCatalog("myhive", hive)
>  20/01/08 14:36:10 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://bgnode4:9083
>  20/01/08 14:36:10 INFO hive.metastore: Connected to metastore.
>  20/01/08 14:36:10 INFO hive.HiveCatalog: Connected to Hive metastore
> scala> stenv.useCatalog("myhive")
>  20/01/08 14:36:10 INFO catalog.CatalogManager: Set the current default 
> catalog as [myhive] and the current default database as [test].
> scala> stenv.listTables
>  res6: Array[String] = Array(amazonproductscore_test, 
> amazonproductscore_test_tmp, amazonshopmanagerkpi, bucketed_user, 
> bulkload_spark_gross_profit_items_zcm, dim_date_test, 
> dw_gross_profit_items_phoenix_test, dw_gross_profit_items_phoenix_test2, 
> dw_gross_profit_items_phoenix_test3, dw_gross_profit_items_phoenix_test4, 
> dw_gross_profit_items_phoenix_test5, gsp_test12, gsp_test2, gsp_test3, 
> hive_phoenix, ni, orderparent_test, orderparent_test2, 
> phoenix_orderparent_id_put_tb, phoenix_orderparent_id_put_tb2, 
> phoenix_orderparent_id_tb, productdailysales, result20190404, 
> result20190404_2, result20190404_3, result20190404_4_5_9, result20190404_5, 
> result20190404vat, result20190404vat11, result20190404vat12, 
> result20190404vat13, result20190404vat5, result20190404vat6_2, ...
>  scala> stenv.sqlQuery("select * from gsp_test3").toAppendStream[Row].print
>  20/01/08 14:36:13 INFO typeutils.TypeExtractor: class 
> org.apache.flink.types.Row does not contain a getter for field fields
>  20/01/08 14:36:13 INFO typeutils.TypeExtractor: class 
> org.apache.flink.types.Row does not contain a setter for field fields
>  20/01/08 14:36:13 INFO typeutils.TypeExtractor: Class class 
> org.apache.flink.types.Row cannot be used as a POJO type because not all 
> fields are valid POJO fields, and must be processed as GenericType. Please 
> read the Flink documentation on "Data Types & Serialization" for details of 
> the effect on performance.
>  20/01/08 14:36:13 WARN conf.HiveConf: HiveConf of name 
> hive.server2.enable.impersonation does not exist
>  20/01/08 14:36:13 WARN conf.HiveConf: HiveConf of name 
> hive.mapred.supports

[GitHub] [flink] flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] Support precision of TimeType

2020-01-08 Thread GitBox
flinkbot edited a comment on issue #10654: [FLINK-14081][table-planner-blink] 
Support precision of TimeType
URL: https://github.com/apache/flink/pull/10654#issuecomment-568150600
 
 
   
   ## CI report:
   
   * 210649f07986a5c815bd39bb532a86f06ee412b8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141996849) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3838)
 
   * 817a1efa635ff3201c943fbf579432b4ef5b8c7b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142075142) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3849)
 
   * a942e8e4a3b9f458b4f903280ccef3ccf2364770 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142375460) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3937)
 
   * 383d6c9e55ecc74580f998b20ac60cf7990779d1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143376910) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4161)
 
   * a04621c54d57075fd9f747642b071c1d6a68767b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143499280) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4173)
 
   * cfa0072c7b98c2ffcc1aced9fbc148cc12755bad Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143509092) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4177)
 
   * 3bf4ff084432ced2e7da115aea1eb17cf925096b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143519873) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4183)
 
   * 140543d71dbc8456479ead209ef74fddb12bbd92 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/143537752) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4189)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on issue #10502: [FLINK-14825][state-processor-api][docs] Rework state processor api documentation

2020-01-08 Thread GitBox
sjwiesman commented on issue #10502: [FLINK-14825][state-processor-api][docs] 
Rework state processor api documentation
URL: https://github.com/apache/flink/pull/10502#issuecomment-571996139
 
 
   cc @NicoK 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14742) Unstable tests TaskExecutorTest#testSubmitTaskBeforeAcceptSlot

2020-01-08 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-14742:
-
Fix Version/s: 1.10.0

> Unstable tests TaskExecutorTest#testSubmitTaskBeforeAcceptSlot
> --
>
> Key: FLINK-14742
> URL: https://issues.apache.org/jira/browse/FLINK-14742
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zili Chen
>Priority: Critical
> Fix For: 1.10.0
>
>
> deadlock.
> {code}
> "main" #1 prio=5 os_prio=0 tid=0x7f1f8800b800 nid=0x356 waiting on 
> condition [0x7f1f8e65c000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x86e9e9c0> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorTest.testSubmitTaskBeforeAcceptSlot(TaskExecutorTest.java:1108)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {code}
> full log https://api.travis-ci.org/v3/job/611275566/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yanghua commented on a change in pull request #10777: [FLINK-10939][doc] Add documents for running Flink cluster natively on Kubernetes

2020-01-08 Thread GitBox
yanghua commented on a change in pull request #10777: [FLINK-10939][doc] Add 
documents for running Flink cluster natively on Kubernetes
URL: https://github.com/apache/flink/pull/10777#discussion_r364170639
 
 

 ##
 File path: docs/ops/deployment/native_kubernetes.md
 ##
 @@ -0,0 +1,199 @@
+---
+title:  "Native Kubernetes Setup"
+nav-title: Native Kubernetes
+nav-parent_id: deployment
+is_beta: true
+nav-pos: 7
+---
+
+
+This page describes how to deploy a Flink session cluster natively on 
[Kubernetes](https://kubernetes.io).
+
+* This will be replaced by the TOC
+{:toc}
+
+
+Flink's native Kubernetes integration is still experimental. There may be 
changes in the configuration and CLI flags in later versions. Job clusters are 
not yet supported.
+
+
+## Requirements
+
+- Kubernetes 1.9 or above.
+- KubeConfig with permission to list, create, and delete pods and services, 
configurable via `~/.kube/config`. You can verify permissions by running 
`kubectl auth can-i <list|create|edit|delete> pods`.
+- Kubernetes DNS enabled.
+- A service account with [RBAC](#rbac) permissions to create and delete pods.
+
+## Flink Kubernetes Session
+
+### Start Flink Session
+
+Follow these instructions to start a Flink Session within your Kubernetes 
cluster.
+
+A session will start all required Flink services (JobManager and TaskManagers) 
so that you can submit programs to the cluster.
+Note that you can run multiple programs per session.
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh
+{% endhighlight %}
+
+All the Kubernetes configuration options can be found in our [configuration 
guide]({{ site.baseurl }}/ops/config.html#kubernetes).
+
+**Example**: Issue the following command to start a session cluster with 4 GB 
of memory and 2 CPUs with 4 slots per TaskManager:
+
+{% highlight bash %}
+./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=<ClusterId> \
+  -Dtaskmanager.memory.process.size=4096m \
+  -Dkubernetes.taskmanager.cpu=2 \
+  -Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}
+
+The system will use the configuration in `conf/flink-conf.yaml`.
+Please follow our [configuration guide]({{ site.baseurl }}/ops/config.html) if 
you want to change something.
+
+If you do not specify a particular name for your session by 
`kubernetes.cluster-id`, the Flink client will generate a UUID name. 
+
+### Submitting jobs to an existing Session
+
+Use the following command to submit a Flink Job to the Kubernetes cluster.
+
+{% highlight bash %}
+$ ./bin/flink run -d -e kubernetes-session -Dkubernetes.cluster-id=<ClusterId> 
examples/streaming/WindowJoin.jar
+{% endhighlight %}
+
+### Accessing Job Manager UI
+
+There are several ways to expose a Service onto an external (outside of your 
cluster) IP address.
+This can be configured using `kubernetes.service.exposed.type`.
+
+- `ClusterIP`: Exposes the service on a cluster-internal IP.
+The Service is only reachable within the cluster. If you want to access the 
Job Manager ui or submit job to the existing session, you need to start a local 
proxy.
 
 Review comment:
   ui -> UI and `submit job` -> `submit a job`?
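   
   As an aside, the "local proxy" mentioned in this section is typically just 
a port-forward. An illustrative sketch, assuming the session was started with 
`-Dkubernetes.cluster-id=<ClusterId>` (check `kubectl get services` for the 
actual service name):
   
   {% highlight bash %}
   # Forward the JobManager REST/UI port to localhost:8081.
   kubectl port-forward service/<ClusterId> 8081
   {% endhighlight %}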


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] aljoscha closed pull request #6200: [FLINK-9641] [streaming-connectors] Flink pulsar source connector

2020-01-08 Thread GitBox
aljoscha closed pull request #6200: [FLINK-9641] [streaming-connectors] Flink 
pulsar source connector
URL: https://github.com/apache/flink/pull/6200
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] aljoscha commented on issue #6200: [FLINK-9641] [streaming-connectors] Flink pulsar source connector

2020-01-08 Thread GitBox
aljoscha commented on issue #6200: [FLINK-9641] [streaming-connectors] Flink 
pulsar source connector
URL: https://github.com/apache/flink/pull/6200#issuecomment-571997030
 
 
   I'm sorry but I'm closing this in favour of the newer efforts in 
https://issues.apache.org/jira/browse/FLINK-14146.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

