[GitHub] [flink] flinkbot edited a comment on pull request #12275: [FLINK-16021][table-common] DescriptorProperties.putTableSchema does …

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12275:
URL: https://github.com/apache/flink/pull/12275#issuecomment-631915106


   
   ## CI report:
   
   * bc6de4913b48c82a220eda620fb58667e7e93642 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1983)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] tzulitai closed pull request #115: [FLINK-17518] [e2e] Add remote module E2E

2020-05-21 Thread GitBox


tzulitai closed pull request #115:
URL: https://github.com/apache/flink-statefun/pull/115


   







[GitHub] [flink] klion26 commented on a change in pull request #12189: [FLINK-17376][API/DataStream]Deprecated methods and related code updated

2020-05-21 Thread GitBox


klion26 commented on a change in pull request #12189:
URL: https://github.com/apache/flink/pull/12189#discussion_r428476861



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/state/DefaultOperatorStateBackend.java
##
@@ -219,31 +219,6 @@ public void dispose() {
		return getListState(stateDescriptor, OperatorStateHandle.Mode.UNION);
	}

-	// ------------------------------------------------------------------------
-	//  Deprecated state access methods
-	// ------------------------------------------------------------------------
-
-	/**
-	 * @deprecated This was deprecated as part of a refinement to the function names.
-	 * Please use {@link #getListState(ListStateDescriptor)} instead.
-	 */
-	@Deprecated
-	@Override
-	public <S> ListState<S> getOperatorState(ListStateDescriptor<S> stateDescriptor) throws Exception {
-		return getListState(stateDescriptor);
-	}
-
-	/**
-	 * @deprecated Using Java serialization for persisting state is not encouraged.
-	 * Please use {@link #getListState(ListStateDescriptor)} instead.
-	 */
-	@SuppressWarnings("unchecked")
-	@Deprecated
-	@Override
-	public <S extends Serializable> ListState<S> getSerializableListState(String stateName) throws Exception {

Review comment:
   Maybe we can add an auxiliary function whose implementation is the same as this one, so that we can migrate easily?
   The `getSerializableListState()` will always use `deprecatedDefaultJavaSerializer`.
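   A standalone sketch of the pattern being suggested (class and method names below are invented for illustration and are not actual Flink code): the deprecated method becomes a thin shim that delegates to a non-deprecated helper with the same body, so callers get a drop-in migration target and there is only one implementation to maintain.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the suggested refactoring; not actual Flink code.
class StateAccessSketch {
    private final List<String> store = new ArrayList<>();

    // New, preferred entry point.
    List<String> getListState(String name) {
        store.add(name);
        return store;
    }

    // Deprecated method kept only as a delegating shim: same body as before,
    // and migrating callers simply rename the call site.
    @Deprecated
    List<String> getSerializableListState(String name) {
        return getListState(name);
    }
}
```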









[jira] [Closed] (FLINK-17518) Add HTTP-based request reply protocol E2E test for Stateful Functions

2020-05-21 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-17518.
---
Resolution: Fixed

master - 2abbff1b0036fa3099cccdb13330e97078bd4d97
release-2.0 - cc8cbe4beec3cba981fbf91b60af494405839158

> Add HTTP-based request reply protocol E2E test for Stateful Functions
> -
>
> Key: FLINK-17518
> URL: https://issues.apache.org/jira/browse/FLINK-17518
> Project: Flink
>  Issue Type: Sub-task
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: statefun-2.0.1, statefun-2.1.0
>
>
> The E2E test should consist of a standalone deployed containerized remote 
> function, e.g. using the Python SDK + Flask, as well as a Flink Stateful 
> Functions cluster deployed using the {{StatefulFunctionsAppsContainers}} 
> utility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhengcanbin commented on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


zhengcanbin commented on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-631927890


   > @zhengcanbin Thanks a lot for creating this PR. I am afraid this PR could 
not work because the new `kubernetes-client` version introduces some additional 
dependencies (e.g. `com.fasterxml.jackson.datatype:jackson-datatype-jsr310`). 
Could you please check that?
   > 
   > ```
   > 2020-05-18 14:22:19,882 INFO  
org.apache.flink.client.deployment.DefaultClusterClientServiceLoader [] - Could 
not load factory due to missing dependencies.
   > Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/flink/kubernetes/shaded/com/fasterxml/jackson/datatype/jsr310/JavaTimeModule
   >at 
io.fabric8.kubernetes.client.internal.KubeConfigUtils.parseConfigFromString(KubeConfigUtils.java:44)
   >at 
io.fabric8.kubernetes.client.Config.loadFromKubeconfig(Config.java:505)
   >at io.fabric8.kubernetes.client.Config.tryKubeConfig(Config.java:491)
   >at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:230)
   >at io.fabric8.kubernetes.client.Config.(Config.java:214)
   >at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:225)
   >at 
org.apache.flink.kubernetes.kubeclient.KubeClientFactory.fromConfiguration(KubeClientFactory.java:69)
   >at 
org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:58)
   >at 
org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:39)
   >at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.run(KubernetesSessionCli.java:95)
   >at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.lambda$main$0(KubernetesSessionCli.java:185)
   >at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
   >at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.main(KubernetesSessionCli.java:185)
   > Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.kubernetes.shaded.com.fasterxml.jackson.datatype.jsr310.JavaTimeModule
   >at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
   >at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
   >at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
   >at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
   >... 13 more
   > ```
   
   True, updated.







[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 906be78b0943a61b70d4624b95bad5479c9f3d92 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1896)
 
   * 65c86e560750386063796783054ed5ea7ea42881 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-16402) Alter table fails on Hive catalog

2020-05-21 Thread Matyas Orhidi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112891#comment-17112891
 ] 

Matyas Orhidi commented on FLINK-16402:
---

Hi [~lirui] [~gyfora], the error message comes from:
https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetastoreDefaultTransformer.java

The problem is caused by an automatic managed->external conversion on table 
creation for non-ACID tables:
https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetastoreDefaultTransformer.java#L561

HMS is 3.1 plus some cherry-picks.

> Alter table fails on Hive catalog
> -
>
> Key: FLINK-16402
> URL: https://issues.apache.org/jira/browse/FLINK-16402
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Gyula Fora
>Priority: Major
>
> Hive version: 3.1.0
> I get the following error when trying to execute a simple alter table 
> statement:
>  
> {code:java}
> ALTER TABLE ItemTransactions 
> SET (
>  'connector.properties.bootstrap.servers' = 'gyula-1.gce.cloudera.com:9092'
> );
> {code}
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Could not execute ALTER 
> TABLE hive.default.ItemTransactions SET 
> (connector.properties.zookeeper.connect: [dummy], connector.version: 
> [universal], format.schema: [ROW(transactionId LONG, ts LONG, itemId STRING, 
> quantity INT)], connector.topic: [transaction.log.1], is_generic: [true], 
> connector.startup-mode: [earliest-offset], connector.type: [kafka], 
> connector.properties.bootstrap.servers: [gyula-1.gce.cloudera.com:9092], 
> connector.properties.group.id: [testGroup], format.type: [json])Caused by: 
> org.apache.flink.table.api.TableException: Could not execute ALTER TABLE 
> hive.default.ItemTransactions SET (connector.properties.zookeeper.connect: 
> [dummy], connector.version: [universal], format.schema: [ROW(transactionId 
> LONG, ts LONG, itemId STRING, quantity INT)], connector.topic: 
> [transaction.log.1], is_generic: [true], connector.startup-mode: 
> [earliest-offset], connector.type: [kafka], 
> connector.properties.bootstrap.servers: [gyula-1.gce.cloudera.com:9092], 
> connector.properties.group.id: [testGroup], format.type: [json]) at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:545)
>  at 
> org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.sqlUpdate(StreamTableEnvironmentImpl.java:331)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$applyUpdate$17(LocalExecutor.java:690)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:240)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.applyUpdate(LocalExecutor.java:688)
>  ... 9 moreCaused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to rename 
> table default.ItemTransactions at 
> org.apache.flink.table.catalog.hive.HiveCatalog.alterTable(HiveCatalog.java:433)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:537)
>  ... 13 moreCaused by: MetaException(message:A managed table's location needs 
> to be under the hive warehouse root 
> directory,table:ItemTransactions,location:/warehouse/tablespace/external/hive/itemtransactions,Hive
>  warehouse:/warehouse/tablespace/managed/hive) at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_req_result$alter_table_req_resultStandardScheme.read(ThriftHiveMetastore.java)
> {code}
>  





[jira] [Commented] (FLINK-17657) jdbc not support read BIGINT UNSIGNED field

2020-05-21 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112897#comment-17112897
 ] 

Leonard Xu commented on FLINK-17657:


[~zhanglun] Thanks for the report. It's a bug in the type mapping between MySQL 
and Flink SQL types in the JDBC connector.

The root cause is that the BIGINT UNSIGNED range is 0 to 18446744073709551615, 
while the BIGINT range is -9223372036854775808 to 9223372036854775807, so we 
cannot map a BIGINT UNSIGNED to BIGINT (Flink's BIGINT has the same range as 
MySQL's BIGINT).

[~JinxinTang]'s workaround [1] is to read the value as a String rather than as 
the original BigInteger (JDBC treats BIGINT UNSIGNED as java.math.BigInteger).

[1][https://github.com/TJX2014/flink/blob/master-flink17657-jdbc-bigint/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/jdbc/JdbcReader.java#L49]
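The range mismatch can be shown with a minimal, self-contained JDK snippet (an illustration only, independent of the JDBC connector): the unsigned maximum exceeds Long.MAX_VALUE, which is why the BigInteger returned by the driver cannot be narrowed to a Java long (Flink BIGINT), while reading it as a String always works.

```java
import java.math.BigInteger;

public class UnsignedBigintDemo {
    // MySQL BIGINT UNSIGNED maximum value: 2^64 - 1.
    static final BigInteger UNSIGNED_MAX = new BigInteger("18446744073709551615");

    public static void main(String[] args) {
        // Java long (Flink BIGINT) tops out at 2^63 - 1, so the value cannot fit.
        System.out.println(UNSIGNED_MAX.compareTo(BigInteger.valueOf(Long.MAX_VALUE)) > 0); // true
        // The workaround: carry the value as a String instead of casting to Long.
        System.out.println(UNSIGNED_MAX.toString()); // 18446744073709551615
    }
}
```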

> jdbc not support read BIGINT UNSIGNED field
> ---
>
> Key: FLINK-17657
> URL: https://issues.apache.org/jira/browse/FLINK-17657
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Client
>Affects Versions: 1.10.0, 1.10.1
>Reporter: lun zhang
>Priority: Major
> Attachments: env.yaml, excetion.txt
>
>
> I use the SQL client to read a MySQL table, but I found I can't read a table 
> containing a `BIGINT UNSIGNED` field. It fails with:
>  Caused by: java.lang.ClassCastException: java.math.BigInteger cannot be cast 
> to java.lang.Long
>  
> MySQL table:
>  
> create table tb
> (
>   id BIGINT UNSIGNED auto_increment primary key,
>   cooper BIGINT(19) null,
>   user_sex VARCHAR(2) null
> );
>  
> My env yaml is env.yaml.





[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 906be78b0943a61b70d4624b95bad5479c9f3d92 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1896)
 
   * 65c86e560750386063796783054ed5ea7ea42881 UNKNOWN
   * 96ecf9da6d438f616bc9b39b69b0f1c7319b1c34 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12275: [FLINK-16021][table-common] DescriptorProperties.putTableSchema does …

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12275:
URL: https://github.com/apache/flink/pull/12275#issuecomment-631915106


   
   ## CI report:
   
   * bc6de4913b48c82a220eda620fb58667e7e93642 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1983)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] GJL opened a new pull request #12276: [FLINK-13553][tests][qs] Improve Logging to debug Test Instability

2020-05-21 Thread GitBox


GJL opened a new pull request #12276:
URL: https://github.com/apache/flink/pull/12276


   ## What is the purpose of the change
   
   *This improves logging in AbstractServerHandler to debug a test instability 
(timeouts in `KvStateServerHandlerTest`; see FLINK-13553)*
   
   
   ## Brief change log
   
 - *See commits*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   







[GitHub] [flink] flinkbot commented on pull request #12276: [FLINK-13553][tests][qs] Improve Logging to debug Test Instability

2020-05-21 Thread GitBox


flinkbot commented on pull request #12276:
URL: https://github.com/apache/flink/pull/12276#issuecomment-631947196


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a95142c9bbb6c90acad13e7829321cc03f1545eb (Thu May 21 
08:03:57 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Created] (FLINK-17860) Recursively remove channel state directories

2020-05-21 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-17860:
-

 Summary: Recursively remove channel state directories
 Key: FLINK-17860
 URL: https://issues.apache.org/jira/browse/FLINK-17860
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Affects Versions: 1.11.0
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.11.0


With a high degree of parallelism, we end up with n*n files in each checkpoint 
(not mitigated by state.backend.fs.memory-threshold: 1048576 and 
state.backend.fs.write-buffer-size: 4194304, because each state is tens to 
hundreds of MB).

Writing them is fast (from many subtasks); removing them is slow (from the JM).

Instead of going through them one by one, we could remove the directory 
recursively.

The easiest way is to just not call discard for the channelStateHandles and 
then switch the flag to isRecursive=true (if unalignedCheckpoints == true?).

This can be extended to other state handles in the future as well.
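The difference can be sketched with plain java.nio (an illustration only; the 
directory layout and file names below are invented, and Flink's own FileSystem 
API exposes an equivalent recursive delete): one recursive pass over the 
checkpoint directory replaces n*n individual per-file discard calls from the JM.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RecursiveRemoveDemo {
    // Remove a directory tree bottom-up in a single pass, instead of
    // issuing one delete per channel-state file.
    public static void deleteRecursively(Path dir) throws IOException {
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()) // files before their parent dirs
                .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        Path chk = Files.createTempDirectory("chk-42-"); // stand-in for a checkpoint dir
        Files.createFile(chk.resolve("channel-state-0"));
        Files.createFile(chk.resolve("channel-state-1"));
        deleteRecursively(chk);
        System.out.println(Files.exists(chk)); // false
    }
}
```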





[GitHub] [flink] flinkbot edited a comment on pull request #12273: [FLINK-17850][jdbc] Fix PostgresCatalogITCase.testGroupByInsert() fails on CI

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12273:
URL: https://github.com/apache/flink/pull/12273#issuecomment-631848905


   
   ## CI report:
   
   * 9b1c59eda81fd5bd092a6d437d603c88b2450008 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1977)
 
   * e0e2827194c166e64d0e66494c0cc84cca70db3e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 906be78b0943a61b70d4624b95bad5479c9f3d92 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1896)
 
   * 65c86e560750386063796783054ed5ea7ea42881 UNKNOWN
   * 96ecf9da6d438f616bc9b39b69b0f1c7319b1c34 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1984)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12276: [FLINK-13553][tests][qs] Improve Logging to debug Test Instability

2020-05-21 Thread GitBox


flinkbot commented on pull request #12276:
URL: https://github.com/apache/flink/pull/12276#issuecomment-631952514


   
   ## CI report:
   
   * 627cef7e0f6cc46ad9c329f013fd1b4a3f956b70 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-17861) Channel state handles, when inlined, duplicate underlying data

2020-05-21 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-17861:
-

 Summary: Channel state handles, when inlined, duplicate underlying 
data
 Key: FLINK-17861
 URL: https://issues.apache.org/jira/browse/FLINK-17861
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing, Runtime / Task
Affects Versions: 1.11.0
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.11.0


When a subtask snapshots its state, it creates one channelStateHandle per 
inputChannel/resultSubpartition. All handles of a single subtask share the 
underlying streamStateHandle. This is an optimisation to prevent having too 
many files.

But if the streamStateHandle is inlined (size < 
state.backend.fs.memory-threshold), then most of the bytes in the underlying 
streamStateHandle are duplicated.
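A toy illustration of the reported duplication (the types below are invented 
for illustration and are not Flink code): when each per-channel handle inlines 
its own copy of the shared underlying bytes, the total stored size multiplies, 
whereas (offset, length) references into one shared copy would not.

```java
import java.util.Arrays;

public class InlinedHandleDemo {
    // Inlined handle: owns a private copy of the bytes
    // (models the below-memory-threshold case).
    static final class InlineHandle {
        final byte[] data;
        InlineHandle(byte[] data) { this.data = data; }
    }

    public static void main(String[] args) {
        byte[] underlying = new byte[]{1, 2, 3, 4, 5, 6};
        // Two channels of one subtask, each inlining the full shared stream state:
        InlineHandle channel0 = new InlineHandle(Arrays.copyOf(underlying, underlying.length));
        InlineHandle channel1 = new InlineHandle(Arrays.copyOf(underlying, underlying.length));
        int stored = channel0.data.length + channel1.data.length;
        System.out.println(stored + " bytes stored for " + underlying.length + " bytes of state");
    }
}
```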





[GitHub] [flink] JingsongLi merged pull request #12274: [FLINK-17474][parquet][hive] Parquet reader should be case insensitive for hive

2020-05-21 Thread GitBox


JingsongLi merged pull request #12274:
URL: https://github.com/apache/flink/pull/12274


   







[GitHub] [flink] JingsongLi merged pull request #12241: [FLINK-17474][parquet][hive] Parquet reader should be case insensitive for hive

2020-05-21 Thread GitBox


JingsongLi merged pull request #12241:
URL: https://github.com/apache/flink/pull/12241


   







[jira] [Closed] (FLINK-17474) Test and correct case insensitive for parquet and orc in hive

2020-05-21 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-17474.

Resolution: Implemented

master: 4d1a2f5da980f79b62dacf836e32ad1487a78eb0

release-1.11: d00fa4c67a20cfcff01101d1bb76026478329921

> Test and correct case insensitive for parquet and orc in hive
> -
>
> Key: FLINK-17474
> URL: https://issues.apache.org/jira/browse/FLINK-17474
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Orc and Parquet should treat field names as case insensitive to be compatible 
> with Hive.
> This applies to both the Hive mapred reader and the vectorized reader.





[jira] [Commented] (FLINK-17086) Flink sql client not able to read parquet hive table because `HiveMapredSplitReader` not supports name mapping reading for parquet format.

2020-05-21 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112925#comment-17112925
 ] 

Jingsong Lee commented on FLINK-17086:
--

Hi [~leiwangouc], the related issues have been fixed; you can retry with Flink 
1.11. Closing this.

> Flink sql client not able to read parquet hive table because  
> `HiveMapredSplitReader` not supports name mapping reading for parquet format.
> ---
>
> Key: FLINK-17086
> URL: https://issues.apache.org/jira/browse/FLINK-17086
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Lei Wang
>Priority: Major
>
> When writing a Hive table with the Parquet format, the Flink SQL client is 
> not able to read it correctly because HiveMapredSplitReader does not support 
> name-mapping reading for the Parquet format.
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/fink-sql-client-not-able-to-read-parquet-format-table-td34119.html]





[jira] [Closed] (FLINK-17086) Flink sql client not able to read parquet hive table because `HiveMapredSplitReader` not supports name mapping reading for parquet format.

2020-05-21 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-17086.

Resolution: Fixed

> Flink sql client not able to read parquet hive table because  
> `HiveMapredSplitReader` not supports name mapping reading for parquet format.
> ---
>
> Key: FLINK-17086
> URL: https://issues.apache.org/jira/browse/FLINK-17086
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Lei Wang
>Priority: Major
>
> When writing a Hive table with the Parquet format, the Flink SQL client is 
> not able to read it correctly because HiveMapredSplitReader does not support 
> name-mapping reading for the Parquet format.
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/fink-sql-client-not-able-to-read-parquet-format-table-td34119.html]





[jira] [Updated] (FLINK-17860) Recursively remove channel state directories

2020-05-21 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan updated FLINK-17860:
--
Description: 
With a high degree of parallelism, we end up with n*n files in each checkpoint 
(not mitigated by state.backend.fs.memory-threshold: 1048576 and 
state.backend.fs.write-buffer-size: 4194304, because each state is tens to 
hundreds of MB).

Writing them is fast (from many subtasks); removing them is slow (from the JM).

 

Instead of going through them 1 by 1, we could remove the directory recursively.

 

The easiest way is to remove channelStateHandle.discard() calls and use 
isRecursive=true  in 
FsCompletedCheckpointStorageLocation.disposeStorageLocation.

Note: with the current isRecursive=false there will be an exception if there 
are any files left under that folder.

 

This can be extended to other state handles in future as well.

  was:
With a high degree of parallelism, we end up with n*n files in each checkpoint 
(not mitigated by state.backend.fs.memory-threshold: 1048576 and 
state.backend.fs.write-buffer-size: 4194304, because each state is tens to 
hundreds of MB).

Writing them is fast (from many subtasks); removing them is slow (from the JM).

 

Instead of going through them 1 by 1, we could remove the directory recursively.

 

The easiest way is to just not call discard for channelStateHandles and then 
switch flag isRecursive=true (if unalignedCheckpoints == true?).

 

This can be extended to other state handles in future as well.


> Recursively remove channel state directories
> 
>
> Key: FLINK-17860
> URL: https://issues.apache.org/jira/browse/FLINK-17860
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Critical
> Fix For: 1.11.0
>
>
> With a high degree of parallelism, we end up with n*n files in each 
> checkpoint (not mitigated by state.backend.fs.memory-threshold: 1048576 and 
> state.backend.fs.write-buffer-size: 4194304, because each state is tens to 
> hundreds of MB).
> Writing them is fast (from many subtasks); removing them is slow (from the JM).
>  
> Instead of going through them 1 by 1, we could remove the directory 
> recursively.
>  
> The easiest way is to remove channelStateHandle.discard() calls and use 
> isRecursive=true  in 
> FsCompletedCheckpointStorageLocation.disposeStorageLocation.
> Note: with the current isRecursive=false there will be an exception if there 
> are any files left under that folder.
>  
> This can be extended to other state handles in future as well.





[jira] [Updated] (FLINK-17860) Recursively remove channel state directories

2020-05-21 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan updated FLINK-17860:
--
Description: 
With a high degree of parallelism, we end up with n*n files in each 
checkpoint. Writing them is fast (from many subtasks); removing them is slow 
(from the JM).

This can't be mitigated by state.backend.fs.memory-threshold because most 
states are tens to hundreds of MB.

 

Instead of going through them 1 by 1, we could remove the directory recursively.

 

The easiest way is to remove channelStateHandle.discard() calls and use 
isRecursive=true  in 
FsCompletedCheckpointStorageLocation.disposeStorageLocation.

Note: with the current isRecursive=false there will be an exception if there 
are any files left under that folder.

 

This can be extended to other state handles in future as well.

  was:
With a high degree of parallelism, we end up with n*n files in each checkpoint 
(not mitigated by state.backend.fs.memory-threshold: 1048576 and 
state.backend.fs.write-buffer-size: 4194304, because each state is tens to 
hundreds of MB).

Writing them is fast (from many subtasks); removing them is slow (from the JM).

 

Instead of going through them 1 by 1, we could remove the directory recursively.

 

The easiest way is to remove channelStateHandle.discard() calls and use 
isRecursive=true  in 
FsCompletedCheckpointStorageLocation.disposeStorageLocation.

Note: with the current isRecursive=false there will be an exception if there 
are any files left under that folder.

 

This can be extended to other state handles in future as well.


> Recursively remove channel state directories
> 
>
> Key: FLINK-17860
> URL: https://issues.apache.org/jira/browse/FLINK-17860
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Critical
> Fix For: 1.11.0
>
>
> With a high degree of parallelism, we end up with n*n files in each 
> checkpoint. Writing them is fast (from many subtasks); removing them is slow 
> (from the JM).
> This can't be mitigated by state.backend.fs.memory-threshold because most 
> states are tens to hundreds of MB.
>  
> Instead of going through them 1 by 1, we could remove the directory 
> recursively.
>  
> The easiest way is to remove channelStateHandle.discard() calls and use 
> isRecursive=true  in 
> FsCompletedCheckpointStorageLocation.disposeStorageLocation.
> Note: with the current isRecursive=false there will be an exception if there 
> are any files left under that folder.
>  
> This can be extended to other state handles in future as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12230:
URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457


   
   ## CI report:
   
   * 7896b3f7088a1b0b6b585818af4bcb556f95 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1980)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-17657) jdbc not support read BIGINT UNSIGNED field

2020-05-21 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17657:
---

Assignee: Leonard Xu

> jdbc not support read BIGINT UNSIGNED field
> ---
>
> Key: FLINK-17657
> URL: https://issues.apache.org/jira/browse/FLINK-17657
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Client
>Affects Versions: 1.10.0, 1.10.1
>Reporter: lun zhang
>Assignee: Leonard Xu
>Priority: Major
> Attachments: env.yaml, excetion.txt
>
>
> I use the SQL client to read a MySQL table, but I found I can't read a table 
> containing a `BIGINT UNSIGNED` field. It fails with:
>  Caused by: java.lang.ClassCastException: java.math.BigInteger cannot be cast 
> to java.lang.Long
>  
> MySQL table:
>  
> create table tb
>  (
>  id BIGINT UNSIGNED auto_increment 
>  primary key,
>  cooper BIGINT(19) null ,
> user_sex VARCHAR(2) null 
> );
>  
> my env yaml is env.yaml .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
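For background on the exception: MySQL's BIGINT UNSIGNED covers 0 to 2^64-1, which does not fit into a signed 64-bit long, so JDBC drivers surface such columns as `java.math.BigInteger`, and casting that to `java.lang.Long` is what fails. A small Python sketch of the range problem (the `Decimal` fallback mirrors mapping the column to a wider SQL type such as DECIMAL(20, 0); the helper name is hypothetical):

```python
from decimal import Decimal

LONG_MAX = 2**63 - 1             # java.lang.Long.MAX_VALUE
BIGINT_UNSIGNED_MAX = 2**64 - 1  # MySQL BIGINT UNSIGNED maximum

def read_bigint_unsigned(value: int):
    # Values above LONG_MAX cannot be represented as a signed 64-bit
    # long; a blind cast to Long is the ClassCastException in the report.
    # Falling back to an arbitrary-precision type keeps the value intact.
    if value > LONG_MAX:
        return Decimal(value)
    return value
```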


[GitHub] [flink] flinkbot edited a comment on pull request #12276: [FLINK-13553][tests][qs] Improve Logging to debug Test Instability

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12276:
URL: https://github.com/apache/flink/pull/12276#issuecomment-631952514


   
   ## CI report:
   
   * 627cef7e0f6cc46ad9c329f013fd1b4a3f956b70 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1986)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12273: [FLINK-17850][jdbc] Fix PostgresCatalogITCase.testGroupByInsert() fails on CI

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12273:
URL: https://github.com/apache/flink/pull/12273#issuecomment-631848905


   
   ## CI report:
   
   * 9b1c59eda81fd5bd092a6d437d603c88b2450008 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1977)
 
   * e0e2827194c166e64d0e66494c0cc84cca70db3e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1985)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17657) jdbc not support read BIGINT UNSIGNED field

2020-05-21 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-17657:

Fix Version/s: 1.11.0

> jdbc not support read BIGINT UNSIGNED field
> ---
>
> Key: FLINK-17657
> URL: https://issues.apache.org/jira/browse/FLINK-17657
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Client
>Affects Versions: 1.10.0, 1.10.1
>Reporter: lun zhang
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: env.yaml, excetion.txt
>
>
> I use the SQL client to read a MySQL table, but I found I can't read a table 
> containing a `BIGINT UNSIGNED` field. It fails with:
>  Caused by: java.lang.ClassCastException: java.math.BigInteger cannot be cast 
> to java.lang.Long
>  
> MySQL table:
>  
> create table tb
>  (
>  id BIGINT UNSIGNED auto_increment 
>  primary key,
>  cooper BIGINT(19) null ,
> user_sex VARCHAR(2) null 
> );
>  
> my env yaml is env.yaml .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-14369) KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on Travis

2020-05-21 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved FLINK-14369.
--
Resolution: Fixed

I believe this issue is related to FLINK-14370 & FLINK-14235. The problem is 
the following:

Prior to the fix for FLINK-14235, `testOneToOneAtLeastOnceRegularSink()` could 
fail because the job was expected to fail but didn't.

When that happens, due to the problem fixed in FLINK-14370, the traffic to the 
Kafka brokers remains blocked by the proxy. This causes the next test, in this 
case `testOneToOneAtLeastOnceCustomOperator`, to fail in a cascading way.

Given those two issues have been resolved. I am closing this ticket.

 

> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator
>  fails on Travis
> --
>
> Key: FLINK-14369
> URL: https://issues.apache.org/jira/browse/FLINK-14369
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: shuiqiangchen
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0, 1.10.2
>
>
> The 
> {{KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator}}
>  fails on Travis with 
> {code}
> Test 
> testOneToOneAtLeastOnceCustomOperator(org.apache.flink.streaming.connectors.kafka.KafkaProducerAtLeastOnceITCase)
>  failed with:
> java.lang.AssertionError: Create test topic : oneToOneTopicCustomOperator 
> failed, org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
> for a node assignment.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:180)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:115)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:196)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:231)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator(KafkaProducerTestBase.java:214)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProvi

[jira] [Updated] (FLINK-16605) Add max limitation to the total number of slots

2020-05-21 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-16605:
---
Issue Type: New Feature  (was: Improvement)

> Add max limitation to the total number of slots
> ---
>
> Key: FLINK-16605
> URL: https://issues.apache.org/jira/browse/FLINK-16605
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As discussed in FLINK-15527 and FLINK-15959, we propose to add the max limit 
> to the total number of slots.
> To be specific:
> - Introduce "cluster.number-of-slots.max" configuration option with default 
> value MAX_INT
> - Make the SlotManager respect the max number of slots, when exceeded, it 
> would not allocate resource anymore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
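The slot-limit behaviour described above amounts to a simple guard before each allocation (a hypothetical Python sketch; the real logic lives in the Java SlotManager):

```python
def can_allocate_slots(registered: int, pending: int,
                       requested: int, max_slots: int) -> bool:
    # Count both registered and pending slots so that in-flight requests
    # cannot push the cluster past the configured maximum
    # ("cluster.number-of-slots.max", default MAX_INT).
    return registered + pending + requested <= max_slots
```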


[jira] [Commented] (FLINK-16605) Add max limitation to the total number of slots

2020-05-21 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112968#comment-17112968
 ] 

Yangze Guo commented on FLINK-16605:


[~gjy] Should we add a release note for it?

> Add max limitation to the total number of slots
> ---
>
> Key: FLINK-16605
> URL: https://issues.apache.org/jira/browse/FLINK-16605
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As discussed in FLINK-15527 and FLINK-15959, we propose to add the max limit 
> to the total number of slots.
> To be specific:
> - Introduce "cluster.number-of-slots.max" configuration option with default 
> value MAX_INT
> - Make the SlotManager respect the max number of slots, when exceeded, it 
> would not allocate resource anymore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] danny0405 commented on a change in pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


danny0405 commented on a change in pull request #12260:
URL: https://github.com/apache/flink/pull/12260#discussion_r428524670



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sources/TableSourceUtil.scala
##
@@ -238,6 +250,14 @@ object TableSourceUtil {
 }
   }
 
+  private def containsTimeAttribute(tableSchema: TableSchema): Boolean = {
+tableSchema.getWatermarkSpecs.nonEmpty || 
tableSchema.getTableColumns.exists(isProctime)
+  }
+
+  private def isProctime(column: TableColumn): Boolean = {
+toScala(column.getExpr).exists(_.equalsIgnoreCase("proctime()"))
+  }

Review comment:
   Actually `proctime()` should also work. I'm thinking we could use 
the `SqlParser` to parse the expr first; if we get a `SqlUnresolvedFunction` 
with no operands whose name matches `proctime`, then we can go ahead.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17862) The internal class StreamingRuntimeContext is leaked in AbstractUdfStreamOperator

2020-05-21 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-17862:
--

 Summary: The internal class StreamingRuntimeContext is leaked in 
AbstractUdfStreamOperator
 Key: FLINK-17862
 URL: https://issues.apache.org/jira/browse/FLINK-17862
 Project: Flink
  Issue Type: Bug
Reporter: Yangze Guo


The {{StreamingRuntimeContext}} is annotated with {{Internal}}. However, users 
can get it via {{AbstractUdfStreamOperator}}. As a result, many internal 
classes, e.g. {{TaskManagerRuntimeInfo}}, are leaked through it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17862) The internal class StreamingRuntimeContext is leaked in AbstractUdfStreamOperator

2020-05-21 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-17862:
---
Component/s: API / DataStream

> The internal class StreamingRuntimeContext is leaked in 
> AbstractUdfStreamOperator
> -
>
> Key: FLINK-17862
> URL: https://issues.apache.org/jira/browse/FLINK-17862
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Reporter: Yangze Guo
>Priority: Major
>
> The {{StreamingRuntimeContext}} is annotated with {{Internal}}. However, users 
> can get it via {{AbstractUdfStreamOperator}}. As a result, many internal 
> classes, e.g. {{TaskManagerRuntimeInfo}}, are leaked through it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 65c86e560750386063796783054ed5ea7ea42881 UNKNOWN
   * 96ecf9da6d438f616bc9b39b69b0f1c7319b1c34 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1984)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


JingsongLi commented on a change in pull request #12260:
URL: https://github.com/apache/flink/pull/12260#discussion_r428534265



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sources/TableSourceUtil.scala
##
@@ -238,6 +250,14 @@ object TableSourceUtil {
 }
   }
 
+  private def containsTimeAttribute(tableSchema: TableSchema): Boolean = {
+tableSchema.getWatermarkSpecs.nonEmpty || 
tableSchema.getTableColumns.exists(isProctime)
+  }
+
+  private def isProctime(column: TableColumn): Boolean = {
+toScala(column.getExpr).exists(_.equalsIgnoreCase("proctime()"))
+  }

Review comment:
   It is hard to get the `SqlParser` here too. Obtaining the parser also 
creates a circular dependency that is hard to break.
   I will remove the `" "` (whitespace) before checking.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
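The whitespace-insensitive check discussed in this thread can be sketched as follows (a hypothetical Python illustration of the matching logic; the actual implementation is the Scala `isProctime` in the diff above):

```python
def is_proctime_expr(expr: str) -> bool:
    # Strip all whitespace and compare case-insensitively, so that
    # "PROCTIME()", "proctime ()" etc. all match without invoking a
    # full SQL parser (which would create a circular dependency).
    return "".join(expr.split()).lower() == "proctime()"
```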




[GitHub] [flink] wangyang0918 commented on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


wangyang0918 commented on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-631977115


   @zhengcanbin Thanks for the update. LGTM now.
   
   One more thing, we need to revert commit 
5846e5458e20308db4f0dd1c87d511a60817472c to verify FLINK-17416 is fixed (i.e. 
Flink on K8s works on Java 8u252). For FLINK-17566, I have verified that it is 
fixed by this PR.
   
   nit: Maybe we could use the fabric8 kubernetes-client version in the commit 
message, not the fabric8 version.
   
   
   Moreover, could you rebase the latest master to enable the e2e test?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hequn8128 commented on a change in pull request #12246: [FLINK-17303][python] Return TableResult for Python TableEnvironment

2020-05-21 Thread GitBox


hequn8128 commented on a change in pull request #12246:
URL: https://github.com/apache/flink/pull/12246#discussion_r428525745



##
File path: flink-python/pyflink/table/tests/test_sql.py
##
@@ -58,6 +56,67 @@ def test_sql_query(self):
 expected = ['2,Hi,Hello', '3,Hello,Hello']
 self.assert_equals(actual, expected)
 
+def test_execute_sql(self):
+t_env = self.t_env
+table_result = t_env.execute_sql("create table tbl"
+ "("
+ "   a bigint,"
+ "   b int,"
+ "   c varchar"
+ ") with ("
+ "  'connector' = 'COLLECTION',"
+ "   'is-bounded' = 'false'"
+ ")")
+self.assertIsNotNone(table_result)

Review comment:
   Remove this line. If table_result is None, the following tests would 
failed.

##
File path: flink-python/pyflink/table/tests/test_sql.py
##
@@ -58,6 +56,67 @@ def test_sql_query(self):
 expected = ['2,Hi,Hello', '3,Hello,Hello']
 self.assert_equals(actual, expected)
 
+def test_execute_sql(self):
+t_env = self.t_env
+table_result = t_env.execute_sql("create table tbl"
+ "("
+ "   a bigint,"
+ "   b int,"
+ "   c varchar"
+ ") with ("
+ "  'connector' = 'COLLECTION',"
+ "   'is-bounded' = 'false'"
+ ")")
+self.assertIsNotNone(table_result)
+self.assertIsNone(table_result.get_job_client())
+self.assertIsNotNone(table_result.get_table_schema())
+self.assert_equals(table_result.get_table_schema().get_field_names(), 
["result"])
+self.assertIsNotNone(table_result.get_result_kind())
+self.assertEqual(table_result.get_result_kind(), ResultKind.SUCCESS)
+table_result.print()
+
+table_result = t_env.execute_sql("alter table tbl set ('k1' = 'a', 
'k2' = 'b')")
+self.assertIsNotNone(table_result)
+self.assertIsNone(table_result.get_job_client())
+self.assertIsNotNone(table_result.get_table_schema())
+self.assert_equals(table_result.get_table_schema().get_field_names(), 
["result"])
+self.assertIsNotNone(table_result.get_result_kind())
+self.assertEqual(table_result.get_result_kind(), ResultKind.SUCCESS)
+table_result.print()
+
+field_names = ["k1", "k2", "c"]
+field_types = [DataTypes.BIGINT(), DataTypes.INT(), DataTypes.STRING()]
+t_env.register_table_sink(
+"sinks",
+source_sink_utils.TestAppendSink(field_names, field_types))
+table_result = t_env.execute_sql("insert into sinks select * from tbl")
+self.assertIsNotNone(table_result)
+self.assertIsNotNone(table_result.get_job_client())
+job_status_feature = table_result.get_job_client().get_job_status()
+job_execution_result_feature = 
table_result.get_job_client().get_job_execution_result(
+get_gateway().jvm.Thread.currentThread().getContextClassLoader())
+job_execution_result = job_execution_result_feature.result()
+self.assertIsNotNone(job_execution_result)
+self.assertIsNotNone(job_execution_result.get_job_id())
+self.assertIsNotNone(job_execution_result.get_job_execution_result())
+job_status = job_status_feature.result()
+self.assertIsNotNone(job_status)
+self.assertIsNotNone(table_result.get_table_schema())
+self.assert_equals(table_result.get_table_schema().get_field_names(),
+   ["default_catalog.default_database.sinks"])
+self.assertIsNotNone(table_result.get_result_kind())
+self.assertEqual(table_result.get_result_kind(), 
ResultKind.SUCCESS_WITH_CONTENT)
+table_result.print()
+
+table_result = t_env.execute_sql("drop table tbl")
+self.assertIsNotNone(table_result)
+self.assertIsNone(table_result.get_job_client())
+self.assertIsNotNone(table_result.get_table_schema())

Review comment:
   Remove this line

##
File path: flink-python/pyflink/table/tests/test_sql.py
##
@@ -58,6 +56,67 @@ def test_sql_query(self):
 expected = ['2,Hi,Hello', '3,Hello,Hello']
 self.assert_equals(actual, expected)
 
+def test_execute_sql(self):
+t_env = self.t_env
+table_result = t_env.execute_sql("create table tbl"
+ "("
+ "   a bigint,"
+

[GitHub] [flink] zhengcanbin opened a new pull request #12277: [FLINK-17230] Fix incorrect returned address of Endpoint for external Service of ClusterIP type

2020-05-21 Thread GitBox


zhengcanbin opened a new pull request #12277:
URL: https://github.com/apache/flink/pull/12277


   ## What is the purpose of the change
   
   At the moment, when external Service is set to ClusterIP, we return 
`${internal-service-name} + "." + nameSpace` as the address of the Endpoint, 
which could be problematic since we don't create the internal service in 
ZooKeeper based HA setups.  According to 
[FLINK-16602](https://issues.apache.org/jira/browse/FLINK-16602), we always use 
external Service for rest communication so that this PR returns 
`${external-service-name} + "." + nameSpace` as Endpoint's address instead.
   
   ## Brief change log
   
 - Fix incorrect returned address of Endpoint for the external Service of 
ClusterIP type
 - Port KubernetesUtils.getInternalServiceName to InternalServiceDecorator
 - Port KubernetesUtils.getExternalServiceName to ExternalServiceDecorator
 - Fix code style
   
   
   ## Verifying this change
   
   This change hardens `Fabric8FlinkKubeClientTest#testClusterIPService` to add 
a test branch for Endpoint's address.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17230) Fix incorrect returned address of Endpoint for the ClusterIP Service

2020-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17230:
---
Labels: pull-request-available  (was: )

> Fix incorrect returned address of Endpoint for the ClusterIP Service
> 
>
> Key: FLINK-17230
> URL: https://issues.apache.org/jira/browse/FLINK-17230
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> At the moment, when the type of the external Service is set to {{ClusterIP}}, 
> we return an incorrect address 
> {{KubernetesUtils.getInternalServiceName(clusterId) + "." + nameSpace}} for 
> the Endpoint.
> This ticket aims to fix this bug by returning 
> {{KubernetesUtils.getRestServiceName(clusterId) + "." + nameSpace}} instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
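The corrected address construction can be sketched as follows (a hypothetical Python illustration; the `-rest` suffix for the external Service name is an assumption about Flink's naming convention, not taken from this thread):

```python
def rest_endpoint_address(cluster_id: str, namespace: str) -> str:
    # Always derive the Endpoint address from the *external* (rest)
    # Service, since the internal Service is not created in
    # ZooKeeper-based HA setups.
    external_service_name = f"{cluster_id}-rest"  # assumed naming scheme
    return f"{external_service_name}.{namespace}"
```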


[jira] [Resolved] (FLINK-17447) Flink CEPOperator StateException

2020-05-21 Thread chun111111 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chun111111 resolved FLINK-17447.

Resolution: Not A Bug

> Flink CEPOperator StateException
> 
>
> Key: FLINK-17447
> URL: https://issues.apache.org/jira/browse/FLINK-17447
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: chun11
>Priority: Critical
> Attachments: image-2020-04-29-10-21-10-717.png, 
> image-2020-04-29-10-22-20-645.png, image-2020-04-29-10-23-14-306.png, 
> image-2020-04-29-11-55-12-995.png, image-2020-04-30-11-37-20-411.png
>
>
> !image-2020-04-29-11-55-12-995.png!!image-2020-04-29-10-21-10-717.png!
>  
> !image-2020-04-29-10-23-14-306.png!
>  
> !image-2020-04-29-10-22-20-645.png!
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12277: [FLINK-17230] Fix incorrect returned address of Endpoint for external Service of ClusterIP type

2020-05-21 Thread GitBox


flinkbot commented on pull request #12277:
URL: https://github.com/apache/flink/pull/12277#issuecomment-631978940


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 689aafe09dd533f85d746915b404d126e051dd6b (Thu May 21 
09:15:32 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default

2020-05-21 Thread GitBox


wangyang0918 edited a comment on pull request #12240:
URL: https://github.com/apache/flink/pull/12240#issuecomment-630701881


   cc @tillrohrmann Following what we discussed in FLINK-15792, this PR makes 
logs accessible via `kubectl logs` by default. Please have a look at your 
convenience.







[GitHub] [flink] zhengcanbin commented on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


zhengcanbin commented on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-631983535


   > @zhengcanbin Thanks for the updating. LGTM now.
   > 
   > One more thing, we need to revert this commit 
[5846e54](https://github.com/apache/flink/commit/5846e5458e20308db4f0dd1c87d511a60817472c)
 to verify [FLINK-17416](https://issues.apache.org/jira/browse/FLINK-17416) is 
fixed (i.e. Flink on K8s could work on Java 8u252). For 
[FLINK-17566](https://issues.apache.org/jira/browse/FLINK-17566), I have 
verified it could be fixed by this PR.
   > 
   > nit: Maybe we could use fabric8 kubernetes-client version in the commit 
message, not fabric8 version.
   > 
   > Moreover, could you rebase the latest master to enable the e2e test?
   
   Thanks for the reminder. Done!







[jira] [Created] (FLINK-17863) flink streaming sql read hive with lots small files need to control parallelism

2020-05-21 Thread richt richt (Jira)
richt richt created FLINK-17863:
---

 Summary: flink streaming sql  read hive with lots small files  
need to control  parallelism
 Key: FLINK-17863
 URL: https://issues.apache.org/jira/browse/FLINK-17863
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Affects Versions: 1.10.1
Reporter: richt richt


the table wy.cartest has 19 rows stored in 19 files.

So when I query the table in *streaming* mode it will require 19 slots, and my 
cluster cannot allocate that much resource to the task.

Caused by: org.apache.flink.runtime.JobException: Vertex Source: 
HiveTableSource(carid, time, num, var) TablePath: wy.cartest, Par
titionPruned: false, PartitionNums: null -> SinkConversionToTuple2's 
parallelism (19) is higher than the max parallelism (2). Plea
se lower the parallelism or increase the max parallelism.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17864) Update document about removing #registerTableSource

2020-05-21 Thread Danny Chen (Jira)
Danny Chen created FLINK-17864:
--

 Summary: Update document about removing #registerTableSource
 Key: FLINK-17864
 URL: https://issues.apache.org/jira/browse/FLINK-17864
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 1.11.0
Reporter: Danny Chen
 Fix For: 1.11.0








[jira] [Commented] (FLINK-17863) flink streaming sql read hive with lots small files need to control parallelism

2020-05-21 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113012#comment-17113012
 ] 

Jingsong Lee commented on FLINK-17863:
--

Hi [~yantianyu], there is a "table.exec.hive.infer-source-parallelism.max" 
option to control the maximum inferred source parallelism.
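
As a sketch of applying the option above (the option name comes from the comment; the value 2 matches the max parallelism in the reported error, and the exact way to set it — SQL client `SET`, `TableConfig`, or `flink-conf.yaml` — depends on the deployment):

```sql
-- Flink SQL client (illustrative): cap the parallelism inferred
-- from the number of Hive source files.
SET table.exec.hive.infer-source-parallelism.max=2;
```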

> flink streaming sql  read hive with lots small files  need to control  
> parallelism
> --
>
> Key: FLINK-17863
> URL: https://issues.apache.org/jira/browse/FLINK-17863
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.1
>Reporter: richt richt
>Priority: Major
>
> the table wy.cartest  has 19 rows with 19 files  
> so when i query the table use *streaming* mode it will require 19 slots , my 
> cluster cannot allocate so much resource to the task.
> 
> Caused by: org.apache.flink.runtime.JobException: Vertex Source: 
> HiveTableSource(carid, time, num, var) TablePath: wy.cartest, Par
> titionPruned: false, PartitionNums: null -> SinkConversionToTuple2's 
> parallelism (19) is higher than the max parallelism (2). Plea
> se lower the parallelism or increase the max parallelism.





[GitHub] [flink] carp84 closed pull request #12096: [FLINK-16074][docs-zh] Translate the Overview page for State & Fault Tolerance into Chinese

2020-05-21 Thread GitBox


carp84 closed pull request #12096:
URL: https://github.com/apache/flink/pull/12096


   







[jira] [Closed] (FLINK-16074) Translate the "Overview" page for "State & Fault Tolerance" into Chinese

2020-05-21 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li closed FLINK-16074.
-
Fix Version/s: (was: 1.12.0)
   1.11.0
   Resolution: Done

Merged into:
master via 25c151e7a911adf0787e311cde01d8c66ab2ae1a
release-1.11 via 570df3a172f5a1c12d6647546b00136f3516636e

> Translate the "Overview" page for "State & Fault Tolerance" into Chinese
> 
>
> Key: FLINK-16074
> URL: https://issues.apache.org/jira/browse/FLINK-16074
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Complete the translation in `docs/dev/stream/state/index.zh.md`





[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 65c86e560750386063796783054ed5ea7ea42881 UNKNOWN
   * 96ecf9da6d438f616bc9b39b69b0f1c7319b1c34 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1984)
 
   * fec492efa997636aa6d39d4de6dabf059a89b18b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12240:
URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048


   
   ## CI report:
   
   * fc462938ff28feca6fd689f6e51e1fca79efe975 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1901)
 
   * a6bec5ceb4785380806029bac88033ed794047a6 UNKNOWN
   
   







[GitHub] [flink] flinkbot commented on pull request #12277: [FLINK-17230] Fix incorrect returned address of Endpoint for external Service of ClusterIP type

2020-05-21 Thread GitBox


flinkbot commented on pull request #12277:
URL: https://github.com/apache/flink/pull/12277#issuecomment-631994506


   
   ## CI report:
   
   * 689aafe09dd533f85d746915b404d126e051dd6b UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * 87d0b478bf38fc74639f8ac2c065e4e6d2fc2156 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1927)
 
   * ed086581e9bdc7cc60830294a3ddd1cf8d9e0bbe UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12246: [FLINK-17303][python] Return TableResult for Python TableEnvironment

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12246:
URL: https://github.com/apache/flink/pull/12246#issuecomment-630803193


   
   ## CI report:
   
   * 911e459fe53b61aa74ce3bc3d0761651eb7f61fb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1893)
 
   * be75b66a69ad7616f5277cfb636e7b135b75 UNKNOWN
   
   







[GitHub] [flink] carp84 commented on pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification

2020-05-21 Thread GitBox


carp84 commented on pull request #8693:
URL: https://github.com/apache/flink/pull/8693#issuecomment-631995913


   Already merged into master via:
   * f0ed29c06d331892a06ee9bddea4173d6300835d
   * fae6a6cad3f4fe30d80cc9dd664b2efc72c0be36
   * 1f6b1e6b8ffb38f196c2a1c6348cb96935e9a9cc
   
   release-1.11 via:
   * fa731288518c8ebf66f40b4e0e9b1929546b6257
   * fcacc42e17f00cb47c5c16fe75af035f784ae1fa
   * 6591263c8871898f602d122aac336f2ad63bbb1c







[GitHub] [flink] carp84 edited a comment on pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification

2020-05-21 Thread GitBox


carp84 edited a comment on pull request #8693:
URL: https://github.com/apache/flink/pull/8693#issuecomment-631995913


   Already merged into master via:
   * f0ed29c06d331892a06ee9bddea4173d6300835d
   * fae6a6cad3f4fe30d80cc9dd664b2efc72c0be36
   * 1f6b1e6b8ffb38f196c2a1c6348cb96935e9a9cc
   
   release-1.11 via:
   * fa731288518c8ebf66f40b4e0e9b1929546b6257
   * fcacc42e17f00cb47c5c16fe75af035f784ae1fa
   * 6591263c8871898f602d122aac336f2ad63bbb1c
   
   (Thanks @pnowojski for merging this)







[GitHub] [flink] carp84 closed pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification

2020-05-21 Thread GitBox


carp84 closed pull request #8693:
URL: https://github.com/apache/flink/pull/8693


   







[jira] [Updated] (FLINK-8871) Checkpoint cancellation is not propagated to stop checkpointing threads on the task manager

2020-05-21 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-8871:
-
Issue Type: Improvement  (was: Bug)

Merged into master via:
* f0ed29c06d331892a06ee9bddea4173d6300835d
* fae6a6cad3f4fe30d80cc9dd664b2efc72c0be36
* 1f6b1e6b8ffb38f196c2a1c6348cb96935e9a9cc

release-1.11 via:
* fa731288518c8ebf66f40b4e0e9b1929546b6257
* fcacc42e17f00cb47c5c16fe75af035f784ae1fa
* 6591263c8871898f602d122aac336f2ad63bbb1c

Thanks [~pnowojski] for merging this.

> Checkpoint cancellation is not propagated to stop checkpointing threads on 
> the task manager
> ---
>
> Key: FLINK-8871
> URL: https://issues.apache.org/jira/browse/FLINK-8871
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.3.2, 1.4.1, 1.5.0, 1.6.0, 1.7.0
>Reporter: Stefan Richter
>Assignee: Yun Tang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink currently lacks any form of feedback mechanism from the job manager / 
> checkpoint coordinator to the tasks when it comes to failing a checkpoint. 
> This means that running snapshots on the tasks are also not stopped even if 
> their owning checkpoint is already cancelled. Two examples for cases where 
> this applies are checkpoint timeouts and local checkpoint failures on a task 
> together with a configuration that does not fail tasks on checkpoint failure. 
> Notice that those running snapshots do no longer account for the maximum 
> number of parallel checkpoints, because their owning checkpoint is considered 
> as cancelled.
> Not stopping the task's snapshot thread can lead to a problematic situation 
> where the next checkpoints already started, while the abandoned checkpoint 
> thread from a previous checkpoint is still lingering around running. This 
> scenario can potentially cascade: many parallel checkpoints will slow down 
> checkpointing and make timeouts even more likely.
>  
> A possible solution is introducing a {{cancelCheckpoint}} method  as 
> counterpart to the {{triggerCheckpoint}} method in the task manager gateway, 
> which is invoked by the checkpoint coordinator as part of cancelling the 
> checkpoint.
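
The proposed {{cancelCheckpoint}} counterpart can be illustrated with a small, self-contained sketch (this mirrors the idea only — the gateway interface and class names below are hypothetical, not Flink's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the proposed feedback mechanism: the coordinator tracks the
// checkpoints it has triggered on a task manager gateway, and the proposed
// cancelCheckpoint call (counterpart to triggerCheckpoint) lets it stop the
// task-side snapshot work when a checkpoint times out or fails.
public class CheckpointCancellationSketch {

    /** Minimal stand-in for the task manager gateway described in the issue. */
    interface TaskManagerGateway {
        void triggerCheckpoint(long checkpointId);
        void cancelCheckpoint(long checkpointId);   // the proposed counterpart
    }

    /** A gateway that records which snapshots are still running on the task. */
    static class RecordingGateway implements TaskManagerGateway {
        final Map<Long, Boolean> runningSnapshots = new HashMap<>();

        public void triggerCheckpoint(long checkpointId) {
            runningSnapshots.put(checkpointId, true);   // snapshot thread started
        }

        public void cancelCheckpoint(long checkpointId) {
            runningSnapshots.remove(checkpointId);      // snapshot thread stopped
        }
    }

    /** Simulates a checkpoint that times out and is therefore cancelled. */
    static int pendingAfterTimeout() {
        RecordingGateway gateway = new RecordingGateway();
        gateway.triggerCheckpoint(1L);
        gateway.triggerCheckpoint(2L);
        // Checkpoint 1 times out: without cancelCheckpoint its snapshot thread
        // would linger; with it, the task-side work is stopped eagerly.
        gateway.cancelCheckpoint(1L);
        return gateway.runningSnapshots.size();
    }

    public static void main(String[] args) {
        System.out.println("running snapshots after cancel: " + pendingAfterTimeout());
    }
}
```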





[jira] [Updated] (FLINK-8871) Checkpoint cancellation is not propagated to stop checkpointing threads on the task manager

2020-05-21 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang updated FLINK-8871:

Release Note: This improvement help to abort checkpoint more eagerly, and 
also introduce interface #notifyCheckpointAborted(long checkpointId) to 
CheckpointListener.
  Issue Type: Bug  (was: Improvement)

> Checkpoint cancellation is not propagated to stop checkpointing threads on 
> the task manager
> ---
>
> Key: FLINK-8871
> URL: https://issues.apache.org/jira/browse/FLINK-8871
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.3.2, 1.4.1, 1.5.0, 1.6.0, 1.7.0
>Reporter: Stefan Richter
>Assignee: Yun Tang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink currently lacks any form of feedback mechanism from the job manager / 
> checkpoint coordinator to the tasks when it comes to failing a checkpoint. 
> This means that running snapshots on the tasks are also not stopped even if 
> their owning checkpoint is already cancelled. Two examples for cases where 
> this applies are checkpoint timeouts and local checkpoint failures on a task 
> together with a configuration that does not fail tasks on checkpoint failure. 
> Notice that those running snapshots no longer account for the maximum 
> number of parallel checkpoints, because their owning checkpoint is considered 
> as cancelled.
> Not stopping the task's snapshot thread can lead to a problematic situation 
> where the next checkpoints already started, while the abandoned checkpoint 
> thread from a previous checkpoint is still lingering around running. This 
> scenario can potentially cascade: many parallel checkpoints will slow down 
> checkpointing and make timeouts even more likely.
>  
> A possible solution is introducing a {{cancelCheckpoint}} method  as 
> counterpart to the {{triggerCheckpoint}} method in the task manager gateway, 
> which is invoked by the checkpoint coordinator as part of cancelling the 
> checkpoint.





[jira] [Updated] (FLINK-8871) Checkpoint cancellation is not propagated to stop checkpointing threads on the task manager

2020-05-21 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-8871:
-
Issue Type: Improvement  (was: Bug)

> Checkpoint cancellation is not propagated to stop checkpointing threads on 
> the task manager
> ---
>
> Key: FLINK-8871
> URL: https://issues.apache.org/jira/browse/FLINK-8871
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.3.2, 1.4.1, 1.5.0, 1.6.0, 1.7.0
>Reporter: Stefan Richter
>Assignee: Yun Tang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink currently lacks any form of feedback mechanism from the job manager / 
> checkpoint coordinator to the tasks when it comes to failing a checkpoint. 
> This means that running snapshots on the tasks are also not stopped even if 
> their owning checkpoint is already cancelled. Two examples for cases where 
> this applies are checkpoint timeouts and local checkpoint failures on a task 
> together with a configuration that does not fail tasks on checkpoint failure. 
> Notice that those running snapshots no longer account for the maximum 
> number of parallel checkpoints, because their owning checkpoint is considered 
> as cancelled.
> Not stopping the task's snapshot thread can lead to a problematic situation 
> where the next checkpoints already started, while the abandoned checkpoint 
> thread from a previous checkpoint is still lingering around running. This 
> scenario can potentially cascade: many parallel checkpoints will slow down 
> checkpointing and make timeouts even more likely.
>  
> A possible solution is introducing a {{cancelCheckpoint}} method  as 
> counterpart to the {{triggerCheckpoint}} method in the task manager gateway, 
> which is invoked by the checkpoint coordinator as part of cancelling the 
> checkpoint.





[jira] [Closed] (FLINK-8871) Checkpoint cancellation is not propagated to stop checkpointing threads on the task manager

2020-05-21 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li closed FLINK-8871.

Release Note: FLINK-8871 helps to abort checkpoints more eagerly, and adds 
a new interface #notifyCheckpointAborted(long checkpointId) in 
CheckpointListener.  (was: This improvement help to abort checkpoint more 
eagerly, and also introduce interface #notifyCheckpointAborted(long 
checkpointId) to CheckpointListener.)
  Resolution: Implemented
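
A minimal, self-contained sketch of how a user function could react to the new callback (the {{CheckpointListener}} interface below is a local mirror so the example compiles standalone; the buffering sink is hypothetical):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a user function reacting to the notifyCheckpointAborted callback
// added by FLINK-8871: side effects staged per checkpoint are committed on
// completion and discarded on abort.
public class AbortAwareSinkSketch {

    /** Local mirror of the shape of Flink's CheckpointListener. */
    interface CheckpointListener {
        void notifyCheckpointComplete(long checkpointId) throws Exception;
        default void notifyCheckpointAborted(long checkpointId) throws Exception {}
    }

    /** A sink that stages work per checkpoint and discards it on abort. */
    static class BufferingSink implements CheckpointListener {
        final Set<Long> pendingCheckpoints = new HashSet<>();

        void snapshotState(long checkpointId) {
            pendingCheckpoints.add(checkpointId);    // side effects staged, not committed
        }

        public void notifyCheckpointComplete(long checkpointId) {
            pendingCheckpoints.remove(checkpointId); // commit staged side effects
        }

        public void notifyCheckpointAborted(long checkpointId) {
            pendingCheckpoints.remove(checkpointId); // discard staged side effects
        }
    }

    static int demo() {
        BufferingSink sink = new BufferingSink();
        sink.snapshotState(7L);
        sink.notifyCheckpointAborted(7L);  // checkpoint aborted instead of completed
        return sink.pendingCheckpoints.size();
    }

    public static void main(String[] args) {
        System.out.println("pending after abort: " + demo());
    }
}
```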

> Checkpoint cancellation is not propagated to stop checkpointing threads on 
> the task manager
> ---
>
> Key: FLINK-8871
> URL: https://issues.apache.org/jira/browse/FLINK-8871
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.3.2, 1.4.1, 1.5.0, 1.6.0, 1.7.0
>Reporter: Stefan Richter
>Assignee: Yun Tang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink currently lacks any form of feedback mechanism from the job manager / 
> checkpoint coordinator to the tasks when it comes to failing a checkpoint. 
> This means that running snapshots on the tasks are also not stopped even if 
> their owning checkpoint is already cancelled. Two examples for cases where 
> this applies are checkpoint timeouts and local checkpoint failures on a task 
> together with a configuration that does not fail tasks on checkpoint failure. 
> Notice that those running snapshots no longer account for the maximum 
> number of parallel checkpoints, because their owning checkpoint is considered 
> as cancelled.
> Not stopping the task's snapshot thread can lead to a problematic situation 
> where the next checkpoints already started, while the abandoned checkpoint 
> thread from a previous checkpoint is still lingering around running. This 
> scenario can potentially cascade: many parallel checkpoints will slow down 
> checkpointing and make timeouts even more likely.
>  
> A possible solution is introducing a {{cancelCheckpoint}} method  as 
> counterpart to the {{triggerCheckpoint}} method in the task manager gateway, 
> which is invoked by the checkpoint coordinator as part of cancelling the 
> checkpoint.





[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 65c86e560750386063796783054ed5ea7ea42881 UNKNOWN
   * 96ecf9da6d438f616bc9b39b69b0f1c7319b1c34 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1984)
 
   * fec492efa997636aa6d39d4de6dabf059a89b18b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1991)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * 87d0b478bf38fc74639f8ac2c065e4e6d2fc2156 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1927)
 
   * ed086581e9bdc7cc60830294a3ddd1cf8d9e0bbe Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1994)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12240:
URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048


   
   ## CI report:
   
   * fc462938ff28feca6fd689f6e51e1fca79efe975 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1901)
 
   * a6bec5ceb4785380806029bac88033ed794047a6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1992)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12246: [FLINK-17303][python] Return TableResult for Python TableEnvironment

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12246:
URL: https://github.com/apache/flink/pull/12246#issuecomment-630803193


   
   ## CI report:
   
   * 911e459fe53b61aa74ce3bc3d0761651eb7f61fb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1893)
 
   * be75b66a69ad7616f5277cfb636e7b135b75 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1993)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12277: [FLINK-17230] Fix incorrect returned address of Endpoint for external Service of ClusterIP type

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12277:
URL: https://github.com/apache/flink/pull/12277#issuecomment-631994506


   
   ## CI report:
   
   * 689aafe09dd533f85d746915b404d126e051dd6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1995)
 
   
   







[jira] [Closed] (FLINK-17864) Update document about removing #registerTableSource

2020-05-21 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-17864.
--
Resolution: Duplicate

> Update document about removing #registerTableSource
> ---
>
> Key: FLINK-17864
> URL: https://issues.apache.org/jira/browse/FLINK-17864
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: Danny Chen
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Closed] (FLINK-14492) Modify state backend to report reserved memory to MemoryManager

2020-05-21 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li closed FLINK-14492.
-
Resolution: Duplicate

This work is already done through FLINK-15084

> Modify state backend to report reserved memory to MemoryManager
> ---
>
> Key: FLINK-14492
> URL: https://issues.apache.org/jira/browse/FLINK-14492
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends, Runtime / Task
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> {{MemoryManager}} will add a new {{reserveMemory}} interface through 
> FLINK-13984, and we should discuss to see whether state backend follows this 
> way to implement the initial version of memory control. Status of discussion 
> in ML and conclusion around this topic will be tracked by this JIRA.
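
The {{reserveMemory}} idea can be illustrated with a toy accounting sketch (names and behavior are illustrative, not Flink's actual {{MemoryManager}}):

```java
// Toy accounting sketch of the reserveMemory idea: a memory manager hands out
// reservations against a fixed budget instead of allocating pages, so a state
// backend can account for memory it manages itself (e.g. native memory).
public class ReserveMemorySketch {

    static class MemoryManagerSketch {
        private final long totalBytes;
        private long reservedBytes;

        MemoryManagerSketch(long totalBytes) { this.totalBytes = totalBytes; }

        /** Returns true if the reservation fits into the remaining budget. */
        synchronized boolean reserveMemory(long bytes) {
            if (reservedBytes + bytes > totalBytes) {
                return false;                  // backend must fall back or spill
            }
            reservedBytes += bytes;
            return true;
        }

        synchronized void releaseMemory(long bytes) {
            reservedBytes = Math.max(0, reservedBytes - bytes);
        }

        synchronized long available() { return totalBytes - reservedBytes; }
    }

    static long demo() {
        MemoryManagerSketch mm = new MemoryManagerSketch(1024);
        mm.reserveMemory(512);                 // e.g. a block-cache share
        mm.reserveMemory(768);                 // rejected: would exceed budget
        mm.releaseMemory(256);
        return mm.available();
    }

    public static void main(String[] args) {
        System.out.println("available bytes: " + demo());
    }
}
```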



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14489) Check and make sure state backend fits into TaskExecutor memory configuration

2020-05-21 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li closed FLINK-14489.
-
Resolution: Later

We had some discussion (in this [google 
doc|https://docs.google.com/document/d/12HL7qd3fF3nLOipOqXGkYk8cA-pIPYtDCuRiQEKMsX8/edit?usp=sharing])
 but since the current implementation of Heap backend has no memory accounting 
logic, we actually only need to take care of the {{RocksDBStateBackend}}. We 
will get back and review this again when there are more types of backends that 
require a more general design.

> Check and make sure state backend fits into TaskExecutor memory configuration
> -
>
> Key: FLINK-14489
> URL: https://issues.apache.org/jira/browse/FLINK-14489
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends, Runtime / Task
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> [FLIP-49|https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors]
>  and [FLINK-13980|https://issues.apache.org/jira/browse/FLINK-13984] has 
> proposed a unified memory configuration for {{TaskExecutor}}, we should find 
> a proper way to suit state backend memory management into it. This sub-task 
> will track the discussion and solution on this topic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * ed086581e9bdc7cc60830294a3ddd1cf8d9e0bbe Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1994)
 
   
   







[jira] [Closed] (FLINK-14883) Resource management on state backends

2020-05-21 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li closed FLINK-14883.
-
Resolution: Later

Now RocksDBStateBackend is already included in resource management 
([documents 
here|https://ci.apache.org/projects/flink/flink-docs-master/ops/state/state_backends.html#memory-management])
 and we will look back at this later when more backends are introduced.

> Resource management on state backends
> -
>
> Key: FLINK-14883
> URL: https://issues.apache.org/jira/browse/FLINK-14883
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / State Backends
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>  Labels: resource-management
>
> This is the umbrella issue for resource management on state backends, 
> especially the memory management, as mentioned in 
> [FLIP-49|https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors]





[GitHub] [flink] flinkbot edited a comment on pull request #12246: [FLINK-17303][python] Return TableResult for Python TableEnvironment

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12246:
URL: https://github.com/apache/flink/pull/12246#issuecomment-630803193


   
   ## CI report:
   
   * be75b66a69ad7616f5277cfb636e7b135b75 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1993)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] fpompermaier commented on pull request #12273: [FLINK-17850][jdbc] Fix PostgresCatalogITCase.testGroupByInsert() fails on CI

2020-05-21 Thread GitBox


fpompermaier commented on pull request #12273:
URL: https://github.com/apache/flink/pull/12273#issuecomment-632022838


   LGTM on my side. Then what should I do with the other PR that was reverted?







[jira] [Commented] (FLINK-17849) YARNHighAvailabilityITCase hangs in Azure Pipelines CI

2020-05-21 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113079#comment-17113079
 ] 

Yang Wang commented on FLINK-17849:
---

When I checked the jobmanager.log of the failed test 
{{org.apache.flink.yarn.YARNHighAvailabilityITCase#testJobRecoversAfterKillingTaskManager}},
 I found that the jobmanager failed over because of a ZooKeeper client session 
timeout. The timeout is configured to 1000ms. Maybe something was wrong with the 
network at that time. This unexpected failover makes 
{{restClusterClient.getJobDetails}} fail with a timeout exception.

 
{code:java}
193 2020-05-20 16:29:30,141 WARN  
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn [] - Client 
session timed out, have not heard from server in 4001ms for sessionid 
0x17232eac285
194 2020-05-20 16:29:30,141 INFO  
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn [] - Client 
session timed out, have not heard from server in 4001ms for sessionid 
0x17232eac285, closing socket connection and attempting reconnect
{code}

> YARNHighAvailabilityITCase hangs in Azure Pipelines CI
> --
>
> Key: FLINK-17849
> URL: https://issues.apache.org/jira/browse/FLINK-17849
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
> Fix For: 1.11.0
>
>
> The test seems to hang for 15 minutes, then gets killed.
> Full logs: 
> https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/25/logs/121





[jira] [Updated] (FLINK-17849) YARNHighAvailabilityITCase hangs in Azure Pipelines CI

2020-05-21 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-17849:
--
Attachment: jobmanager.log

> YARNHighAvailabilityITCase hangs in Azure Pipelines CI
> --
>
> Key: FLINK-17849
> URL: https://issues.apache.org/jira/browse/FLINK-17849
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
> Fix For: 1.11.0
>
> Attachments: jobmanager.log
>
>
> The test seems to hang for 15 minutes, then gets killed.
> Full logs: 
> https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/25/logs/121





[GitHub] [flink] klion26 commented on pull request #12096: [FLINK-16074][docs-zh] Translate the Overview page for State & Fault Tolerance into Chinese

2020-05-21 Thread GitBox


klion26 commented on pull request #12096:
URL: https://github.com/apache/flink/pull/12096#issuecomment-632025908


   @carp84 thanks for the review and merging.







[jira] [Assigned] (FLINK-17322) Enable latency tracker would corrupt the broadcast state

2020-05-21 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise reassigned FLINK-17322:
---

Assignee: Arvid Heise

> Enable latency tracker would corrupt the broadcast state
> 
>
> Key: FLINK-17322
> URL: https://issues.apache.org/jira/browse/FLINK-17322
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Yun Tang
>Assignee: Arvid Heise
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: 
> Telematics2-feature-flink-1.10-latency-tracking-broken.zip
>
>
> This bug is reported from user mail list:
>  
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Latency-tracking-together-with-broadcast-state-can-cause-job-failure-td34013.html]
> Execute {{BroadcastStateIT#broadcastStateWorksWithLatencyTracking}} would 
> easily reproduce this problem.
> From current information, the broadcast element would be corrupt once we 
> enable {{env.getConfig().setLatencyTrackingInterval(2000)}}.
>  The exception stack trace would be: (based on current master branch)
> {code:java}
> Caused by: java.io.IOException: Corrupt stream, found tag: 84
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:217)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:46)
>  ~[classes/:?]
>   at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>  ~[classes/:?]
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:157)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:123)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:181)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:332)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:206)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:196)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:505)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:485)
>  ~[classes/:?]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:720) 
> ~[classes/:?]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:544) 
> ~[classes/:?]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_144]
> {code}





[jira] [Updated] (FLINK-17322) Enable latency tracker would corrupt the broadcast state

2020-05-21 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise updated FLINK-17322:

Affects Version/s: (was: 1.9.2)
   (was: 1.10.0)
   1.9.3
   1.10.1

> Enable latency tracker would corrupt the broadcast state
> 
>
> Key: FLINK-17322
> URL: https://issues.apache.org/jira/browse/FLINK-17322
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.9.3, 1.10.1
>Reporter: Yun Tang
>Assignee: Arvid Heise
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: 
> Telematics2-feature-flink-1.10-latency-tracking-broken.zip
>
>
> This bug is reported from user mail list:
>  
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Latency-tracking-together-with-broadcast-state-can-cause-job-failure-td34013.html]
> Execute {{BroadcastStateIT#broadcastStateWorksWithLatencyTracking}} would 
> easily reproduce this problem.
> From current information, the broadcast element would be corrupt once we 
> enable {{env.getConfig().setLatencyTrackingInterval(2000)}}.
>  The exception stack trace would be: (based on current master branch)
> {code:java}
> Caused by: java.io.IOException: Corrupt stream, found tag: 84
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:217)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:46)
>  ~[classes/:?]
>   at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>  ~[classes/:?]
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:157)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:123)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:181)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:332)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:206)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:196)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:505)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:485)
>  ~[classes/:?]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:720) 
> ~[classes/:?]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:544) 
> ~[classes/:?]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_144]
> {code}





[GitHub] [flink] klion26 closed pull request #7955: [FLINK-11797][checkpoint][test]Implement empty test functions in CheckpointCoordinatorMasterHooksTest

2020-05-21 Thread GitBox


klion26 closed pull request #7955:
URL: https://github.com/apache/flink/pull/7955


   







[jira] [Commented] (FLINK-17322) Enable latency tracker would corrupt the broadcast state

2020-05-21 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113083#comment-17113083
 ] 

Arvid Heise commented on FLINK-17322:
-

There is a fundamental issue with latency markers and legacy sources that also 
applies to the current master.

 

Latency markers are emitted over the task thread, while all other records come over 
the legacy source thread. Since the buffer builder is not thread-safe, we should 
not interact with it over the task thread at all.

However, I also don't see a way to reliably emit them over the legacy thread. The 
only way would be to fall back to record hand-over from the legacy thread to the 
task thread and then do everything in the task thread, but that would degrade 
performance tremendously.

 

So until we get FLIP-27, it may be that latency markers do not work with legacy sources.
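The failure mode above can be illustrated with a toy length-prefixed record stream (a hypothetical model, not Flink's actual `StreamElementSerializer`): once a second writer interleaves bytes into a non-thread-safe buffer mid-record, the reader desynchronizes, which is how a bogus tag such as `Corrupt stream, found tag: 84` can appear.

```python
import struct

def frame(payload: bytes) -> bytes:
    """Length-prefixed record, loosely mimicking a record serializer."""
    return struct.pack(">I", len(payload)) + payload

def read_records(buf: bytes):
    """Reader that trusts the length prefixes -- like a deserializer would."""
    records, pos = [], 0
    while pos < len(buf):
        (length,) = struct.unpack_from(">I", buf, pos)
        pos += 4
        records.append(buf[pos:pos + length])
        pos += length
    return records

rec = frame(b"broadcast-element")
marker = frame(b"latency-marker")

# Writers that take turns on the shared buffer *between* records are fine:
ok = rec + marker
assert read_records(ok) == [b"broadcast-element", b"latency-marker"]

# But if the marker is written from another thread *mid-record*, the reader
# consumes marker bytes as record payload and the stream desynchronizes:
corrupt = rec[:8] + marker + rec[8:]
parsed = read_records(corrupt)
assert parsed[0] != b"broadcast-element"  # first record is now garbage
```

This is only a model of the interleaving, not of Flink's buffer builder, but it shows why a single non-thread-safe write path cannot safely accept records from two threads.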

> Enable latency tracker would corrupt the broadcast state
> 
>
> Key: FLINK-17322
> URL: https://issues.apache.org/jira/browse/FLINK-17322
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Yun Tang
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: 
> Telematics2-feature-flink-1.10-latency-tracking-broken.zip
>
>
> This bug is reported from user mail list:
>  
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Latency-tracking-together-with-broadcast-state-can-cause-job-failure-td34013.html]
> Execute {{BroadcastStateIT#broadcastStateWorksWithLatencyTracking}} would 
> easily reproduce this problem.
> From current information, the broadcast element would be corrupt once we 
> enable {{env.getConfig().setLatencyTrackingInterval(2000)}}.
>  The exception stack trace would be: (based on current master branch)
> {code:java}
> Caused by: java.io.IOException: Corrupt stream, found tag: 84
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:217)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:46)
>  ~[classes/:?]
>   at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>  ~[classes/:?]
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:157)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:123)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:181)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:332)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:206)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:196)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:505)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:485)
>  ~[classes/:?]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:720) 
> ~[classes/:?]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:544) 
> ~[classes/:?]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_144]
> {code}





[jira] [Commented] (FLINK-17849) YARNHighAvailabilityITCase hangs in Azure Pipelines CI

2020-05-21 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113092#comment-17113092
 ] 

Yang Wang commented on FLINK-17849:
---

The reason why the {{YARNHighAvailabilityITCase}} hangs is that the application 
started by {{testJobRecoversAfterKillingTaskManager}} does not finish, so the 
next test, {{testKillYarnSessionClusterEntrypoint}}, does not have enough resources 
to start the Flink cluster. We can verify this with the following log line from the 
Flink client.
{code:java}
28160 16:30:59,764 [main] INFO  org.apache.flink.yarn.YarnClusterDescriptor 
 [] - Deployment took more than 60 seconds. Please check if the 
requested resources are available in the YARN cluster
{code}
To make the test more stable, I suggest increasing 
{{AkkaOptions.ASK_TIMEOUT}} to 30s, just like what we have done in 
{{YARNITCase}}.
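For reference, {{AkkaOptions.ASK_TIMEOUT}} is backed by the `akka.ask.timeout` configuration key, so the suggested change corresponds to the following entry (an illustration of the suggestion above, not a verified patch):

```yaml
# flink-conf.yaml (sketch): give RPC asks more headroom on slow CI machines,
# matching the timeout already used in YARNITCase.
akka.ask.timeout: 30 s
```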

> YARNHighAvailabilityITCase hangs in Azure Pipelines CI
> --
>
> Key: FLINK-17849
> URL: https://issues.apache.org/jira/browse/FLINK-17849
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
> Fix For: 1.11.0
>
> Attachments: jobmanager.log
>
>
> The test seems to hang for 15 minutes, then gets killed.
> Full logs: 
> https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/25/logs/121





[jira] [Commented] (FLINK-16402) Alter table fails on Hive catalog

2020-05-21 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113093#comment-17113093
 ] 

Rui Li commented on FLINK-16402:


Hi [~morhidi], thanks for the clarifications. I'm still not sure why the issue 
happened. When {{HiveCatalog}} creates a table, it doesn't set the table 
location, which means the location should be set on HMS side. Then when 
{{HiveCatalog}} alters properties of a table, it doesn't touch the location 
either. So everything should be fine as long as HMS sets a correct location in 
the first place. It does remind me of [an 
issue|https://issues.apache.org/jira/browse/HIVE-22158?focusedCommentId=16971481&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16971481]
 I hit myself. Not sure whether it's related but it might be helpful to take a 
look.

Besides, please note that the highest Hive version supported by the Hive 
connector is 3.1.2, and we haven't tested beyond that version. So in general, 
if you cherry-pick newer features, it's likely to lead to problems.

> Alter table fails on Hive catalog
> -
>
> Key: FLINK-16402
> URL: https://issues.apache.org/jira/browse/FLINK-16402
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Gyula Fora
>Priority: Major
>
> Hive version: 3.1.0
> I get the following error when trying to execute a simple alter table 
> statement:
>  
> {code:java}
> ALTER TABLE ItemTransactions 
> SET (
>  'connector.properties.bootstrap.servers' = 'gyula-1.gce.cloudera.com:9092'
> );
> {code}
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Could not execute ALTER 
> TABLE hive.default.ItemTransactions SET 
> (connector.properties.zookeeper.connect: [dummy], connector.version: 
> [universal], format.schema: [ROW(transactionId LONG, ts LONG, itemId STRING, 
> quantity INT)], connector.topic: [transaction.log.1], is_generic: [true], 
> connector.startup-mode: [earliest-offset], connector.type: [kafka], 
> connector.properties.bootstrap.servers: [gyula-1.gce.cloudera.com:9092], 
> connector.properties.group.id: [testGroup], format.type: [json])Caused by: 
> org.apache.flink.table.api.TableException: Could not execute ALTER TABLE 
> hive.default.ItemTransactions SET (connector.properties.zookeeper.connect: 
> [dummy], connector.version: [universal], format.schema: [ROW(transactionId 
> LONG, ts LONG, itemId STRING, quantity INT)], connector.topic: 
> [transaction.log.1], is_generic: [true], connector.startup-mode: 
> [earliest-offset], connector.type: [kafka], 
> connector.properties.bootstrap.servers: [gyula-1.gce.cloudera.com:9092], 
> connector.properties.group.id: [testGroup], format.type: [json]) at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:545)
>  at 
> org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.sqlUpdate(StreamTableEnvironmentImpl.java:331)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$applyUpdate$17(LocalExecutor.java:690)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:240)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.applyUpdate(LocalExecutor.java:688)
>  ... 9 moreCaused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to rename 
> table default.ItemTransactions at 
> org.apache.flink.table.catalog.hive.HiveCatalog.alterTable(HiveCatalog.java:433)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:537)
>  ... 13 moreCaused by: MetaException(message:A managed table's location needs 
> to be under the hive warehouse root 
> directory,table:ItemTransactions,location:/warehouse/tablespace/external/hive/itemtransactions,Hive
>  warehouse:/warehouse/tablespace/managed/hive) at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_req_result$alter_table_req_resultStandardScheme.read(ThriftHiveMetastore.java)
> {code}
>  





[jira] [Commented] (FLINK-17351) CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts

2020-05-21 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113095#comment-17113095
 ] 

Congxian Qiu(klion26) commented on FLINK-17351:
---

Attaching another issue that proposes increasing the failure count for more 
checkpoint failure reasons.

> CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts
> --
>
> Key: FLINK-17351
> URL: https://issues.apache.org/jira/browse/FLINK-17351
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Piotr Nowojski
>Assignee: Yuan Mei
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> As described in point 2: 
> https://issues.apache.org/jira/browse/FLINK-17327?focusedCommentId=17090576&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17090576
> (copy of description from above linked comment):
> The logic in how {{CheckpointCoordinator}} handles checkpoint timeouts is 
> broken. In your [~qinjunjerry] examples, your job should have failed after 
> first checkpoint failure, but checkpoints were time outing on 
> CheckpointCoordinator after 5 seconds, before {{FlinkKafkaProducer}} was 
> detecting Kafka failure after 2 minutes. Those timeouts were not checked 
> against {{setTolerableCheckpointFailureNumber(...)}} limit, so the job was 
> keep going with many timed out checkpoints. Now funny thing happens: 
> FlinkKafkaProducer detects Kafka failure. Funny thing is that it depends 
> where the failure was detected:
> a) on processing record? no problem, job will failover immediately once 
> failure is detected (in this example after 2 minutes)
> b) on checkpoint? heh, the failure is reported to {{CheckpointCoordinator}} 
> *and gets ignored, as PendingCheckpoint has already been discarded 2 minutes 
> ago* :) So theoretically the checkpoints can keep failing forever and the job 
> will not restart automatically, unless something else fails.
> Even more funny things can happen if we mix FLINK-17350 . or b) with 
> intermittent external system failure. Sink reports an exception, transaction 
> was lost/aborted, Sink is in failed state, but if there will be a happy 
> coincidence that it manages to accept further records, this exception can be 
> lost and all of the records in those failed checkpoints will be lost forever 
> as well. In all of the examples that [~qinjunjerry] posted it hasn't 
> happened. {{FlinkKafkaProducer}} was not able to recover after the initial 
> failure and it was keep throwing exceptions until the job finally failed (but 
> much later then it should have). And that's not guaranteed anywhere.





[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * ed086581e9bdc7cc60830294a3ddd1cf8d9e0bbe Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1994)
 
   * 3362d0c9984f7c9e20c05315582e982a8da10deb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17750) YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint failed on azure

2020-05-21 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113100#comment-17113100
 ] 

Yang Wang commented on FLINK-17750:
---

I am afraid the "AssertionError: There is at least one application on the" is 
not the root cause. It happened just because the test did not finish properly.

BTW, I think this is the same case as FLINK-17849. See the analysis in the 
comments there.

> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint failed on 
> azure
> ---
>
> Key: FLINK-17750
> URL: https://issues.apache.org/jira/browse/FLINK-17750
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> [https://dev.azure.com/khachatryanroman/810e80cc-0656-4d3c-9d8c-186764456a01/_apis/build/builds/6/logs/156]
>  
> {code:java}
>  2020-05-15T23:42:29.5307581Z [ERROR] 
> testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase)
>   Time elapsed: 21.68 s  <<< ERROR!
>  2020-05-15T23:42:29.5308406Z java.util.concurrent.ExecutionException:
>  2020-05-15T23:42:29.5308864Z 
> org.apache.flink.runtime.rest.util.RestClientException: [Internal server 
> error.,   2020-05-15T23:42:29.5309678Z java.util.concurrent.TimeoutException: 
> Invocation of public abstract java.util.concurrent.CompletableFuture 
> org.apache.flink.runt 
> ime.dispatcher.DispatcherGateway.requestJob(org.apache.flink.api.common.JobID,org.apache.flink.api.common.time.Time)
>  timed out.
>  2020-05-15T23:42:29.5310322Zat com.sun.proxy.$Proxy33.requestJob(Unknown 
> Source)
>  2020-05-15T23:42:29.5311018Zat 
> org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraphInternal(DefaultExecutionGraphCach
>  e.java:103)
>  2020-05-15T23:42:29.5311704Zat 
> org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraph(DefaultExecutionGraphCache.java:7
>  1)
>  2020-05-15T23:42:29.5312355Zat 
> org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.handleRequest(AbstractExecutionGraphHandler.java:75
>  )
>  2020-05-15T23:42:29.5312924Zat 
> org.apache.flink.runtime.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:73)
>  2020-05-15T23:42:29.5313423Zat 
> org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:172)
>  2020-05-15T23:42:29.5314497Zat 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:81)
>  2020-05-15T23:42:29.5315083Zat 
> java.util.Optional.ifPresent(Optional.java:159)
>  2020-05-15T23:42:29.5315474Zat 
> org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:46)
>  2020-05-15T23:42:29.5315979Zat 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:78)
>  2020-05-15T23:42:29.5316520Zat 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:49)
>  2020-05-15T23:42:29.5317092Zat 
> org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:10
>  5)
>  2020-05-15T23:42:29.5317705Zat 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerConte
>  xt.java:374)
>  2020-05-15T23:42:29.5318586Zat 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerConte
>  xt.java:360)
>  2020-05-15T23:42:29.5319249Zat 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext
>  .java:352)
>  2020-05-15T23:42:29.5319729Zat 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:110)
>  2020-05-15T23:42:29.5320136Zat 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:89)
>  2020-05-15T23:42:29.5320742Zat 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:54)
>  2020-05-15T23:42:29.5321195Zat 
> org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:10
>  5)
>  2020-05-15T23:42:29.5321730Zat 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerConte
>  xt.java:374)
>  2020-05-15T23:42:29.5322263Zat 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(Ab

[jira] [Created] (FLINK-17865) Increase default size of `state.backend.fs.memory-threshold`

2020-05-21 Thread Yu Li (Jira)
Yu Li created FLINK-17865:
-

 Summary: Increase default size of 
`state.backend.fs.memory-threshold`
 Key: FLINK-17865
 URL: https://issues.apache.org/jira/browse/FLINK-17865
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / State Backends
Affects Versions: 1.11.0
Reporter: Yu Li
Assignee: Yun Tang
 Fix For: 1.11.0


As discussed in the [ML thread|https://s.apache.org/us0en], we decided to increase 
the default value of `state.backend.fs.memory-threshold` from 1K to *20K* to 
prevent too many small files from being created on the remote FS.

We need to add an explicit release note to notify our users about this change, 
especially those running jobs with large parallelism on sources or stateful 
operators.
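For illustration, the changed default corresponds to the following configuration entry; the `20kb` value syntax is an assumption based on Flink's `MemorySize` format and is not taken from the issue itself:

```yaml
# flink-conf.yaml: state smaller than this threshold is stored inline in the
# checkpoint metadata instead of as a separate file on the remote FS.
state.backend.fs.memory-threshold: 20kb
```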





[GitHub] [flink] wuchong commented on a change in pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


wuchong commented on a change in pull request #12260:
URL: https://github.com/apache/flink/pull/12260#discussion_r428601204



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sources/TableSourceUtil.scala
##
@@ -238,6 +250,14 @@ object TableSourceUtil {
 }
   }
 
+  private def containsTimeAttribute(tableSchema: TableSchema): Boolean = {
+tableSchema.getWatermarkSpecs.nonEmpty || 
tableSchema.getTableColumns.exists(isProctime)
+  }
+
+  private def isProctime(column: TableColumn): Boolean = {
+toScala(column.getExpr).exists(_.equalsIgnoreCase("proctime()"))
+  }

Review comment:
   I think this is error-prone. A string comparison is not a general 
solution; I can come up with some corner cases:
   ```
   `proctime()`  // quoted
   proctime // function identifier
   ```
   
   I think a more general solution is to pass a `SqlExprToRexConverterFactory` 
(which can create a `SqlExprToRexConverter` to get the correct type of an 
expression) into the `CatalogSchemaTable`. I know it's verbose to pass it into 
`CatalogSchemaTable`, but I can't find another reasonable way.
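   To illustrate the corner cases above, here is a minimal Python sketch (not the planner code; `is_proctime_naive` is a hypothetical stand-in for the flagged check):

   ```python
   def is_proctime_naive(expr: str) -> bool:
       # Naive check analogous to the one flagged in the review:
       # compare the column expression text against "proctime()".
       return expr.strip().lower() == "proctime()"

   # The intended case matches...
   assert is_proctime_naive("PROCTIME()")
   # ...but the corner cases from the review slip through:
   assert not is_proctime_naive("`proctime()`")  # quoted
   assert not is_proctime_naive("proctime")      # bare function identifier
   ```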









[jira] [Created] (FLINK-17866) InvalidPathException was thrown when running the test cases of PyFlink on Windows

2020-05-21 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-17866:
-

 Summary: InvalidPathException was thrown when running the test 
cases of PyFlink on Windows
 Key: FLINK-17866
 URL: https://issues.apache.org/jira/browse/FLINK-17866
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Wei Zhong


When running test_dependency.py on Windows, the following exception was thrown:
{code:java}
Error
Traceback (most recent call last):
  File 
"C:\Users\zw144119\AppData\Local\Continuum\miniconda3\envs\py36\lib\unittest\case.py",
 line 59, in testPartExecutor
yield
  File 
"C:\Users\zw144119\AppData\Local\Continuum\miniconda3\envs\py36\lib\unittest\case.py",
 line 605, in run
testMethod()
  File "D:\flink\flink-python\pyflink\table\tests\test_dependency.py", line 55, 
in test_add_python_file
self.t_env.execute("test")
  File "D:\flink\flink-python\pyflink\table\table_environment.py", line 1049, 
in execute
return JobExecutionResult(self._j_tenv.execute(job_name))
  File 
"C:\Users\zw144119\AppData\Local\Continuum\miniconda3\envs\py36\lib\site-packages\py4j\java_gateway.py",
 line 1286, in __call__
answer, self.gateway_client, self.target_id, self.name)
  File "D:\flink\flink-python\pyflink\util\exceptions.py", line 147, in deco
return f(*a, **kw)
  File 
"C:\Users\zw144119\AppData\Local\Continuum\miniconda3\envs\py36\lib\site-packages\py4j\protocol.py",
 line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o4.execute.
: java.nio.file.InvalidPathException: Illegal char <:> at index 2: 
/C:/Users/zw144119/AppData/Local/Temp/tmp0x4273cg/python_file_dir_cfb9e8fe-2812-4a89-ae46-5dc3c844d62c/test_dependency_manage_lib.py
  at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
  at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
  at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
  at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
  at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
  at java.nio.file.Paths.get(Paths.java:84)
  at 
org.apache.flink.core.fs.local.LocalFileSystem.pathToFile(LocalFileSystem.java:314)
  at 
org.apache.flink.core.fs.local.LocalFileSystem.getFileStatus(LocalFileSystem.java:110)
  at 
org.apache.flink.runtime.jobgraph.JobGraphUtils.addUserArtifactEntries(JobGraphUtils.java:52)
  at 
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:186)
  at 
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:109)
  at 
org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:850)
  at 
org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:52)
  at 
org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:43)
  at 
org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:55)
  at 
org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:98)
  at 
org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:79)
  at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1786)
  at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1687)
  at 
org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
  at 
org.apache.flink.table.planner.delegation.ExecutorBase.execute(ExecutorBase.java:52)
  at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:1167)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
  at 
org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
  at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
  at 
org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
  at 
org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
  at 
org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
  at java.lang.Thread.run(Thread.java:748)
{code}
It seems the Windows-style path is not recognized by the "Paths.get()" method.
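A minimal sketch of the normalization such a fix would need, assuming the problem is only the leading slash before the drive letter (`strip_uri_slash` is a hypothetical helper written in Python for illustration, not the actual Flink change, which would live in Java's `LocalFileSystem.pathToFile`):

```python
import re

def strip_uri_slash(path: str) -> str:
    # A URI-style path like "/C:/Users/..." is rejected by java.nio's
    # Windows parser ("Illegal char <:> at index 2"). Dropping the
    # leading slash when a drive letter follows yields a valid
    # Windows path; other paths pass through unchanged.
    match = re.match(r"^/([A-Za-z]:[/\\].*)$", path)
    return match.group(1) if match else path

assert strip_uri_slash("/C:/Temp/test_lib.py") == "C:/Temp/test_lib.py"
assert strip_uri_slash("/home/user/test_lib.py") == "/home/user/test_lib.py"
```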





[jira] [Assigned] (FLINK-17866) InvalidPathException was thrown when running the test cases of PyFlink on Windows

2020-05-21 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-17866:
---

Assignee: Wei Zhong

> InvalidPathException was thrown when running the test cases of PyFlink on 
> Windows
> -
>
> Key: FLINK-17866
> URL: https://issues.apache.org/jira/browse/FLINK-17866
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Wei Zhong
>Assignee: Wei Zhong
>Priority: Major
>

[jira] [Updated] (FLINK-17866) InvalidPathException was thrown when running the test cases of PyFlink on Windows

2020-05-21 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17866:

Affects Version/s: 1.11.0

> InvalidPathException was thrown when running the test cases of PyFlink on 
> Windows
> -
>
> Key: FLINK-17866
> URL: https://issues.apache.org/jira/browse/FLINK-17866
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.11.0
>Reporter: Wei Zhong
>Assignee: Wei Zhong
>Priority: Major
>

[jira] [Updated] (FLINK-17866) InvalidPathException was thrown when running the test cases of PyFlink on Windows

2020-05-21 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17866:

Fix Version/s: 1.11.0

> InvalidPathException was thrown when running the test cases of PyFlink on 
> Windows
> -
>
> Key: FLINK-17866
> URL: https://issues.apache.org/jira/browse/FLINK-17866
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.11.0
>Reporter: Wei Zhong
>Assignee: Wei Zhong
>Priority: Major
> Fix For: 1.11.0
>
>

[jira] [Updated] (FLINK-17771) "PyFlink end-to-end test" fails with "The output result: [] is not as expected: [2, 3, 4]!" on Java11

2020-05-21 Thread Wei Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhong updated FLINK-17771:
--
Attachment: image-2020-05-21-20-11-07-626.png

> "PyFlink end-to-end test" fails with "The output result: [] is not as 
> expected: [2, 3, 4]!" on Java11
> -
>
> Key: FLINK-17771
> URL: https://issues.apache.org/jira/browse/FLINK-17771
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
> Attachments: image-2020-05-21-20-11-07-626.png
>
>
> Java 11 nightly profile: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1579&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6&t=679407b1-ea2c-5965-2c8d-146fff88
> {code}
> Job has been submitted with JobID ef78030becb3bfd6415d3de2e06420b4
> java.lang.AssertionError: The output result: [] is not as expected: [2, 3, 4]!
>   at 
> org.apache.flink.python.tests.FlinkStreamPythonUdfSqlJob.main(FlinkStreamPythonUdfSqlJob.java:55)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:148)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:689)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:227)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:906)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:982)
>   at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:982)
> Stopping taskexecutor daemon (pid: 2705) on host fv-az670.
> {code}





[jira] [Updated] (FLINK-17771) "PyFlink end-to-end test" fails with "The output result: [] is not as expected: [2, 3, 4]!" on Java11

2020-05-21 Thread Wei Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhong updated FLINK-17771:
--
Attachment: image-2020-05-21-20-11-29-389.png

> "PyFlink end-to-end test" fails with "The output result: [] is not as 
> expected: [2, 3, 4]!" on Java11
> -
>
> Key: FLINK-17771
> URL: https://issues.apache.org/jira/browse/FLINK-17771
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
> Attachments: image-2020-05-21-20-11-07-626.png, 
> image-2020-05-21-20-11-29-389.png, image-2020-05-21-20-11-48-220.png, 
> image-2020-05-21-20-12-16-889.png
>
>





[jira] [Updated] (FLINK-17771) "PyFlink end-to-end test" fails with "The output result: [] is not as expected: [2, 3, 4]!" on Java11

2020-05-21 Thread Wei Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhong updated FLINK-17771:
--
Attachment: image-2020-05-21-20-11-48-220.png

> "PyFlink end-to-end test" fails with "The output result: [] is not as 
> expected: [2, 3, 4]!" on Java11
> -
>
> Key: FLINK-17771
> URL: https://issues.apache.org/jira/browse/FLINK-17771
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
> Attachments: image-2020-05-21-20-11-07-626.png, 
> image-2020-05-21-20-11-29-389.png, image-2020-05-21-20-11-48-220.png, 
> image-2020-05-21-20-12-16-889.png
>
>





[jira] [Updated] (FLINK-17771) "PyFlink end-to-end test" fails with "The output result: [] is not as expected: [2, 3, 4]!" on Java11

2020-05-21 Thread Wei Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhong updated FLINK-17771:
--
Attachment: image-2020-05-21-20-12-16-889.png

> "PyFlink end-to-end test" fails with "The output result: [] is not as 
> expected: [2, 3, 4]!" on Java11
> -
>
> Key: FLINK-17771
> URL: https://issues.apache.org/jira/browse/FLINK-17771
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
> Attachments: image-2020-05-21-20-11-07-626.png, 
> image-2020-05-21-20-11-29-389.png, image-2020-05-21-20-11-48-220.png, 
> image-2020-05-21-20-12-16-889.png
>
>





[GitHub] [flink] wuchong commented on pull request #12273: [FLINK-17850][jdbc] Fix PostgresCatalogITCase.testGroupByInsert() fails on CI

2020-05-21 Thread GitBox


wuchong commented on pull request #12273:
URL: https://github.com/apache/flink/pull/12273#issuecomment-632052087


   Thanks for the review @fpompermaier. This is the only commit that was 
reverted. I will merge this once the build passes. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on pull request #12273: [FLINK-17850][jdbc] Fix PostgresCatalogITCase.testGroupByInsert() fails on CI

2020-05-21 Thread GitBox


leonardBang commented on pull request #12273:
URL: https://github.com/apache/flink/pull/12273#issuecomment-632052580


   Thanks for the update @wuchong , LGTM







[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * ed086581e9bdc7cc60830294a3ddd1cf8d9e0bbe Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1994)
 
   * 3362d0c9984f7c9e20c05315582e982a8da10deb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1996)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12273: [FLINK-17850][jdbc] Fix PostgresCatalogITCase.testGroupByInsert() fails on CI

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12273:
URL: https://github.com/apache/flink/pull/12273#issuecomment-631848905


   
   ## CI report:
   
   * e0e2827194c166e64d0e66494c0cc84cca70db3e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1985)
 
   
   






