[jira] [Created] (FLINK-21427) Recovering from savepoint unable to commit

2021-02-21 Thread Kenzyme Le (Jira)
Kenzyme Le created FLINK-21427:
--

 Summary: Recovering from savepoint unable to commit
 Key: FLINK-21427
 URL: https://issues.apache.org/jira/browse/FLINK-21427
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.11.2
Reporter: Kenzyme Le


Hi,

I was able to stop the job and generate a savepoint successfully, but resuming the job from it caused repeated errors in the logs about being unable to commit to S3.

{code:java}
[] - Could not commit checkpoint.
com.facebook.presto.hive.s3.PrestoS3FileSystem$UnrecoverableS3OperationException: com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: D76810E62E37680C; S3 Extended Request ID: h5MRYXOZj5UnPEVjYtMOnUqXUSeJ784eFz3PEQjdT8B7499ZvV+3DHNLII8WLVbVhJ1/ujPG7Bo=), S3 Extended Request ID: h5MRYXOZj5UnPEVjYtMOnUqXUSeJ784eFz3PEQjdT8B7499ZvV+3DHNLII8WLVbVhJ1/ujPG7Bo= (Path: s3p://app/flink/checkpoints/prod/613240ac4a3ebb2e1a428bbd1a973433/taskowned/7a252162-f002-4afc-a45a-04b0a622c204)
    at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.lambda$openStream$1(PrestoS3FileSystem.java:917) ~[?:?]
    at com.facebook.presto.hive.RetryDriver.run(RetryDriver.java:138) ~[?:?]
    at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.openStream(PrestoS3FileSystem.java:902) ~[?:?]
    at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.openStream(PrestoS3FileSystem.java:887) ~[?:?]
    at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.seekStream(PrestoS3FileSystem.java:880) ~[?:?]
    at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.lambda$read$0(PrestoS3FileSystem.java:819) ~[?:?]
    at com.facebook.presto.hive.RetryDriver.run(RetryDriver.java:138) ~[?:?]
    at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3InputStream.read(PrestoS3FileSystem.java:818) ~[?:?]
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:252) ~[?:?]
    at java.io.BufferedInputStream.read(BufferedInputStream.java:271) ~[?:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:83) ~[?:?]
    at org.apache.flink.fs.s3presto.common.HadoopDataInputStream.read(HadoopDataInputStream.java:84) ~[?:?]
    at org.apache.flink.core.fs.FSDataInputStreamWrapper.read(FSDataInputStreamWrapper.java:51) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at java.io.DataInputStream.readByte(DataInputStream.java:270) ~[?:?]
    at org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:435) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.io.disk.InputViewIterator.next(InputViewIterator.java:43) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.runtime.util.ReusingMutableToRegularIteratorWrapper.hasNext(ReusingMutableToRegularIteratorWrapper.java:61) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at com.app.streams.kernel.sinks.JdbcBatchWriteAheadSink.sendValues(JdbcBatchWriteAheadSink.java:52) ~[blob_p-642a2d12ebc0fdfb4a406ab5e9ebff24a2edf335-29b861b5042a27f11426951b8a753b1f:?]
    at org.apache.flink.streaming.runtime.operators.GenericWriteAheadSink.notifyCheckpointComplete(GenericWriteAheadSink.java:233) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.notifyCheckpointComplete(StreamOperatorWrapper.java:107) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointComplete(SubtaskCheckpointCoordinatorImpl.java:283) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:987) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointCompleteAsync$10(StreamTask.java:958) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$12(StreamTask.java:974) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:78) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:79) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runSynchronousSavepointMailboxLoop(StreamTask.java:406) ~[flink-dist_2.12-1.11.2.jar:1.11.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamT
{code}

[jira] [Created] (FLINK-21428) DeclarativeSchedulerSlotSharingITCase.testSchedulingOfJobRequiringSlotSharing fail

2021-02-21 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-21428:
-

 Summary: DeclarativeSchedulerSlotSharingITCase.testSchedulingOfJobRequiringSlotSharing fail
 Key: FLINK-21428
 URL: https://issues.apache.org/jira/browse/FLINK-21428
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13510&view=logs&j=d8d26c26-7ec2-5ed2-772e-7a1a1eb8317c&t=be5fb08e-1ad7-563c-4f1a-a97ad4ce4865
{code:java}
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 23.313 s <<< FAILURE! - in org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase
[ERROR] testSchedulingOfJobRequiringSlotSharing(org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase)  Time elapsed: 20.683 s <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
    at org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase.runJob(DeclarativeSchedulerSlotSharingITCase.java:83)
    at org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase.testSchedulingOfJobRequiringSlotSharing(DeclarativeSchedulerSlotSharingITCase.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21429) JsonFileCompactionITCase>CompactionITCaseBase.testNonPartition

2021-02-21 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-21429:
-

 Summary: JsonFileCompactionITCase>CompactionITCaseBase.testNonPartition
 Key: FLINK-21429
 URL: https://issues.apache.org/jira/browse/FLINK-21429
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.12.3
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13533&view=logs&j=e9af9cde-9a65-5281-a58e-2c8511d36983&t=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c&l=11748
{code:java}
[ERROR] testNonPartition(org.apache.flink.formats.json.JsonFileCompactionITCase)  Time elapsed: 1.187 s <<< FAILURE!
java.lang.AssertionError: expected:<[0,0,0, 
0,0,0, 1,1,1, 1,1,1, 2,2,2, 2,2,2, 3,3,3, 3,3,3, 4,4,4, 4,4,4, 5,5,5, 5,5,5, 
6,6,6, 6,6,6, 7,7,7, 7,7,7, 8,8,8, 8,8,8, 9,9,9, 9,9,9, 10,0,0, 10,0,0, 11,1,1, 
11,1,1, 12,2,2, 12,2,2, 13,3,3, 13,3,3, 14,4,4, 14,4,4, 15,5,5, 15,5,5, 16,6,6, 
16,6,6, 17,7,7, 17,7,7, 18,8,8, 18,8,8, 19,9,9, 19,9,9, 20,0,0, 20,0,0, 21,1,1, 
21,1,1, 22,2,2, 22,2,2, 23,3,3, 23,3,3, 24,4,4, 24,4,4, 25,5,5, 25,5,5, 26,6,6, 
26,6,6, 27,7,7, 27,7,7, 28,8,8, 28,8,8, 29,9,9, 29,9,9, 30,0,0, 30,0,0, 31,1,1, 
31,1,1, 32,2,2, 32,2,2, 33,3,3, 33,3,3, 34,4,4, 34,4,4, 35,5,5, 35,5,5, 36,6,6, 
36,6,6, 37,7,7, 37,7,7, 38,8,8, 38,8,8, 39,9,9, 39,9,9, 40,0,0, 40,0,0, 41,1,1, 
41,1,1, 42,2,2, 42,2,2, 43,3,3, 43,3,3, 44,4,4, 44,4,4, 45,5,5, 45,5,5, 46,6,6, 
46,6,6, 47,7,7, 47,7,7, 48,8,8, 48,8,8, 49,9,9, 49,9,9, 50,0,0, 50,0,0, 51,1,1, 
51,1,1, 52,2,2, 52,2,2, 53,3,3, 53,3,3, 54,4,4, 54,4,4, 55,5,5, 55,5,5, 56,6,6, 
56,6,6, 57,7,7, 57,7,7, 58,8,8, 58,8,8, 59,9,9, 59,9,9, 60,0,0, 60,0,0, 61,1,1, 
61,1,1, 62,2,2, 62,2,2, 63,3,3, 63,3,3, 64,4,4, 64,4,4, 65,5,5, 65,5,5, 66,6,6, 
66,6,6, 67,7,7, 67,7,7, 68,8,8, 68,8,8, 69,9,9, 69,9,9, 70,0,0, 70,0,0, 71,1,1, 
71,1,1, 72,2,2, 72,2,2, 73,3,3, 73,3,3, 74,4,4, 74,4,4, 75,5,5, 75,5,5, 76,6,6, 
76,6,6, 77,7,7, 77,7,7, 78,8,8, 78,8,8, 79,9,9, 79,9,9, 80,0,0, 80,0,0, 81,1,1, 
81,1,1, 82,2,2, 82,2,2, 83,3,3, 83,3,3, 84,4,4, 84,4,4, 85,5,5, 85,5,5, 86,6,6, 
86,6,6, 87,7,7, 87,7,7, 88,8,8, 88,8,8, 89,9,9, 89,9,9, 90,0,0, 90,0,0, 91,1,1, 
91,1,1, 92,2,2, 92,2,2, 93,3,3, 93,3,3, 94,4,4, 94,4,4, 95,5,5, 95,5,5, 96,6,6, 
96,6,6, 97,7,7, 97,7,7, 98,8,8, 98,8,8, 99,9,9, 99,9,9]> but was:<[0,0,0, 
0,0,0, 1,1,1, 1,1,1, 2,2,2, 3,3,3, 3,3,3, 4,4,4, 4,4,4, 5,5,5, 6,6,6, 6,6,6, 
7,7,7, 7,7,7, 8,8,8, 9,9,9, 9,9,9, 10,0,0, 10,0,0, 11,1,1, 12,2,2, 12,2,2, 
13,3,3, 13,3,3, 14,4,4, 15,5,5, 15,5,5, 16,6,6, 16,6,6, 17,7,7, 18,8,8, 18,8,8, 
19,9,9, 19,9,9, 20,0,0, 21,1,1, 21,1,1, 22,2,2, 22,2,2, 23,3,3, 24,4,4, 24,4,4, 
25,5,5, 25,5,5, 26,6,6, 27,7,7, 27,7,7, 28,8,8, 28,8,8, 29,9,9, 30,0,0, 30,0,0, 
31,1,1, 31,1,1, 32,2,2, 33,3,3, 33,3,3, 34,4,4, 34,4,4, 35,5,5, 36,6,6, 36,6,6, 
37,7,7, 37,7,7, 38,8,8, 39,9,9, 39,9,9, 40,0,0, 40,0,0, 41,1,1, 42,2,2, 42,2,2, 
43,3,3, 43,3,3, 44,4,4, 45,5,5, 45,5,5, 46,6,6, 46,6,6, 47,7,7, 48,8,8, 48,8,8, 
49,9,9, 49,9,9, 50,0,0, 51,1,1, 51,1,1, 52,2,2, 52,2,2, 53,3,3, 54,4,4, 54,4,4, 
55,5,5, 55,5,5, 56,6,6, 57,7,7, 57,7,7, 58,8,8, 58,8,8, 59,9,9, 60,0,0, 60,0,0, 
61,1,1, 61,1,1, 62,2,2, 63,3,3, 63,3,3, 64,4,4, 64,4,4, 65,5,5, 66,6,6, 66,6,6, 
67,7,7, 67,7,7, 68,8,8, 69,9,9, 69,9,9, 70,0,0, 70,0,0, 71,1,1, 72,2,2, 72,2,2, 
73,3,3, 73,3,3, 74,4,4, 75,5,5, 75,5,5, 76,6,6, 76,6,6, 77,7,7, 78,8,8, 78,8,8, 
79,9,9, 79,9,9, 80,0,0, 81,1,1, 81,1,1, 82,2,2, 82,2,2, 83,3,3, 84,4,4, 84,4,4, 
85,5,5, 85,5,5, 86,6,6, 87,7,7, 87,7,7, 88,8,8, 88,8,8, 89,9,9, 90,0,0, 90,0,0, 
91,1,1, 91,1,1, 92,2,2, 93,3,3, 93,3,3, 94,4,4, 94,4,4, 95,5,5, 96,6,6, 96,6,6, 
97,7,7, 97,7,7, 98,8,8, 99,9,9, 99,9,9]>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:834)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:144)
    at org.apache.flink.table.planner.runtime.stream.sql.CompactionITCaseBase.assertIterator(CompactionITCaseBase.java:134)
    at org.apache.flink.table.planner.runtime.stream.sql.CompactionITCaseBase.innerTestNonPartition(CompactionITCaseBase.java:109)
    at org.apache.flink.table.planner.runtime.stream.sql.CompactionITCaseBase.testNonPartition(CompactionITCaseBase.java:101)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(Invoke
{code}

[jira] [Created] (FLINK-21430) Appear append data when flink sql sink mysql on key conflict

2021-02-21 Thread Yu Wang (Jira)
Yu Wang created FLINK-21430:
---

 Summary: Appear append data when flink sql sink mysql on key conflict
 Key: FLINK-21430
 URL: https://issues.apache.org/jira/browse/FLINK-21430
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.12.0
Reporter: Yu Wang



{code:java}
// Some comments here
public String getFoo()
{
return foo;
}
{code}
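For context on the expected behavior named in the summary: when the sink table declares a primary key, a MySQL sink is normally expected to upsert via INSERT ... ON DUPLICATE KEY UPDATE rather than append duplicate rows. Below is a minimal, hypothetical sketch of assembling such a statement; the table and column names are illustrative and this is not the actual Flink JDBC dialect code.

```java
// Sketch: assembling a MySQL upsert statement for a keyed sink table.
// Names are hypothetical; the real Flink JDBC dialect differs in detail.
public class UpsertSqlSketch {
    static String upsertSql(String table, String[] cols) {
        String colList = String.join(", ", cols);
        String placeholders = "?, ".repeat(cols.length - 1) + "?";
        StringBuilder updates = new StringBuilder();
        for (int i = 0; i < cols.length; i++) {
            if (i > 0) updates.append(", ");
            // On key conflict, overwrite each column with the incoming value.
            updates.append(cols[i]).append("=VALUES(").append(cols[i]).append(")");
        }
        return "INSERT INTO " + table + " (" + colList + ") VALUES (" + placeholders + ")"
                + " ON DUPLICATE KEY UPDATE " + updates;
    }

    public static void main(String[] args) {
        System.out.println(upsertSql("orders", new String[]{"id", "amount"}));
    }
}
```

If the connector emitted only the plain INSERT part of this statement, conflicting keys would surface as appended rows, which matches the symptom in the summary.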






[jira] [Created] (FLINK-21431) UpsertKafkaTableITCase.testTemporalJoin hang

2021-02-21 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-21431:
-

 Summary: UpsertKafkaTableITCase.testTemporalJoin hang
 Key: FLINK-21431
 URL: https://issues.apache.org/jira/browse/FLINK-21431
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.13.0
Reporter: Guowei Ma


This test case hung for almost 3 hours:

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13543&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=f266c805-9429-58ed-2f9e-482e7b82f58b
{code:java}
Test testTemporalJoin[format = csv](org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase) is running.

23:08:43,259 [ main] INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl [] - Creating topic users_csv
23:08:45,303 [ main] WARN  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Property [transaction.timeout.ms] not specified. Setting it to 360 ms
23:08:45,430 [ChangelogNormalize(key=[user_id]) -> Calc(select=[user_id, user_name, region, CAST(modification_time) AS timestamp]) -> Sink: Sink(table=[default_catalog.default_database.users_csv], fields=[user_id, user_name, region, timestamp]) (1/1)#0] WARN  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Using AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE semantic.
23:08:45,438 [ChangelogNormalize(key=[user_id]) -> Calc(select=[user_id, user_name, region, CAST(modification_time) AS timestamp]) -> Sink: Sink(table=[default_catalog.default_database.users_csv], fields=[user_id, user_name, region, timestamp]) (1/1)#0] INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting FlinkKafkaInternalProducer (1/1) to produce into default topic users_csv
23:08:45,791 [Source: TableSourceScan(table=[[default_catalog, default_database, users_csv, watermark=[CAST($3):TIMESTAMP(3)]]], fields=[user_id, user_name, region, timestamp]) (1/1)#0] INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - Consumer subtask 0 has no restore state.
23:08:45,810 [Source: TableSourceScan(table=[[default_catalog, default_database, users_csv, watermark=[CAST($3):TIMESTAMP(3)]]], fields=[user_id, user_name, region, timestamp]) (1/1)#0] INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - Consumer subtask 0 will start reading the following 2 partitions from the earliest offsets: [KafkaTopicPartition{topic='users_csv', partition=1}, KafkaTopicPartition{topic='users_csv', partition=0}]
23:08:45,825 [Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, users_csv, watermark=[CAST($3):TIMESTAMP(3)]]], fields=[user_id, user_name, region, timestamp]) (1/1)#0] INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - Consumer subtask 0 creating fetcher with offsets {KafkaTopicPartition{topic='users_csv', partition=1}=-915623761775, KafkaTopicPartition{topic='users_csv', partition=0}=-915623761775}.
##[error]The operation was canceled.
{code}
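One incidental note on the log above: the WARN about transaction.timeout.ms appears because the producer property was not set explicitly. A hedged sketch of setting it on the producer config (broker address and timeout value are illustrative assumptions, unrelated to the hang itself):

```java
import java.util.Properties;

public class KafkaTxTimeoutSketch {
    // Sketch: setting transaction.timeout.ms explicitly silences the WARN
    // seen in the log. The values below are examples, not values from this build.
    static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.setProperty("transaction.timeout.ms", "900000");    // 15 min, example value
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("transaction.timeout.ms"));
    }
}
```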





[VOTE] FLIP-163: SQL Client Improvements

2021-02-21 Thread Shengkai Fang
Hi devs

It seems we have reached consensus on FLIP-163[1] in the discussion[2]. So
I'd like to start the vote for this FLIP.

Please vote +1 to approve the FLIP, or -1 with a comment.

The vote will be open for 72 hours, until Feb. 25 2021 12:00 AM UTC+8,
unless there's an objection.

Best,
Shengkai


Re: [VOTE] FLIP-152: Hive Query Syntax Compatibility

2021-02-21 Thread Rui Li
Hi all,

The vote for FLIP-152 has been open for over 72 hours, and there are 3 binding votes from:
- Kurt
- Jark
- Godfrey

There's no disapproval vote. Therefore the vote is closed and FLIP-152 has
been accepted.

Thanks to everyone who has helped with reviewing and voting for this FLIP.

On Fri, Feb 19, 2021 at 3:25 PM godfrey he  wrote:

> +1
>
> Best,
> Godfrey
>
> Jark Wu  wrote on Mon, Feb 8, 2021 at 11:50 AM:
>
> > Thanks for driving this.
> >
> > +1
> >
> > Best,
> > Jark
> >
> > On Mon, 8 Feb 2021 at 09:47, Kurt Young  wrote:
> >
> > > +1
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Sun, Feb 7, 2021 at 7:24 PM Rui Li  wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > I think we have reached some consensus on FLIP-152 [1] in the
> > discussion
> > > > thread [2]. So I'd like to start the vote for this FLIP.
> > > >
> > > > The vote will be open for 72 hours, until Feb. 10 2021 01:00 PM UTC,
> > > unless
> > > > there's an objection.
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-152%3A+Hive+Query+Syntax+Compatibility
> > > > [2]
> > > >
> > > >
> > >
> >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-152-Hive-Query-Syntax-Compatibility-td46928.html
> > > >
> > > > --
> > > > Best regards!
> > > > Rui Li
> > > >
> > >
> >
>


-- 
Best regards!
Rui Li


Re: [VOTE] FLIP-163: SQL Client Improvements

2021-02-21 Thread Jark Wu
+1 (binding)

Best,
Jark

On Mon, 22 Feb 2021 at 11:06, Shengkai Fang  wrote:

> Hi devs
>
> It seems we have reached consensus on FLIP-163[1] in the discussion[2]. So
> I'd like to start the vote for this FLIP.
>
> Please vote +1 to approve the FLIP, or -1 with a comment.
>
> The vote will be open for 72 hours, until Feb. 25 2021 12:00 AM UTC+8,
> unless there's an objection.
>
> Best,
> Shengkai
>


[ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang

2021-02-21 Thread Dian Fu
Hi all,

On behalf of the PMC, I’m very happy to announce that Wei Zhong and Xingbo 
Huang have accepted the invitation to become Flink committers.

- Wei Zhong mainly works on PyFlink and has driven several important features 
in PyFlink, e.g. Python UDF dependency management (FLIP-78), Python UDF support 
in SQL (FLIP-106, FLIP-114), and Python UDAF support (FLIP-139). He contributed 
the first PR of PyFlink and has contributed 100+ commits since then.

- Xingbo Huang also works mainly on PyFlink and has driven several important 
features there, e.g. performance optimization for Python UDFs and Python UDAFs 
(FLIP-121, FLINK-16747, FLINK-19236), Pandas UDAF support (FLIP-137), Python 
UDTF support (FLINK-14500), and row-based operations support in the Python 
Table API (FLINK-20479). He is also active in answering questions on the user 
mailing list, helping with release checks, monitoring the status of the Azure 
pipeline, etc.

Please join me in congratulating Wei Zhong and Xingbo Huang for becoming Flink 
committers!

Regards,
Dian

Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang

2021-02-21 Thread Xintong Song
Congratulations, Wei & Xingbo~! Welcome aboard.

Thank you~

Xintong Song



On Mon, Feb 22, 2021 at 11:48 AM Dian Fu  wrote:

> Hi all,
>
> On behalf of the PMC, I’m very happy to announce that Wei Zhong and Xingbo
> Huang have accepted the invitation to become Flink committers.
>
> - Wei Zhong mainly works on PyFlink and has driven several important
> features in PyFlink, e.g. Python UDF dependency management (FLIP-78),
> Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF support
> (FLIP-139), etc. He has contributed the first PR of PyFlink and have
> contributed 100+ commits since then.
>
> - Xingbo Huang's contribution is also mainly in PyFlink and has driven
> several important features in PyFlink, e.g. performance optimizing for
> Python UDF and Python UDAF (FLIP-121, FLINK-16747, FLINK-19236), Pandas
> UDAF support (FLIP-137), Python UDTF support (FLINK-14500), row-based
> Operations support in Python Table API (FLINK-20479), etc. He is also
> actively helping on answering questions in the user mailing list, helping
> on the release check, monitoring the status of the azure pipeline, etc.
>
> Please join me in congratulating Wei Zhong and Xingbo Huang for becoming
> Flink committers!
>
> Regards,
> Dian


[jira] [Created] (FLINK-21432) Web UI -- Error - {"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}

2021-02-21 Thread Bhagi (Jira)
Bhagi created FLINK-21432:
-

 Summary: Web UI -- Error - {"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}
 Key: FLINK-21432
 URL: https://issues.apache.org/jira/browse/FLINK-21432
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.12.0
 Environment: debian
Reporter: Bhagi
 Fix For: 1.12.0
 Attachments: image-2021-02-22-10-39-06-180.png

The Web UI is throwing this error:

{"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}

Please find the JobManager logs in the attachment.

 

!image-2021-02-22-10-39-06-180.png!





Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang

2021-02-21 Thread Till Rohrmann
Congratulations Wei & Xingbo. Great to have you as committers in the
community now.

Cheers,
Till

On Mon, Feb 22, 2021 at 5:08 AM Xintong Song  wrote:

> Congratulations, Wei & Xingbo~! Welcome aboard.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Mon, Feb 22, 2021 at 11:48 AM Dian Fu  wrote:
>
> > Hi all,
> >
> > On behalf of the PMC, I’m very happy to announce that Wei Zhong and
> Xingbo
> > Huang have accepted the invitation to become Flink committers.
> >
> > - Wei Zhong mainly works on PyFlink and has driven several important
> > features in PyFlink, e.g. Python UDF dependency management (FLIP-78),
> > Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF support
> > (FLIP-139), etc. He has contributed the first PR of PyFlink and have
> > contributed 100+ commits since then.
> >
> > - Xingbo Huang's contribution is also mainly in PyFlink and has driven
> > several important features in PyFlink, e.g. performance optimizing for
> > Python UDF and Python UDAF (FLIP-121, FLINK-16747, FLINK-19236), Pandas
> > UDAF support (FLIP-137), Python UDTF support (FLINK-14500), row-based
> > Operations support in Python Table API (FLINK-20479), etc. He is also
> > actively helping on answering questions in the user mailing list, helping
> > on the release check, monitoring the status of the azure pipeline, etc.
> >
> > Please join me in congratulating Wei Zhong and Xingbo Huang for becoming
> > Flink committers!
> >
> > Regards,
> > Dian
>