[jira] [Created] (FLINK-18621) Simplify the methods of Executor interface in sql client

2020-07-17 Thread godfrey he (Jira)
godfrey he created FLINK-18621:
--

 Summary: Simplify the methods of Executor interface in sql client
 Key: FLINK-18621
 URL: https://issues.apache.org/jira/browse/FLINK-18621
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Client
Reporter: godfrey he
 Fix For: 1.12.0


Now that {{TableEnvironment#executeSql}} has been introduced, many methods in the 
{{Executor}} interface can be replaced with calls to {{TableEnvironment#executeSql}}. 
Those methods include:
listCatalogs, listDatabases, createTable, dropTable, listTables, listFunctions, 
useCatalog, useDatabase, getTableSchema (use {{DESCRIBE xx}})
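
For example, a call site that listed catalogs through a dedicated {{Executor}} 
method could instead route through {{executeSql}} (a minimal sketch; {{tableEnv}} 
stands for the session's {{TableEnvironment}}):

{code:java}
// Before (dedicated method): executor.listCatalogs(sessionId);
// After: route through executeSql and read the result rows.
TableResult result = tableEnv.executeSql("SHOW CATALOGS");
try (CloseableIterator<Row> it = result.collect()) {
    it.forEachRemaining(row -> System.out.println(row.getField(0)));
}
{code}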




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18622) Add limit method in the Python Table API

2020-07-17 Thread Dian Fu (Jira)
Dian Fu created FLINK-18622:
---

 Summary: Add limit method in the Python Table API
 Key: FLINK-18622
 URL: https://issues.apache.org/jira/browse/FLINK-18622
 Project: Flink
  Issue Type: Task
  Components: API / Python
Affects Versions: 1.12.0
Reporter: Dian Fu
 Fix For: 1.12.0


Table.limit was introduced in FLINK-18569. It would be great if we could also 
support it in the Python Table API.
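
For reference, a sketch of the Java-side usage that the Python API would mirror 
(assuming the {{limit(fetch)}} and {{limit(offset, fetch)}} signatures added by 
FLINK-18569; {{table}} and the column "a" are placeholders):

{code:java}
// $ comes from org.apache.flink.table.api.Expressions.
Table first3 = table.orderBy($("a")).limit(3);     // first 3 rows
Table page = table.orderBy($("a")).limit(10, 3);   // 3 rows after skipping 10
{code}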



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18623) CREATE TEMPORARY TABLE not documented

2020-07-17 Thread Fabian Hueske (Jira)
Fabian Hueske created FLINK-18623:
-

 Summary: CREATE TEMPORARY TABLE not documented
 Key: FLINK-18623
 URL: https://issues.apache.org/jira/browse/FLINK-18623
 Project: Flink
  Issue Type: Bug
  Components: Documentation, Table SQL / API
Affects Versions: 1.11.0
Reporter: Fabian Hueske


The {{CREATE TEMPORARY TABLE}} syntax that was added with FLINK-15591 is not 
included in the [{{CREATE TABLE}} 
documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sql/create.html#create-table]
 and therefore not visible to our users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release 1.11.1, release candidate #1

2020-07-17 Thread Dian Fu
Generally, I tend to continue the vote if this is not a blocking issue for the 
following reasons:
- As discussed in the discussion thread[1], this release is mainly to fix the 
CDC bug to provide a complete CDC feature.
- We are still at an early stage after the 1.11.0 release, so I think there 
will always be more issues raised.

However, I'm glad to hear more opinions. We can cancel the current RC if most 
people think so.

Besides, we can still make it into 1.11.1 if we find some other blocking issue.

[1] 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Releasing-Flink-1-11-1-soon-td43065.html
 

> On Jul 17, 2020, at 2:09 PM, Rui Li wrote:
> 
> Hey,
> 
> I understand it's a little bit late to bring this up, but we have a Hive
> dialect bug for which the PR [1] is ready to merge. So I wonder whether we
> could include it in 1.11.1? The issue is not a blocker, but I believe it's
> good to have in the bug fix release.
> 
> [1] https://github.com/apache/flink/pull/12888
> 
> On Wed, Jul 15, 2020 at 6:02 PM Dian Fu  wrote:
> 
>> Hi everyone,
>> 
>> Please review and vote on the release candidate #1 for the version 1.11.1,
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> 
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release and binary convenience releases to be
>> deployed to dist.apache.org [2], which are signed with the key with
>> fingerprint 6B6291A8502BA8F0913AE04DDEB95B05BF075300 [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag "release-1.11.1-rc1" [5],
>> * website pull request listing the new release and adding announcement
>> blog post [6].
>> 
>> The vote will be open for at least 72 hours. It is adopted by majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Thanks,
>> Dian
>> 
>> [1]
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348323
>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.11.1-rc1/
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1378/
>> [5]
>> https://github.com/apache/flink/commit/7eb514a59f6fd117c3535ec4bebc40a375f30b63
>> [6] https://github.com/apache/flink-web/pull/359
> 
> 
> 
> -- 
> Best regards!
> Rui Li



[jira] [Created] (FLINK-18624) Document CREATE TEMPORARY TABLE

2020-07-17 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-18624:


 Summary: Document CREATE TEMPORARY TABLE
 Key: FLINK-18624
 URL: https://issues.apache.org/jira/browse/FLINK-18624
 Project: Flink
  Issue Type: New Feature
  Components: Documentation
Reporter: Jingsong Lee
 Fix For: 1.11.1






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: How to write junit testcases for KeyedBroadCastProcess Function

2020-07-17 Thread David Anderson
You could approach testing this in the same way that Flink has implemented
its unit tests for KeyedBroadcastProcessFunctions, which is to use
a KeyedTwoInputStreamOperatorTestHarness with
a CoBroadcastWithKeyedOperator. To learn how to use Flink's test harnesses,
see [1], and for an example of testing a KeyedBroadcastProcessFunction, see
[2].

[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/testing.html#unit-testing-stateful-or-timely-udfs--custom-operators
[2]
https://github.com/apache/flink/blob/release-1.11/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/co/CoBroadcastWithKeyedOperatorTest.java
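
Roughly, the wiring looks like this (a minimal sketch, assuming your function is
called MyKeyedBroadcastProcessFunction with String key, inputs, and output, and
that it uses a single MapStateDescriptor — adapt the type parameters to yours;
the harness classes come from the flink-streaming-java test-jar):

import java.util.Collections;

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.operators.co.CoBroadcastWithKeyedOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.flink.streaming.util.KeyedTwoInputStreamOperatorTestHarness;

// The descriptor must match the one your function uses for broadcast state.
MapStateDescriptor<String, String> ruleDescriptor =
        new MapStateDescriptor<>("rules", Types.STRING, Types.STRING);

CoBroadcastWithKeyedOperator<String, String, String, String> operator =
        new CoBroadcastWithKeyedOperator<>(
                new MyKeyedBroadcastProcessFunction(),
                Collections.singletonList(ruleDescriptor));

KeyedTwoInputStreamOperatorTestHarness<String, String, String, String> harness =
        new KeyedTwoInputStreamOperatorTestHarness<>(
                operator, in -> in, in -> in, Types.STRING);

harness.open();
harness.processElement2(new StreamRecord<>("rule-1", 0L));   // broadcast input
harness.processElement1(new StreamRecord<>("event-1", 1L));  // keyed input
// assert on harness.getOutput() ...
harness.close();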

Best,
David

On Wed, Jul 15, 2020 at 8:32 PM bujjirahul45  wrote:

> Hi,
>
> I am new to Flink and I am trying to write JUnit test cases to test a
> KeyedBroadCastProcessFunction. Below is my code. I am currently calling the
> getDataStreamOutput method in the TestUtils class, passing input data and
> pattern rules to the method. Once the input data is evaluated against the list
> of pattern rules, and the input data satisfies a condition, I get the signal,
> call the sink function, and return the output data as a string from the
> getDataStreamOutput method.
>
>  @Test
> public void testCompareInputAndOutputDataForInputSignal() throws
> Exception {
>     Assertions.assertEquals(sampleInputSignal,
>             TestUtils.getDataStreamOutput(
>                     inputSignal,
>                     patternRules));
> }
>
>
> public static String getDataStreamOutput(JSONObject input,
>         Map<String, String> patternRules) throws Exception {
>
>     env.setParallelism(1);
>
>     DataStream<JSONObject> inputSignal = env.fromElements(input);
>
>     DataStream<Map<String, String>> rawPatternStream =
>             env.fromElements(patternRules);
>
>     // Generate a key/value pair per pattern, where the key is the
>     // pattern name and the value is the pattern condition
>     DataStream<Tuple2<String, Map<String, String>>> patternRuleStream =
>             rawPatternStream.flatMap(
>                     new FlatMapFunction<Map<String, String>,
>                             Tuple2<String, Map<String, String>>>() {
>                 @Override
>                 public void flatMap(Map<String, String> patternRules,
>                         Collector<Tuple2<String, Map<String, String>>> out)
>                         throws Exception {
>                     for (Map.Entry<String, String> stringEntry :
>                             patternRules.entrySet()) {
>                         JSONObject jsonObject =
>                                 new JSONObject(stringEntry.getValue());
>                         Map<String, String> map = new HashMap<>();
>                         for (String key : jsonObject.keySet()) {
>                             String value = jsonObject.get(key).toString();
>                             map.put(key, value);
>                         }
>                         out.collect(new Tuple2<>(stringEntry.getKey(), map));
>                     }
>                 }
>             });
>
>     BroadcastStream<Tuple2<String, Map<String, String>>> patternRuleBroadcast =
>             patternRuleStream.broadcast(patternRuleDescriptor);
>
>     DataStream<Tuple2<String, JSONObject>> validSignal = inputSignal
>             .map(new MapFunction<JSONObject, Tuple2<String, JSONObject>>() {
>                 @Override
>                 public Tuple2<String, JSONObject> map(JSONObject inputSignal)
>                         throws Exception {
>                     String source = inputSignal.getString("source");
>                     return new Tuple2<>(source, inputSignal);
>                 }
>             })
>             .keyBy(0)
>             .connect(patternRuleBroadcast)
>             .process(new MyKeyedBroadCastProcessFunction());
>
>     validSignal.map(new MapFunction<Tuple2<String, JSONObject>, JSONObject>() {
>         @Override
>         public JSONObject map(Tuple2<String, JSONObject> inputSignal)
>                 throws Exception {
>             return inputSignal.f1;
>         }
>     }).addSink(new getDataStreamOutput());
>
>     env.execute("TestFlink");
>
>     return getDataStreamOutput.dataStreamOutput;
> }
>
>
> @SuppressWarnings("serial")
> public static final class getDataStreamOutput implements
>         SinkFunction<JSONObject> {
>     public static String dataStreamOutput;
>
>     @Override
>     public void invoke(JSONObject inputSignal) throws Exception {
>         dataStreamOutput = inputSignal.toString();
>     }
> }
> I need to test different inputs with the same broadcast rules, but each time I
> call this function it does the whole process again from the beginning: take the
> input signal and broadcast the data. Is there a way I can broadcast once and
> keep sending inputs to the method? I explored whether I could use a
> CoFlatMapFunction, something like below, to combine the data streams and keep
> sending the input rules while the method is running, but for that one of the
> data streams has to keep getting data from a Kafka topic, which again would
> overburden the method with loading the Kafka utils and server.
>
>  DataStream<JSONObject> inputSignalFromKafka =
>         env.addSource(inputSignalKafka);
>
> DataStream

[jira] [Created] (FLINK-18625) Maintain redundant taskmanagers to speed up failover

2020-07-17 Thread Liu (Jira)
Liu created FLINK-18625:
---

 Summary: Maintain redundant taskmanagers to speed up failover
 Key: FLINK-18625
 URL: https://issues.apache.org/jira/browse/FLINK-18625
 Project: Flink
  Issue Type: New Feature
Reporter: Liu


When a Flink job fails because its taskmanagers were killed, it requests new 
containers when restarting. Requesting new containers can be very slow; sometimes 
it takes dozens of seconds or even more. The reasons vary: for example, YARN and 
HDFS may be slow, or machine performance may be poor. In some production 
scenarios the SLA is strict and failover must stay within seconds.

 

To speed up the recovery process, we can maintain redundant taskmanagers in 
advance. When job restarts, it can use the redundant taskmanagers at once 
instead of requesting new taskmanagers.

 

The implementation can be done in SlotManagerImpl. Below is a brief description 
(see the sketch after the list):
 # In the constructor, initialize redundantTaskmanagerNum from the configuration.
 # In the start() method, allocate the redundant taskmanagers.
 # In the start() method, change taskManagerTimeoutCheck() to 
redundantTaskmanagerCheck().
 # In redundantTaskmanagerCheck(), manage the redundant taskmanagers and the 
timed-out taskmanagers. The number of idle taskmanagers must not be less than 
redundantTaskmanagerNum:

 * If it is less, allocate from the ResourceManager until they are equal.
 * If it is more, release timed-out taskmanagers but keep at least 
redundantTaskmanagerNum idle taskmanagers.
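
A rough sketch of the proposed check (illustrative only; the names and helpers 
below are placeholders, not existing SlotManagerImpl API):

{code:java}
// Illustrative sketch: keep the pool of idle taskmanagers at the configured floor.
private void redundantTaskmanagerCheck() {
    int idle = numberOfIdleTaskManagers();
    if (idle < redundantTaskmanagerNum) {
        // Allocate the shortfall from the ResourceManager.
        for (int i = idle; i < redundantTaskmanagerNum; i++) {
            allocateRedundantTaskManager();
        }
    } else {
        // Release timed-out taskmanagers, but never drop below the redundancy floor.
        releaseTimedOutTaskManagers(idle - redundantTaskmanagerNum);
    }
}
{code}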



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18626) the result of aggregate SQL on streaming cannot be written to upsert table sink

2020-07-17 Thread gaoling ma (Jira)
gaoling ma created FLINK-18626:
--

 Summary: the result of aggregate SQL on streaming cannot be written 
to upsert table sink 
 Key: FLINK-18626
 URL: https://issues.apache.org/jira/browse/FLINK-18626
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC, Table SQL / API
Affects Versions: 1.11.0
Reporter: gaoling ma



{code:java}
StreamExecutionEnvironment bsEnv = 
StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings bsSettings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(bsEnv, 
bsSettings);
bsEnv.setParallelism(1);
..
bsTableEnv.executeSql("CREATE TABLE aaa(\n" +
"`area_code`VARCHAR,\n" +
"`stat_date`DATE,\n" +
"`index`BIGINT,\n" +
"PRIMARY KEY (area_code, stat_date) NOT ENFORCED" +
") WITH (\n" +
"  'connector'  = 'jdbc',\n" +
"  'url'= 'jdbc:mysql://***/laowufp_data_test',\n" +
"  'table-name' = 'aaa',\n" +
"  'driver' = 'com.mysql.cj.jdbc.Driver',\n" +
"  'username'   = '***',\n" +
"  'password'   = '***'\n" +
")");

bsTableEnv.executeSql("INSERT INTO aaa SELECT area_code, 
CURRENT_DATE AS stat_date, count(*) AS index FROM bbb WHERE is_record = '是' 
GROUP BY area_code");
{code}
When I write the aggregate SQL results into the upsert JDBC table sink, the 
program automatically exits with no hint. The aggregate result is supposed to be 
a retract stream, but another question is how to change the retract stream into 
an upsert stream. Or is there a better way to continuously update the aggregate 
SQL results into a JDBC table? Your comments are appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18627) Get unmatched filter method records to side output

2020-07-17 Thread Roey Shem Tov (Jira)
Roey Shem Tov created FLINK-18627:
-

 Summary: Get unmatched filter method records to side output
 Key: FLINK-18627
 URL: https://issues.apache.org/jira/browse/FLINK-18627
 Project: Flink
  Issue Type: New Feature
  Components: API / DataStream
Reporter: Roey Shem Tov
 Fix For: 1.12.0


Records that do not match a filter function should somehow be sent to a side output.

Example:

 
{code:java}
datastream
.filter(i->i%2==0)
.sideOutput(oddNumbersSideOutput);
{code}
 

 

That way we can filter multiple times and send the filtered-out records to our 
side output instead of dropping them immediately; it can be useful in many ways.
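
For comparison, this can already be emulated with a {{ProcessFunction}} and an 
{{OutputTag}} (a minimal sketch; the stream and tag names are illustrative):

{code:java}
final OutputTag<Integer> oddNumbersSideOutput =
        new OutputTag<Integer>("odd-numbers") {};

SingleOutputStreamOperator<Integer> evenNumbers = datastream
        .process(new ProcessFunction<Integer, Integer>() {
            @Override
            public void processElement(Integer value, Context ctx, Collector<Integer> out) {
                if (value % 2 == 0) {
                    out.collect(value);                       // matched records
                } else {
                    ctx.output(oddNumbersSideOutput, value);  // unmatched records
                }
            }
        });

DataStream<Integer> oddNumbers = evenNumbers.getSideOutput(oddNumbersSideOutput);
{code}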

 

What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18628) Invalid error message for overloaded methods with same parameter name

2020-07-17 Thread Timo Walther (Jira)
Timo Walther created FLINK-18628:


 Summary: Invalid error message for overloaded methods with same 
parameter name
 Key: FLINK-18628
 URL: https://issues.apache.org/jira/browse/FLINK-18628
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Timo Walther
Assignee: Timo Walther


If a function has overloaded evaluation methods whose parameters share the same 
names, the result is a confusing error message in which the argument types are 
missing.
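
For example, a function along these lines (an illustrative sketch, not the exact 
function from the report) triggers the problem:

{code:java}
public static class MyScalarFunc extends ScalarFunction {
    // Two eval() overloads whose parameters share the name "a".
    public String eval(Integer a) {
        return String.valueOf(a);
    }

    public String eval(String a) {
        return a;
    }
}
{code}

Calling it with invalid arguments then produces a message that shows only the 
parameter name, without the expected types: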

{code}
Caused by: org.apache.flink.table.api.ValidationException: Invalid input 
arguments. Expected signatures are:
test-catalog.TEST_DB.myScalarFunc(a => )
at 
org.apache.flink.table.types.inference.TypeInferenceUtil.createInvalidInputException(TypeInferenceUtil.java:190)
at 
org.apache.flink.table.planner.functions.inference.TypeInferenceOperandChecker.checkOperandTypesOrError(TypeInferenceOperandChecker.java:131)
at 
org.apache.flink.table.planner.functions.inference.TypeInferenceOperandChecker.checkOperandTypes(TypeInferenceOperandChecker.java:89)
... 79 common frames omitted
Caused by: org.apache.flink.table.api.ValidationException: Invalid input 
arguments.
at 
org.apache.flink.table.types.inference.TypeInferenceUtil.inferInputTypes(TypeInferenceUtil.java:467)
at 
org.apache.flink.table.types.inference.TypeInferenceUtil.adaptArguments(TypeInferenceUtil.java:123)
at 
org.apache.flink.table.types.inference.TypeInferenceUtil.adaptArguments(TypeInferenceUtil.java:102)
at 
org.apache.flink.table.planner.functions.inference.TypeInferenceOperandChecker.checkOperandTypesOrError(TypeInferenceOperandChecker.java:126)
... 80 common frames omitted
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18629) ConnectedStreams#keyBy can not derive key TypeInformation for lambda KeySelectors

2020-07-17 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-18629:


 Summary: ConnectedStreams#keyBy can not derive key TypeInformation 
for lambda KeySelectors
 Key: FLINK-18629
 URL: https://issues.apache.org/jira/browse/FLINK-18629
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.11.0, 1.10.0, 1.12.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.10.2, 1.12.0, 1.11.2


Following test fails:
{code}
@Test
public void testKeyedConnectedStreamsType() {
    StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

    DataStreamSource<Integer> stream1 = env.fromElements(1, 2);
    DataStreamSource<Integer> stream2 = env.fromElements(1, 2);

    ConnectedStreams<Integer, Integer> connectedStreams =
            stream1.connect(stream2)
                    .keyBy(v -> v, v -> v);

    KeyedStream<Integer, ?> firstKeyedInput =
            (KeyedStream<Integer, ?>) connectedStreams.getFirstInput();
    KeyedStream<Integer, ?> secondKeyedInput =
            (KeyedStream<Integer, ?>) connectedStreams.getSecondInput();

    assertThat(firstKeyedInput.getKeyType(), equalTo(Types.INT));
    assertThat(secondKeyedInput.getKeyType(), equalTo(Types.INT));
}
{code}

The problem is that the wildcard type is evaluated as {{Object}} for lambdas, 
which in turn produces {{GenericTypeInfo<Object>}} for any KeySelector provided 
as a lambda.

I suggest changing the method signature to:
{code}
public <KEY> ConnectedStreams<IN1, IN2> keyBy(
        KeySelector<IN1, KEY> keySelector1,
        KeySelector<IN2, KEY> keySelector2)
{code}

This would be a source-compatible change. It might break the compatibility of 
state backends (it would change the derived key type info). Nevertheless, there 
is a workaround: an overload that takes the key type explicitly:

{code}
public <KEY> ConnectedStreams<IN1, IN2> keyBy(
        KeySelector<IN1, KEY> keySelector1,
        KeySelector<IN2, KEY> keySelector2,
        TypeInformation<KEY> keyType)
{code}
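
With such an overload, the test above could pass by supplying the key type by 
hand (a sketch of the intended usage):

{code}
stream1.connect(stream2).keyBy(v -> v, v -> v, Types.INT);
{code}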



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release 1.11.1, release candidate #1

2020-07-17 Thread Timo Walther
I agree with Dian. We can release a 1.11.2 shortly afterwards. Only 
regressions compared to 1.11.0 should block this vote.


Regards,
Timo

On 17.07.20 10:48, Dian Fu wrote:

> Generally, I tend to continue the vote if this is not a blocking issue for the
> following reasons:
> - As discussed in the discussion thread[1], this release is mainly to fix the
> CDC bug to provide a complete CDC feature.
> - We are still at an early stage after the 1.11.0 release, so I think there
> will always be more issues raised.
>
> However, I'm glad to hear more opinions. We can cancel the current RC if most
> people think so.
>
> Besides, we can still make it into 1.11.1 if we find some other blocking issue.
>
> [1]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Releasing-Flink-1-11-1-soon-td43065.html
>
>> On Jul 17, 2020, at 2:09 PM, Rui Li wrote:
>>
>> Hey,
>>
>> I understand it's a little bit late to bring this up, but we have a Hive
>> dialect bug for which the PR [1] is ready to merge. So I wonder whether we
>> could include it in 1.11.1? The issue is not a blocker, but I believe it's
>> good to have in the bug fix release.
>>
>> [1] https://github.com/apache/flink/pull/12888
>>
>> On Wed, Jul 15, 2020 at 6:02 PM Dian Fu  wrote:
>>
>>> Hi everyone,
>>>
>>> Please review and vote on the release candidate #1 for the version 1.11.1,
>>> as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>>
>>>
>>> The complete staging area is available for your review, which includes:
>>> * JIRA release notes [1],
>>> * the official Apache source release and binary convenience releases to be
>>> deployed to dist.apache.org [2], which are signed with the key with
>>> fingerprint 6B6291A8502BA8F0913AE04DDEB95B05BF075300 [3],
>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>> * source code tag "release-1.11.1-rc1" [5],
>>> * website pull request listing the new release and adding announcement
>>> blog post [6].
>>>
>>> The vote will be open for at least 72 hours. It is adopted by majority
>>> approval, with at least 3 PMC affirmative votes.
>>>
>>> Thanks,
>>> Dian
>>>
>>> [1]
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348323
>>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.11.1-rc1/
>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [4]
>>> https://repository.apache.org/content/repositories/orgapacheflink-1378/
>>> [5]
>>> https://github.com/apache/flink/commit/7eb514a59f6fd117c3535ec4bebc40a375f30b63
>>> [6] https://github.com/apache/flink-web/pull/359
>>
>>
>> --
>> Best regards!
>> Rui Li



[jira] [Created] (FLINK-18630) Improve solution to Long Rides training exercise

2020-07-17 Thread David Anderson (Jira)
David Anderson created FLINK-18630:
--

 Summary: Improve solution to Long Rides training exercise
 Key: FLINK-18630
 URL: https://issues.apache.org/jira/browse/FLINK-18630
 Project: Flink
  Issue Type: Improvement
  Components: Documentation / Training / Exercises
Reporter: David Anderson
Assignee: David Anderson


The current solution to the Long Rides exercise will incorrectly generate an 
alert in the case where the END event arrives more than two hours before the 
corresponding START event.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18631) Serializer for scala sealed trait hierarchies

2020-07-17 Thread Roman Grebennikov (Jira)
Roman Grebennikov created FLINK-18631:
-

 Summary: Serializer for scala sealed trait hierarchies
 Key: FLINK-18631
 URL: https://issues.apache.org/jira/browse/FLINK-18631
 Project: Flink
  Issue Type: Improvement
  Components: API / Type Serialization System
Affects Versions: 1.11.0
Reporter: Roman Grebennikov


Currently, when the Flink serialization system spots an ADT-style class hierarchy 
in Scala code, it falls back to GenericType and Kryo serialization, which 
may introduce performance issues. For example, for this code:

{code:scala}
sealed trait ADT
case class Foo(a: String) extends ADT
case class Bar(b: Int) extends ADT

env.fromCollection(List[ADT](Foo("a"), Bar(1))).collect()
{code}

 

It will fall back to Kryo even though there is no problem with handling 
List[Foo] or List[Bar] separately. ADTs are a convenient way in Scala to model 
different types of messages, but the Flink type system's performance limits 
their use to non-performance-critical paths.

 

It would be nice to have out-of-the-box support for sealed trait hierarchies, 
without the Kryo fallback.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18632) RowData's row kind is not assigned from the input row data when the sink code is generated and the physical type info is a POJO type

2020-07-17 Thread luoziyu (Jira)
luoziyu created FLINK-18632:
---

 Summary: RowData's row kind is not assigned from the input row data 
when the sink code is generated and the physical type info is a POJO type
 Key: FLINK-18632
 URL: https://issues.apache.org/jira/browse/FLINK-18632
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0, 1.10.0
 Environment: mac os 10.15.3
Reporter: luoziyu
 Attachments: bug.jpg

I used a tuple type and a POJO type to test a retract stream with the same test 
data. When I use the

toRetractStream(Table table, Class<T> clazz) API, the retract message becomes an 
insert message during the sink conversion. I found that the SinkCodeGenerator did 
not assign a row kind to the afterIndexModify variable, so the delete message 
becomes an insert message when it enters the processElement function generated 
by SinkCodeGenerator.

Finally, I added a line of code as shown in the attached picture, and it works. 
So is it a bug?

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release 1.11.1, release candidate #1

2020-07-17 Thread Leonard Xu
+1 (non-binding)

- checked/verified signatures and hashes
- built from source code with Scala 2.11 successfully
- checked that there are no missing artifacts
- started a cluster; the Web UI was accessible, a submitted wordcount job ran 
successfully, and there was no suspicious log output
- tested submitting a job via the SQL Client; the query result is as expected
- went through all issues whose fix version is 1.11.1; all are closed except 
FLINK-15794, which has been fixed in master and 1.11.1 and is just waiting to 
be fixed in 1.10.2
- the web PR looks good

For FLINK-18588, I also agree with Timo to move it to 1.11.2 because it's a 
`Major` bug rather than a `Blocker`.

Best,
Leonard

Re: [VOTE] Release 1.11.1, release candidate #1

2020-07-17 Thread Rui Li
OK, I agree FLINK-18588 can wait for the next release.

On Fri, Jul 17, 2020 at 11:56 PM Leonard Xu  wrote:

> +1 (non-binding)
>
> - checked/verified signatures and hashes
> - built from source code with Scala 2.11 successfully
> - checked that there are no missing artifacts
> - started a cluster; the Web UI was accessible, a submitted wordcount job ran
> successfully, and there was no suspicious log output
> - tested submitting a job via the SQL Client; the query result is as expected
> - went through all issues whose fix version is 1.11.1; all are closed except
> FLINK-15794, which has been fixed in master and 1.11.1 and is just waiting to
> be fixed in 1.10.2
> - the web PR looks good
>
> For FLINK-18588, I also agree with Timo to move it to 1.11.2 because it's
> a `Major` bug rather than a `Blocker`.
>
> Best,
> Leonard



-- 
Best regards!
Rui Li


Flink Sinks

2020-07-17 Thread Prasanna kumar
Hi,

I did not find an out-of-the-box Flink sink connector for HTTP or SQS.

Has anyone implemented one?
I also wanted to know: if we write a custom sink function, would it
affect exactly-once semantic guarantees?


Thanks,
Prasanna


thrift support

2020-07-17 Thread Chen Qin
Hi there,

Here in Pinterest, we utilize thrift end to end in our tech stack. As we
have been building Flink as a service platform, the team spent time working
on supporting Flink jobs with thrift format and successfully launched a
good number of important jobs in Production in H1.

In H2, we are looking at supporting Flink SQL with native Thrift support.
We have some prototypes already running in development settings and plan to
move forward on this approach.

In the long run, we think out-of-the-box Thrift format support would benefit
other folks as well. So the question is whether there is already some effort
in this space that we can sync with?

Chen
Pinterest Data


[jira] [Created] (FLINK-18633) Downloading miniconda is unstable

2020-07-17 Thread Dian Fu (Jira)
Dian Fu created FLINK-18633:
---

 Summary: Downloading miniconda is unstable
 Key: FLINK-18633
 URL: https://issues.apache.org/jira/browse/FLINK-18633
 Project: Flink
  Issue Type: Test
  Components: API / Python
Reporter: Dian Fu
 Fix For: 1.12.0, 1.11.2


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4606&view=logs&j=bdd9ea51-4de2-506a-d4d9-f3930e4d2355&t=17a7e096-e650-5b91-858e-3d426f9eeb2f]

{code}
RUNNING './flink-python/dev/lint-python.sh'. 
installing environment 
installing wget... 
install wget... [SUCCESS] 
installing miniconda... 
download miniconda... 
Dowload failed.You can try again
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18634) FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout expired after 60000milliseconds while awaiting InitProducerId"

2020-07-17 Thread Dian Fu (Jira)
Dian Fu created FLINK-18634:
---

 Summary: FlinkKafkaProducerITCase.testRecoverCommittedTransaction 
failed with "Timeout expired after 60000milliseconds while awaiting 
InitProducerId"
 Key: FLINK-18634
 URL: https://issues.apache.org/jira/browse/FLINK-18634
 Project: Flink
  Issue Type: Test
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4590&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20

{code}
2020-07-17T11:43:47.9693015Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 269.399 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
2020-07-17T11:43:47.9693862Z [ERROR] 
testRecoverCommittedTransaction(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
  Time elapsed: 60.679 s  <<< ERROR!
2020-07-17T11:43:47.9694737Z org.apache.kafka.common.errors.TimeoutException: 
org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
60000milliseconds while awaiting InitProducerId
2020-07-17T11:43:47.9695376Z Caused by: 
org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
60000milliseconds while awaiting InitProducerId
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)