Zhijiang created FLINK-16768:
Summary:
HadoopS3RecoverableWriterITCase.testRecoverWithStateWithMultiPart runs without
exit
Key: FLINK-16768
URL: https://issues.apache.org/jira/browse/FLINK-16768
Project:
Some thoughts:
- by virtue of maintaining the past 2 releases we will have to maintain
any Travis infrastructure as long as 1.10 is supported, i.e., until 1.12
- the Azure setup doesn't appear to be equivalent yet, since the Java e2e
profile isn't setting the Hadoop switch (-Pe2e-hadoop), as a re
Hi Kurt,
I do not object to promoting the concepts of SQL, but I don't think we should
do that by introducing a new dedicated set of public connector interfaces
that is only for SQL. The same argument could be applied to Gelly, CEP, and
Machine Learning, claiming that they each need to introduce a dedicated
Hi Robert,
Thanks a lot for your great work!
Overall I'm +1 to switching to Azure as the primary CI tool if it's stable enough,
as I think there is no need to run both Travis and Azure for one single PR.
However, there are still some improvements that need to be made, and it would be
great if these issue
Dawid Wysakowicz created FLINK-16769:
Summary: Support new type inference for Table#flatMap
Key: FLINK-16769
URL: https://issues.apache.org/jira/browse/FLINK-16769
Project: Flink
Issue Ty
Zhijiang created FLINK-16770:
Summary: Resuming Externalized Checkpoint (rocks, incremental,
scale up) end-to-end test fails with no such file
Key: FLINK-16770
URL: https://issues.apache.org/jira/browse/FLINK-16770
@Dian we haven't been rebasing PRs against master for months, ever
since we switched to CiBot.
On 25/03/2020 09:29, Dian Fu wrote:
Hi Robert,
Thanks a lot for your great work!
Overall I'm +1 to switch to Azure as the primary CI tool if it's stable enough
as I think there is no need to run b
Hey, thanks for the answer.
But if I add the *AfterMatchSkipStrategy* it simply seems to emit event by
event, so in the case described above it emits: [400], [500].
Shouldn't the *greedy* quantifier guarantee that this will be matched as
many times as possible, thus creating [400, 500]?
Thanks
P.S.
So now my pattern looks like this:
Pattern.begin[AccelVector](EventPatternName,
    AfterMatchSkipStrategy.skipPastLastEvent())
  .where(_.data() > Threshold)
  .oneOrMore
  .greedy
  .consecutive()
  .within(Time.minutes(1))
Wed, 25 Mar 2020 at 10:03, Dominik Wosiński wrote:
> Hey, thanks
Thanks for the information. I'm sorry that I wasn't aware of this before. I
have checked the Travis build log and confirmed that this is true.
@Chesnay Are there any specific reasons for this and is it possible to add this
back for Azure Pipelines?
Thanks,
Dian
> On 25 Mar 2020, at 4:43 PM, Chesn
It was left out since it adds significant additional complexity and the
value is dubious at best for PRs that aren't merged shortly after the
build has finished.
On 25/03/2020 10:28, Dian Fu wrote:
Thanks for the information. I'm sorry that I wasn't aware of this before. I
have checked the
Rui Li created FLINK-16771:
--
Summary: NPE when filtering by decimal column
Key: FLINK-16771
URL: https://issues.apache.org/jira/browse/FLINK-16771
Project: Flink
Issue Type: Bug
Components
Hi Becket,
Let me clarify a few things first: Historically we thought of Table
API/SQL as a library on top of DataStream API. Similar to Gelly or CEP.
We used TypeInformation in Table API to integrate nicely with DataStream
API. However, the last years have shown that SQL is not just a library
Hi community,
Timo, Fabian and Dawid have some feedback about FLIP-84 [1]. The feedback is
all about the newly introduced methods. We had a discussion yesterday, and
most of the feedback has been agreed upon. Here are the conclusions:
*1. about proposed methods in `TableEnvironment`:*
the original propo
Chesnay Schepler created FLINK-16772:
Summary: Bump derby to 10.12.1.1+ or exclude it
Key: FLINK-16772
URL: https://issues.apache.org/jira/browse/FLINK-16772
Project: Flink
Issue Type: Im
Maximilian Michels created FLINK-16773:
--
Summary: Flink 1.10 test execution is broken due to premature test
cluster shutdown
Key: FLINK-16773
URL: https://issues.apache.org/jira/browse/FLINK-16773
jackylau created FLINK-16774:
Summary: expose HBaseUpsertSinkFunction hTableName and schema for
other system
Key: FLINK-16774
URL: https://issues.apache.org/jira/browse/FLINK-16774
Project: Flink
jackylau created FLINK-16775:
Summary: expose FlinkKafkaConsumer/FlinkKafkaProducer Properties
for other system
Key: FLINK-16775
URL: https://issues.apache.org/jira/browse/FLINK-16775
Project: Flink
Hi Godfrey,
thanks for starting the discussion on the mailing list. And sorry again
for the late reply to FLIP-84. I have updated the Google doc one more
time to incorporate the offline discussions.
From Dawid's and my view, it is fine to postpone the multiline support
to a separate method.
Hi Godfrey,
The changes sounds good to me. +1 to start another voting.
A minor question: does the ResultKind contain an ERROR kind?
Best,
Jark
On Wed, 25 Mar 2020 at 18:51, Timo Walther wrote:
> Hi Godfrey,
>
> thanks for starting the discussion on the mailing list. And sorry again
> for the
Rui Li created FLINK-16776:
--
Summary: Support schema evolution for Hive parquet table
Key: FLINK-16776
URL: https://issues.apache.org/jira/browse/FLINK-16776
Project: Flink
Issue Type: Task
This sounds good to go ahead from my side.
I like the approach that Becket suggested: in that case, the core
abstractions that everyone would need to understand would be "external
resource allocation" and the "ResourceInfoProvider", and the GPU-specific
code would be a specific implementation only
jackylau created FLINK-16777:
Summary: expose Pipeline in JobClient
Key: FLINK-16777
URL: https://issues.apache.org/jira/browse/FLINK-16777
Project: Flink
Issue Type: Improvement
Compon
Hi Jark,
good question. Actually, there was an ERROR kind that could have been
enabled via a config option, such that everything ends up in the
TableResult. But @Kurt had some concerns, which is why we didn't add this
kind of result yet.
Regards,
Timo
On 25.03.20 12:00, Jark Wu wrote:
Hi G
Robert Metzger created FLINK-16778:
--
Summary: the java e2e profile isn't setting the hadoop switch on
Azure
Key: FLINK-16778
URL: https://issues.apache.org/jira/browse/FLINK-16778
Project: Flink
Timo Walther created FLINK-16779:
Summary: Support RAW types through the stack
Key: FLINK-16779
URL: https://issues.apache.org/jira/browse/FLINK-16779
Project: Flink
Issue Type: Sub-task
Thank you for the feedback so far.
Responses to the items Chesnay raised:
- by virtue of maintaining the past 2 releases we will have to maintain any
> Travis infrastructure as long as 1.10 is supported, i.e., until 1.12
>
Okay. I wasn't sure about the exact policy there.
> - the azure setup d
Thank you for your opinions. I updated the FLIP with results of the
discussion. Let me know if you have further concerns.
Best,
Dawid
On 05/03/2020 07:46, Jark Wu wrote:
> Hi Dawid,
>
>> INHERITS creates a new table with a "link" to the original table.
> Yes, INHERITS is a "link" to the original
Thanks for the efforts Robert!
Checking the pipeline failure report [1] the pass rate is relatively low,
and I'm wondering whether we need more efforts to stabilize it before
replacing travis PR runs.
From the report, log uploading fails for 1/5 of the tests, which indicates that
the access from Azure to
+1 (binding)
Best,
Gary
On Wed, Mar 18, 2020 at 3:16 PM Andrey Zagrebin
wrote:
> Hi All,
>
> The discussion for FLIP-116 looks to be resolved [1].
> Therefore, I start the vote for it.
> The vote will end at 6pm CET on Monday, 23 March.
>
> Best,
> Andrey
>
> [1]
>
> http://mail-archives.apache
Thanks everybody for voting.
I also vote
+1 (binding)
Hereby the vote is closed and FLIP-116 is accepted.
3 binding votes:
@Till Rohrmann
@g...@apache.org
@azagre...@apache.org (me)
2 non-binding votes:
@Xintong Song
@Yang Wang
no vetos/-1s
Best,
Andrey
On Wed, Mar 25, 2020 at 6:1
Hi Timo,
Thanks for the update.
Regarding "multiline statement support", I'm also fine with
`TableEnvironment.executeSql()` only supporting single-line statements, and we
can support multiline statements later (this needs more discussion).
Regarding "StatementSet.explain()", I don't ha
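The single-statement semantics being discussed can be sketched as follows. This is only an illustration of the FLIP-84 proposal as described in this thread, not a final API; the connector, table names, and queries are placeholder assumptions.

```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

// Illustration of the proposed FLIP-84 semantics (not the final API):
// executeSql() takes exactly one statement per call, and multiple INSERTs
// are grouped in a StatementSet rather than passed as one multiline string.
// Connector, table names, and queries below are placeholders.
val settings = EnvironmentSettings.newInstance().build()
val tEnv = TableEnvironment.create(settings)

tEnv.executeSql("CREATE TABLE src (id INT) WITH ('connector' = 'datagen')")
val result = tEnv.executeSql("SELECT id FROM src")

// A multiline INSERT script becomes a StatementSet under the proposal:
val stmtSet = tEnv.createStatementSet()
stmtSet.addInsertSql("INSERT INTO snk1 SELECT id FROM src")
stmtSet.addInsertSql("INSERT INTO snk2 SELECT id FROM src")
println(stmtSet.explain())
```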
The easiest way to disable travis for pushes is to remove all builds
from the .travis.yml with a push/pr condition.
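As an illustration of what such an entry looks like (this is not Flink's actual .travis.yml; the stage name and script are placeholders), deleting every job guarded by a push/PR condition disables those builds while leaving cron-conditioned jobs running:

```yaml
# Illustrative excerpt only; stage name and script are placeholders.
jobs:
  include:
    - stage: test
      # This entry runs for pushes and pull requests; removing it (and all
      # entries with such a condition) stops Travis from building them,
      # while entries conditioned on "type = cron" keep running.
      if: type IN (push, pull_request)
      script: ./ci-build.sh
```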
On 25/03/2020 15:03, Robert Metzger wrote:
Thank you for the feedback so far.
Responses to the items Chesnay raised:
- by virtue of maintaining the past 2 releases we will have
Hello Arvid,
Thanks for joining the thread!
First, did you take into consideration that I would like to dynamically add
queries on the same source? That means first defining one query, later in the
day adding another one, then another one, and so on. A week later, kill one of
those, start yet another on
Hi Dominik,
I think you are hitting a bug. The greedy quantifier does not work well
when applied to the last element of a pattern. There is a Jira issue to
improve support for the greedy quantifier [1].
You could work around it by adding an additional state at the end, e.g.:
Pattern.begin[AccelVecto
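Dawid's snippet is cut off by the digest; a sketch of the workaround he describes (appending an extra state so the greedy quantifier is no longer the last element) might look as follows. `AccelVector`, `EventPatternName`, and `Threshold` come from Dominik's earlier message; the terminating condition on the extra state is an assumption for illustration.

```scala
import org.apache.flink.cep.nfa.aftermatch.AfterMatchSkipStrategy
import org.apache.flink.cep.scala.Pattern
import org.apache.flink.streaming.api.windowing.time.Time

// Sketch: append an extra state ("end") after the greedy part so the
// greedy quantifier is not the last element of the pattern. The
// terminating condition (first value at or below the threshold) is an
// assumption for illustration only.
val pattern = Pattern
  .begin[AccelVector](EventPatternName, AfterMatchSkipStrategy.skipPastLastEvent())
  .where(_.data() > Threshold)
  .oneOrMore
  .greedy
  .consecutive()
  .next("end")
  .where(_.data() <= Threshold)
  .within(Time.minutes(1))
```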
Bowen Li created FLINK-16780:
Summary: improve Flink lookup join
Key: FLINK-16780
URL: https://issues.apache.org/jira/browse/FLINK-16780
Project: Flink
Issue Type: New Feature
Componen
Bowen Li created FLINK-16781:
Summary: add built-in cache mechanism for LookupableTableSource in
lookup join
Key: FLINK-16781
URL: https://issues.apache.org/jira/browse/FLINK-16781
Project: Flink
Yun Tang created FLINK-16782:
Summary: Avoid unnecessary check on expired time state when we set
visibility as ReturnExpiredIfNotCleanedUp
Key: FLINK-16782
URL: https://issues.apache.org/jira/browse/FLINK-16782
I saw that requirement, but I'm not sure if you really need to modify the
query at runtime.
Unless you need reprocessing for newly added rules, I'd probably just
cancel with a savepoint and restart the application with the new rules. Of
course, it depends on the rules themselves and how much state th
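The cancel-with-savepoint-and-restart cycle suggested above can be sketched with the Flink 1.10 CLI; the job id, savepoint paths, jar, class, and program arguments below are all placeholders.

```shell
# Take a savepoint and cancel the running job (placeholder id and path):
flink cancel -s s3://bucket/savepoints <jobId>

# Resubmit the same application with the new rules, resuming from the
# savepoint (jar, class, and arguments are placeholders):
flink run -s s3://bucket/savepoints/savepoint-xyz \
  -c com.example.RulesJob rules-job.jar --rules new-rules.json
```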
Thank you for your responses.
@Yu Li: In the current master, the log upload always fails if the e2e job
fails. I just merged a PR that fixes this issue [1]. The problem was not
really network stability, but rather a problem with the interaction of the
jobs in the pipeline (the e2e job did not se
Igal Shilman created FLINK-16783:
Summary: Add Polyglot Docker-compose example
Key: FLINK-16783
URL: https://issues.apache.org/jira/browse/FLINK-16783
Project: Flink
Issue Type: Improvement
Seth Wiesman created FLINK-16784:
Summary: Support KeyedBroadcastProcessFunction state
bootstrapping.
Key: FLINK-16784
URL: https://issues.apache.org/jira/browse/FLINK-16784
Project: Flink
Igal Shilman created FLINK-16785:
Summary: Add a README to the Python SDK
Key: FLINK-16785
URL: https://issues.apache.org/jira/browse/FLINK-16785
Project: Flink
Issue Type: Improvement
Hequn Cheng created FLINK-16786:
---
Summary: Fix pyarrow version incompatible problem
Key: FLINK-16786
URL: https://issues.apache.org/jira/browse/FLINK-16786
Project: Flink
Issue Type: Bug
Jingsong Lee created FLINK-16787:
Summary: Provide an assigner strategy of average splits allocation
Key: FLINK-16787
URL: https://issues.apache.org/jira/browse/FLINK-16787
Project: Flink
Iss
zhisheng created FLINK-16788:
Summary: ElasticSearch Connector SQL DDL add optional config (eg:
enable-auth/username/password)
Key: FLINK-16788
URL: https://issues.apache.org/jira/browse/FLINK-16788
Proje
Hi Timo,
Thanks for the reply. I totally agree that there must be something new
added to the connectors in order to make them work for SQL / Table. My concern
is mostly about what they should be, and how to add them. To be honest, I
was kind of lost when looking at interfaces such as
DataStructure
Thanks to everyone who engaged in this discussion!
Our goal is "Supports Dynamic Table Options for Flink SQL". After an
offline discussion with Kurt, Timo and Dawid, we have reached a final
conclusion; here is the summary:
- Use comment style syntax to specify the dynamic table options: "/*+
Hi all,
After some great discussion, I think we have at least reached a consensus
that we can have a unified sink to handle streaming, batch, Hive, and HDFS.
And for the FileSystem connector, we will undoubtedly reuse the DataStream
StreamingFileSink.
I updated the FLIP:
1. Move external HDFS and close exactl
Rong Rong created FLINK-16789:
-
Summary: Support JMX RMI via JMXConnectorServer
Key: FLINK-16789
URL: https://issues.apache.org/jira/browse/FLINK-16789
Project: Flink
Issue Type: New Feature
yuwenbing created FLINK-16790:
-
Summary: enables the interpretation of backslash escapes
Key: FLINK-16790
URL: https://issues.apache.org/jira/browse/FLINK-16790
Project: Flink
Issue Type: Improve
yanxiaobin created FLINK-16791:
--
Summary: Could not deploy Yarn job cluster when using
flink-s3-fs-hadoop-1.10.0.jar
Key: FLINK-16791
URL: https://issues.apache.org/jira/browse/FLINK-16791
Project: Flin
yanxiaobin created FLINK-16792:
--
Summary: flink-s3-fs-hadoop cannot access s3
Key: FLINK-16792
URL: https://issues.apache.org/jira/browse/FLINK-16792
Project: Flink
Issue Type: Bug
Com
jinhai created FLINK-16793:
--
Summary: Add jobName to log4j ConversionPattern
Key: FLINK-16793
URL: https://issues.apache.org/jira/browse/FLINK-16793
Project: Flink
Issue Type: Improvement