Hi, everyone
Thanks Fabian and Kurt for making the multiple version (event time) support
clear. I also like the 'PERIOD FOR SYSTEM_TIME' syntax, which is supported in
the SQL standard. I think we can add some explanation of the multiple version
support in the future section of the FLIP.
For the PRIMARY KEY semantics, I agree
Roc Marshal created FLINK-18423:
---
Summary: Fix Prefer tag in document "Detecting Patterns" page of
"Streaming Concepts"
Key: FLINK-18423
URL: https://issues.apache.org/jira/browse/FLINK-18423
Project: Flink
Hi @klion26, I have made some changes based on your suggestions, which were
very helpful for improving the page. Could you check it again when you have
some free time? Thank you so much.
On 2020-06-23 14:10:48, "Congxian Qiu" wrote:
>Hi Roc
>
>Thanks for your contribution. I've reviewed it and gave some c
RocMarshal created FLINK-18422:
--
Summary: Update Prefer tag in documentation 'Fault Tolerance
training lesson'
Key: FLINK-18422
URL: https://issues.apache.org/jira/browse/FLINK-18422
Project: Flink
Hi Thomas,
Thanks for the valuable feedback and suggestions; I think they will help us
improve.
I can give a direct answer for this issue:
> checkpoint alignment buffered metric missing - note that this job isn't using
> the new unaligned checkpointing that should be op
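As background, here is a minimal sketch, in Java, of how the unaligned
checkpoints mentioned in the quote are opted into on 1.11. The checkpoint
interval is illustrative, and the feature requires the default exactly-once
checkpointing mode:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnalignedCheckpointSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpointing must be enabled first; the 60s interval is only an example.
        env.enableCheckpointing(60_000);

        // Opt in to unaligned checkpoints (new in 1.11): barriers overtake
        // in-flight records, which are persisted as part of the checkpoint
        // instead of being buffered for alignment.
        env.getCheckpointConfig().enableUnalignedCheckpoints();
    }
}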
Dian Fu created FLINK-18421:
---
Summary: Elasticsearch (v6.3.1) sink end-to-end test instable
Key: FLINK-18421
URL: https://issues.apache.org/jira/browse/FLINK-18421
Project: Flink
Issue Type: Bug
Dian Fu created FLINK-18420:
---
Summary: SQLClientHBaseITCase.testHBase failed with
"ArgumentError: wrong number of arguments (0 for 1)"
Key: FLINK-18420
URL: https://issues.apache.org/jira/browse/FLINK-18420
Hi,
Thanks for putting together the RC!
I have some preliminary feedback from testing with commit
934f91ead00fd658333f65ffa37ab60bd5ffd99b.
An internal benchmark application that reads from Kinesis and checkpoints
~12GB performs comparably to 1.10.1.
There were a few issues hit when upgrading our code
Thanks Kurt,
Yes, you are right.
The `PERIOD FOR SYSTEM_TIME` clause that you linked before corresponds to the
VERSION clause that I used and would explicitly define the versioning of a
table.
I didn't know that the `PERIOD FOR SYSTEM_TIME` clause is already defined by
the SQL standard.
I think we would n
Hi Aljoscha,
Thank you for bringing this up. IMHO the situation is different for minor &
patch version upgrades.
1) I don't think we need to provide any guarantees across Flink minor
versions (e.g. 1.10.1 -> 1.11.0). It seems reasonable to expect users to
recompile their user JARs when upgrading
Hi Fabian,
I agree with you that implicitly letting event time be the version of the
table will work in most cases, but not for all. That's the reason I mentioned
the `PERIOD FOR` [1] syntax in my first email, which is already in the SQL
standard to represent the validity of each row in the table.
If
Dawid Wysakowicz created FLINK-18419:
Summary: Can not create a catalog
Key: FLINK-18419
URL: https://issues.apache.org/jira/browse/FLINK-18419
Project: Flink
Issue Type: Bug
C
appleyuchi created FLINK-18418:
--
Summary: document example error
Key: FLINK-18418
URL: https://issues.apache.org/jira/browse/FLINK-18418
Project: Flink
Issue Type: Bug
Reporter: appleyuchi
Hi,
this has come up a few times now and I think we need to discuss the
guarantees that we want to officially give for this. What I mean by
cross-version compatibility is using, say, a Flink 1.10 Kafka connector
dependency/jar with Flink 1.11, or a Flink 1.10.0 connector with Flink
1.10.1. In
Timo Walther created FLINK-18417:
Summary: Support List as a conversion class for ARRAY
Key: FLINK-18417
URL: https://issues.apache.org/jira/browse/FLINK-18417
Project: Flink
Issue Type: Sub-task
Hi everyone,
Every table with a primary key and an event-time attribute provides what is
needed for an event-time temporal table join.
I agree that, from a technical point of view, the TEMPORAL keyword is not
required.
I'm more sceptical about implicitly deriving the versioning information of
a (
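To make the point above concrete, here is a minimal sketch of an event-time
temporal table join with the 1.11 Java Table API: the event-time attribute
acts as the version and the primary key identifies the versioned row. All
table and column names (RatesHistory, Orders, r_rowtime, r_currency, ...) are
illustrative and assumed to be registered elsewhere:

import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.functions.TemporalTableFunction;

public class EventTimeTemporalJoinSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // RatesHistory: append-only table with event-time attribute r_rowtime
        // and (logical) primary key r_currency, registered elsewhere.
        Table ratesHistory = tEnv.from("RatesHistory");

        // Event time = version, primary key = identity of the versioned row.
        TemporalTableFunction rates =
                ratesHistory.createTemporalTableFunction($("r_rowtime"), $("r_currency"));
        tEnv.registerFunction("Rates", rates);

        // Each order is joined with the rate version that was valid at the
        // order's own row time.
        Table result = tEnv.sqlQuery(
                "SELECT o.amount * r.rate " +
                "FROM Orders AS o, LATERAL TABLE (Rates(o.o_rowtime)) AS r " +
                "WHERE o.currency = r.r_currency");
    }
}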
Hi Cranmer,
I'm Roland Wang. I've read the FLIP you wrote and agree with your design.
Recently, I have been working on this feature too and have made some progress:
1. I added two methods, getOrRegisterConsumer and subscribeToShard, to
KinesisProxyInterface (a rough sketch of these follows below).
2. I re-implemented the KinesisProx
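A rough sketch of the two methods mentioned in point 1. Only the method names
come from the mail above; the interface name, parameters, and return types
below are illustrative assumptions, not the actual FLIP design:

import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of an enhanced-fan-out extension to the Kinesis proxy.
public interface EnhancedFanOutProxySketch {

    // Register a stream consumer for the given stream if it does not exist
    // yet, and return its consumer ARN.
    String getOrRegisterConsumer(String streamName, String consumerName) throws InterruptedException;

    // Open an enhanced fan-out subscription to a single shard, starting from
    // the given position (e.g. a sequence number or TRIM_HORIZON).
    CompletableFuture<Void> subscribeToShard(String consumerArn, String shardId, String startingPosition);
}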
Jark Wu created FLINK-18416:
---
Summary: Deprecate TableEnvironment#connect API
Key: FLINK-18416
URL: https://issues.apache.org/jira/browse/FLINK-18416
Project: Flink
Issue Type: Task
Re
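For context, the usual replacement for TableEnvironment#connect is SQL DDL
executed via executeSql. A minimal sketch follows; the table name, schema, and
connector choice are placeholders:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DdlInsteadOfConnectSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // Instead of tEnv.connect(...), declare the table with DDL.
        tEnv.executeSql(
                "CREATE TABLE my_source (" +
                "  id BIGINT," +
                "  name STRING" +
                ") WITH (" +
                "  'connector' = 'datagen'" +  // placeholder connector
                ")");
    }
}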
Hi Fabian,
I do not think that issue blocks the current testing, since the RC2 code does
not contain that compile issue.
You can check out the RC2 tag [1] for compiling if needed, and we might
prepare the next formal votable RC3 soon.
[1] https://dist.apache.org/repos/dist/d
I tested RC2 successfully on EMR with 160 cores across 5 nodes using the
performance benchmark [1] with the S3 file backend.
I used different backpressure settings to compare aligned and unaligned
checkpoints on 1.11-rc2 and aligned checkpoints of 1.10.1. I saw no errors
and no regressions (rather we
Hi,
Thanks again for uploading the missing artifacts. Unfortunately, this RC does
not fully compile due to [1].
Would it be possible, for testing purposes, to quickly include this fix in the
RC, or do you think it is necessary to open a completely new one?
[1] https://issues.apache.org/jira/brows