Hi Till,
It actually seems to be two issues: one is a bug and one would be an
improvement (it never worked before). I'll split the tickets later. I'll
raise the bug one to Critical since I want to fix it for 1.14, but since it
was already broken in 1.13 as well, it shouldn't be a blocker.
Best
Ingo
Does this also affect Flink 1.14.0? If yes, do we want to fix this issue
for the upcoming release? If yes, then please make this issue a blocker or
at least critical.
Cheers,
Till
On Mon, Aug 23, 2021 at 8:39 AM Ingo Bürk wrote:
Thanks Timo for the confirmation. I've also raised FLINK-23911[1] for this.
[1] https://issues.apache.org/jira/browse/FLINK-23911
Best
Ingo
On Mon, Aug 23, 2021 at 8:34 AM Timo Walther wrote:
Hi everyone,
This definitely sounds like a bug to me. Computing metadata might be
very expensive, and a connector might expose a long list of metadata
keys. It was therefore intended to project the metadata if possible. I'm
pretty sure that this worked before (at least when implementing
Suppor
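
To illustrate what projecting the metadata means on the connector side, here is a minimal sketch of the SupportsReadingMetadata part (class name and metadata keys are made up for illustration; the remaining ScanTableSource methods are left abstract):

import java.util

import org.apache.flink.table.api.DataTypes
import org.apache.flink.table.connector.source.ScanTableSource
import org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
import org.apache.flink.table.types.DataType

// Illustrative sketch only, not an actual Flink connector.
abstract class MetadataAwareSource extends ScanTableSource with SupportsReadingMetadata {

  // All metadata keys the connector could expose; some of them may be expensive to compute.
  override def listReadableMetadata(): util.Map[String, DataType] = {
    val metadata = new util.LinkedHashMap[String, DataType]()
    metadata.put("timestamp", DataTypes.TIMESTAMP_LTZ(3))
    metadata.put("headers", DataTypes.MAP(DataTypes.STRING(), DataTypes.BYTES()))
    metadata
  }

  // Ideally the planner passes only the keys that the query actually selects,
  // so the connector can skip computing everything else at runtime.
  override def applyReadableMetadata(
      metadataKeys: util.List[String],
      producedDataType: DataType): Unit = {
    // remember metadataKeys and only materialize those columns
  }
}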
Hi Jingsong,
Thanks for your answer. Even if the source implements
SupportsProjectionPushDown, #applyProjection will never be called with
projections for metadata columns. For example, I have the following test:
@Test
def test(): Unit = {
  val tableId = TestValuesTableFactory.registerData(Seq())
  // ... (remainder of the test omitted)
}
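
To make the scenario concrete, it boils down to something like the following sketch (table name, options, and types are illustrative and assume the values connector's data-id, bounded, and readable-metadata options as used in the planner tests, not the exact setup of the test above):

import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.planner.factories.TestValuesTableFactory

object MetadataProjectionScenario {

  // A table with one physical and one metadata column, queried without the
  // metadata column. The expectation is that the metadata key gets projected
  // away, i.e. applyReadableMetadata receives an empty key list.
  def run(tEnv: TableEnvironment): Unit = {
    val dataId = TestValuesTableFactory.registerData(Seq())
    tEnv.executeSql(
      s"""
         |CREATE TABLE t (
         |  id INT,
         |  ts TIMESTAMP_LTZ(3) METADATA
         |) WITH (
         |  'connector' = 'values',
         |  'bounded' = 'true',
         |  'data-id' = '$dataId',
         |  'readable-metadata' = 'ts:TIMESTAMP_LTZ(3)'
         |)
         |""".stripMargin)

    // Only the physical column is selected; explain() is enough to trigger the
    // planner rules without executing a job.
    println(tEnv.sqlQuery("SELECT id FROM t").explain())
  }
}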
Hi,
I remember that the projection only works with SupportsProjectionPushDown.
You can take a look at
`PushProjectIntoTableSourceScanRuleTest.testNestProjectWithMetadata`;
applyReadableMetadata is applied again in the PushProjectIntoTableSourceScanRule.
But there may be a bug in PushProjectIntoTableSourceScanRule.
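
For reference, the interplay that PushProjectIntoTableSourceScanRule should drive can be sketched roughly like this (class and field names are made up; only the two ability callbacks are shown, everything else is left abstract):

import java.util

import org.apache.flink.table.connector.source.ScanTableSource
import org.apache.flink.table.connector.source.abilities.{SupportsProjectionPushDown, SupportsReadingMetadata}
import org.apache.flink.table.types.DataType

// Illustrative sketch: a source that records what the planner pushes down, so one
// can check whether the metadata keys it receives were actually projected.
abstract class RecordingSource
    extends ScanTableSource
    with SupportsProjectionPushDown
    with SupportsReadingMetadata {

  var projectedFields: Array[Array[Int]] = Array.empty
  var requestedMetadataKeys: util.List[String] = new util.ArrayList[String]()

  override def supportsNestedProjection(): Boolean = false

  // Called with the physical columns that the query actually uses.
  override def applyProjection(projectedFields: Array[Array[Int]]): Unit = {
    this.projectedFields = projectedFields
  }

  // Applied again by PushProjectIntoTableSourceScanRule; the expectation in this
  // thread is that only the metadata keys used by the query show up here.
  override def applyReadableMetadata(
      metadataKeys: util.List[String],
      producedDataType: DataType): Unit = {
    this.requestedMetadataKeys = metadataKeys
  }
}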