Hi Fuyao,
Yes, I agree. The code evolved quicker than the docs. I will create an
issue for this.
Regards,
Timo
On 25.11.20 19:27, Fuyao Li wrote:
Hi Timo,
Thanks for your information. I saw that Flink SQL can actually do the full
outer join in the test code with interval join semantics. However, this is
not explicitly shown in the Flink SQL documentation. That made me think
this might not be available for me to use. Maybe the doc could be updated
to make this explicit.
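For reference, the kind of query I have in mind looks roughly like the sketch
below (the tables "Orders" and "Shipments" and their columns are made-up
placeholders, and I assume they are already registered with event-time
attributes; this is only an illustration, not my actual job):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class IntervalFullOuterJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // "Orders" and "Shipments" are assumed to be registered elsewhere,
        // e.g. via executeSql("CREATE TABLE ...") with WATERMARK definitions.
        Table result = tEnv.sqlQuery(
                "SELECT o.id, o.rowtime AS order_time, s.rowtime AS ship_time " +
                "FROM Orders o FULL OUTER JOIN Shipments s " +
                "ON o.id = s.order_id " +
                "AND o.rowtime BETWEEN s.rowtime - INTERVAL '1' HOUR " +
                "                  AND s.rowtime + INTERVAL '1' HOUR");

        result.execute().print();
    }
}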
Hi Fuyao,
great that you could make progress.
2. Btw nice summary of the idleness concept. We should have that in the
docs actually.
4. By looking at tests like `IntervalJoinITCase` [1] it seems that we
also support FULL OUTER JOINs as interval joins. Maybe you can make use
of them.
5. "b
AFAIK, FLINK-10886 is not implemented yet.
cc @Becket, who may know more about the plans for this feature.
Best,
Jark
On Sat, 21 Nov 2020 at 03:46, wrote:
Hi Timo,
One more question, the blog also mentioned a jira task to solve this
issue. https://issues.apache.org/jira/browse/FLINK-10886. Will this
feature be available in 1.12? Thanks!
Best,
Fuyao
On 11/20/20 11:37, fuyao...@oracle.com wrote:
Hi Timo,
Thanks for your reply! I think your suggestions are really helpful! The
good news is that I managed to figure out some of it by myself a few
days ago.
1. Thanks for the update about the table parallelism issue!
2. After trying out the idleness setting, I found that it prevents the idle
subtasks from holding back the watermark.
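For reference, the way I wired the idleness setting looks roughly like this
sketch (MyEvent, the `events` stream, and the timestamp accessor are
placeholders on my side, not the real classes):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;

// Bounded-out-of-orderness watermarks; subtasks that see no records for
// 30 seconds are marked idle so they stop holding back the watermark.
DataStream<MyEvent> withWatermarks = events.assignTimestampsAndWatermarks(
        WatermarkStrategy.<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, ts) -> event.getEventTimeMillis())
                .withIdleness(Duration.ofSeconds(30)));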
Hi Fuyao,
sorry for not replying earlier.
You posted a lot of questions. I scanned the thread quickly; let me try to
answer some of them, and feel free to ask further questions afterwards.
"is it possible to configure the parallelism for Table operation at
operator level"
No, this is not possible.
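The closest knob I know of is the job-wide default parallelism for Table/SQL
operators; a rough sketch (the value 4 is just an example, not a
recommendation):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
// Applies to all Table/SQL operators of the job, not to individual operators.
tEnv.getConfig().getConfiguration()
        .setInteger("table.exec.resource.default-parallelism", 4);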
Hi Matthias,
Just to provide more context on this problem: I only have one partition per
Kafka topic at the beginning, before the join operation. After reading the
doc:
https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwaterm
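My understanding of the per-partition watermark setup described there is
roughly the sketch below (topic name, schema, properties, and `env` are
placeholders, not my actual job):

import java.time.Duration;
import java.util.Properties;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");

FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
// Assigning the watermark strategy on the consumer itself makes Flink track
// watermarks per Kafka partition inside the source.
consumer.assignTimestampsAndWatermarks(
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)));

// env is an existing StreamExecutionEnvironment in the real job.
DataStream<String> stream = env.addSource(consumer);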
Hi Matthias,
One more question regarding Flink Table parallelism: is it possible to
configure the parallelism for a Table operation at the operator level? It
seems we don't have such an API available, right? Thanks!
Best,
Fuyao
On Fri, Nov 13, 2020 at 11:48 AM Fuyao Li wrote:
Hi Matthias,
Thanks for your information. I have managed to figure out the first issue
you mentioned. Regarding the second issue, I have made some progress on it.
I have sent another email with the title 'BoundedOutOfOrderness Watermark
Generator is NOT making the event time to advance' using anot
Hi Fuyao,
for your first question about the different behavior depending on whether
you chain the methods or not: Keep in mind that you have to save the return
value of the assignTimestampsAndWatermarks method call if you don't chain
the methods together, as is also shown in [1].
At least the fol
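A minimal sketch of the difference (all names here are placeholders, not your
actual code):

import org.apache.flink.streaming.api.datastream.DataStream;

// Correct: keep the stream returned by assignTimestampsAndWatermarks and
// build the rest of the pipeline on it.
DataStream<MyEvent> withTimestamps =
        source.assignTimestampsAndWatermarks(myWatermarkStrategy);
withTimestamps.keyBy(MyEvent::getKey).process(new MyKeyedProcessFunction());

// Not correct: the return value is dropped, so operators built on "source"
// never see the assigned timestamps and watermarks.
source.assignTimestampsAndWatermarks(myWatermarkStrategy);
source.keyBy(MyEvent::getKey).process(new MyKeyedProcessFunction());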
Hi Flink Users and Community,
For the first part of the question, the 12-hour time difference was caused
by a time extraction bug on my side. I can get the time translated correctly
now. The type cast problem does have some workarounds to solve it.
My major blocker right now is that the onTimer part is not being triggered.
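The relevant part of my function looks roughly like the sketch below (MyEvent
and MyResult are placeholder types, not the real classes); as far as I
understand, onTimer only fires once the event-time watermark passes the
registered timestamp:

import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class TimeoutFunction extends KeyedProcessFunction<String, MyEvent, MyResult> {

    @Override
    public void processElement(MyEvent event, Context ctx, Collector<MyResult> out) {
        // Register an event-time timer 10 seconds after this element's
        // timestamp (assumes the element actually carries a timestamp).
        ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 10_000L);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<MyResult> out) {
        // Fires only after the watermark passes "timestamp"; emit a
        // placeholder result for the current key.
        out.collect(new MyResult(ctx.getCurrentKey(), timestamp));
    }
}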
Hi Flink Community,
I am doing some research work on the Flink DataStream and Table API and I
have run into two major problems. I am using Flink 1.11.2, Scala 2.11, and
Java 8. My use case looks like this: I plan to write a data processing
pipeline with two stages. My goal is to construct a business object