Hi Feng,
Thanks for your good question. It would be very attractive if we could run
the original UDTF asynchronously without introducing new UDTFs.
But I think it's not easy, because the original UDTFs are executed with one
instance per parallelism, so users face no thread-safety problems (see the
sketch below). But for
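For illustration, here is a minimal sketch of that per-instance model (the
function itself is made up, not from the FLIP): because each parallel subtask
owns its own instance, per-call mutable state needs no synchronization, which
is exactly the assumption an asynchronous mode would break.

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

// Each parallel subtask gets its own instance of the function, so this
// mutable field is only ever touched by a single thread today.
@FunctionHint(output = @DataTypeHint("ROW<word STRING>"))
public class SplitFunction extends TableFunction<Row> {
    private long invocations = 0; // safe without synchronization

    public void eval(String line) {
        invocations++;
        for (String word : line.split(" ")) {
            collect(Row.of(word));
        }
    }
}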
ok:
- I start a Flink 1.17.1 cluster, run the job, then run `flink stop` to
generate a savepoint. This savepoint will have Kryo 2.x data from standard
Flink 1.17.1.
- I start a Flink 1.18-SNAPSHOT cluster with the pull request, run the job
resuming from the Flink 1.17 savepoint, then I
Hi Aitozi,
Thank you for your proposal.
In our production environment, we often encounter efficiency issues with
user-defined functions (UDFs), which can slow down processing.
I believe this FLIP will make it easier to execute UDFs efficiently.
I have a small quest
As you can see, you must use `UNIX_TIMESTAMP` to do this work, and that's
where the time zone comes into play.
What I'm talking about is casting timestamp/timestamp_ltz to long directly;
that's why the semantics are tricky when you cast timestamp to long using a
time zone (see the sketch below).
For other systems, such as S
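To make the session-time-zone dependence concrete, here is a small sketch
(the demo class is mine, not from the thread): the same string yields
different epoch values under two session time zones, which is why a direct
timestamp-to-long cast has no single obvious semantic.

import java.time.ZoneId;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TimeZoneCastDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // UNIX_TIMESTAMP(<string>) interprets the string in the session
        // time zone, so the session time zone decides the epoch value.
        tEnv.getConfig().setLocalTimeZone(ZoneId.of("UTC"));
        tEnv.executeSql("SELECT UNIX_TIMESTAMP('1970-01-01 00:00:00')")
                .print(); // 0

        tEnv.getConfig().setLocalTimeZone(ZoneId.of("Asia/Shanghai"));
        tEnv.executeSql("SELECT UNIX_TIMESTAMP('1970-01-01 00:00:00')")
                .print(); // -28800 (UTC+8)
    }
}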
Hi Ron,
Thanks for your reply!
After our offline discussion: at present, there may be many Flink jobs using
the non-atomic CTAS function, especially streaming jobs.
If we only infer whether atomic CTAS is supported based on whether the
DynamicTableSink implements the SupportsStaging interface (see the sketch
below), then aft
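For readers following along, a self-contained sketch of the inference problem:
the planner can only tell whether a sink supports atomic CTAS by checking for
the ability interface. The interfaces below are simplified and hypothetical;
the real API proposed in the FLIP may differ.

// Sketch only: simplified, hypothetical shapes of the staging contract.
interface StagedTable {
    void begin();   // called before the CTAS job starts
    void commit();  // called on success: make the table visible
    void abort();   // called on failure: clean up staged data
}

interface SupportsStaging {
    StagedTable applyStaging();
}

// A sink advertises atomic CTAS solely by implementing the ability,
// which is exactly what the inference concern above is about.
class AtomicCtasSink implements SupportsStaging {
    @Override
    public StagedTable applyStaging() {
        return new StagedTable() {
            @Override public void begin() { /* create a staging dir */ }
            @Override public void commit() { /* atomically publish it */ }
            @Override public void abort() { /* delete the staging dir */ }
        };
    }
}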
Hi Jing
Thanks for your good questions. I have added the example to the FLIP.
> Only one row for each lookup
A lookup can also return multiple rows, depending on the query result (see
the sketch below). [1]
[1]:
https://github.com/apache/flink/blob/191ec6ca3943d7119f14837efe112e074d815c47/flink-table/flink-table-common/sr
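Here is a minimal sketch of a multi-row lookup in the spirit of the linked
contract (the class name and rows are made up; I'm assuming the LookupFunction
base class from flink-table-common):

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.functions.LookupFunction;

// A lookup is not limited to one row: for a single key it may return
// every matching row from the external system.
public class OrdersLookupFunction extends LookupFunction {

    @Override
    public Collection<RowData> lookup(RowData keyRow) throws IOException {
        List<RowData> matches = new ArrayList<>();
        // Hypothetical: the external store returns two rows for this key.
        matches.add(GenericRowData.of(StringData.fromString("order-1")));
        matches.add(GenericRowData.of(StringData.fromString("order-2")));
        return matches;
    }
}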
Rui Fan created FLINK-32294:
----------------------------------
Summary: The CI fails due to HiveITCase
Key: FLINK-32294
URL: https://issues.apache.org/jira/browse/FLINK-32294
Project: Flink
Issue Type: Bug
Components: Co
Hi, Mang
In FLIP-214, we have discussed that atomicity is not needed in streaming
mode, so we have implemented the initial version that doesn't support
atomicity. In addition, we introduced the option
"table.ctas.atomicity-enabled" to enable the atomic ability (see the sketch
below). According to your FLIP-315 descriptio
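For context, a sketch of how that option is switched on from the Table API
(the table names and connector are placeholders, and I'm assuming the
string-based TableConfig.set API):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AtomicCtasExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Opt in to atomic CTAS; without the flag, existing jobs keep the
        // non-atomic behavior.
        tEnv.getConfig().set("table.ctas.atomicity-enabled", "true");

        // Placeholder CTAS statement; source_table must already exist.
        tEnv.executeSql(
                "CREATE TABLE t_copy WITH ('connector' = 'print') "
                        + "AS SELECT * FROM source_table");
    }
}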
Zhipeng Zhang created FLINK-32293:
----------------------------------
Summary: Support vector with long index
Key: FLINK-32293
URL: https://issues.apache.org/jira/browse/FLINK-32293
Project: Flink
Issue Type: New Feature
Zhipeng Zhang created FLINK-32292:
----------------------------------
Summary: TableUtils.getRowTypeInfo fails to get type information
of Tuple
Key: FLINK-32292
URL: https://issues.apache.org/jira/browse/FLINK-32292
Project: Flink
Thanks Jing, it makes sense to me and I have updated the FLIP.
Best,
Shammon FY
On Thu, Jun 8, 2023 at 11:15 PM Jing Ge wrote:
> Hi Shammon,
>
> If we take a look at the JDK Event design as a reference, we can even add
> an Object into the event [1]. Back to the CatalogModificationEvent,
> eve
Hi Aitozi,
Thanks for the clarification. The code example looks interesting. I would
suggest adding them into the FLIP. The description with code examples will
help readers understand the motivation and how to use it. As far as I'm
concerned, it is a valid feature for Flink users.
As we know, lookup join is based
Dear Flink developers & users,
We hope this email finds you well. We are excited to announce the Call for
Presentations for the upcoming Flink Forward Seattle 2023, the premier
event dedicated to Apache Flink and stream processing technologies. We
invite you, as a prominent figure in the field, to
Chesnay Schepler created FLINK-32291:
Summary: Hive E2E test fails consistently
Key: FLINK-32291
URL: https://issues.apache.org/jira/browse/FLINK-32291
Project: Flink
Issue Type: Technica
Thanks Martijn.
Personally, I'm already using a local fork of Statefun that is compatible
with Flink 1.16.x, so I wouldn't have any need for a released version
compatible with 1.15.x. I'd be happy to do the PRs to modify Statefun to
work with new versions of Flink as they come along.
As for testi
Hi Ron,
Thanks for sharing the insight. I agree that it is not doable to rewrite the
entire planner module in Java. That was the reason why it has been hidden
instead of replaced. I thought, since the community decided to walk away
from Scala, we should at least not add any more new Scala code. Ac
Hi ShengKai,
Good point with the ANALYZE TABLE and CALL PROCEDURE statements.
> Can we remove the jars if the job is running or gateway exits?
Yes, I think it would be okay to remove the resources after the job is
submitted.
It should be the Gateway’s responsibility to remove them.
> Can we use t
Hi Shammon,
If we take a look at the JDK Event design as a reference, we can even add
an Object into the event [1]. Back to the CatalogModificationEvent,
everything related to the event could be defined in the Event. If we want
to group some information into the Context, we could also consider add
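A tiny sketch of the JDK pattern being referenced: java.util.EventObject
carries an arbitrary source Object, and a subclass adds whatever payload it
needs. The fields below are illustrative only, not the proposed
CatalogModificationEvent API.

import java.util.EventObject;

// Illustrative subclass following the JDK event shape.
public class CatalogModificationEventSketch extends EventObject {
    private final String catalogName;

    public CatalogModificationEventSketch(Object source, String catalogName) {
        super(source); // the JDK event stores an arbitrary Object
        this.catalogName = catalogName;
    }

    public String catalogName() {
        return catalogName;
    }
}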
On 08/06/2023 16:06, Kurt Ostfeld wrote:
> If I understand correctly, the scenario is resuming from multiple
> checkpoint files or from a savepoint and checkpoint files which may be
> generated by different versions of Flink.
No; it's the same version of Flink, you just didn't do a full migration.
If the Flink project is planning to completely drop all stateful upgrade
compatibility within the next year for a Flink 2.0 release, then providing a
stateful migration pathway from Kryo 2.x to Kryo 5.x is probably unnecessary.
Is that correct? Is the Flink project pretty confident that Flink 2.
Thank you very much for the feedback.
- With this pull-request build, Flink runs successfully on a JDK 17 runtime
for applications without saved state, or for applications whose saved state
comes from this pull-request build, which uses Kryo 5.x. FYI, the Maven
build is still run with JDK 8 or 11
Chesnay Schepler created FLINK-32290:
Summary: Enable -XX:+IgnoreUnrecognizedVMOptions
Key: FLINK-32290
URL: https://issues.apache.org/jira/browse/FLINK-32290
Project: Flink
Issue Type: S
Hi all,
Apologies for the late reply.
I'm willing to help out with merging requests in Statefun to keep them
compatible with new Flink releases and create new releases. I do think that
validation of the functionality of these releases depends a lot on those
who do these compatibility updates, wit
Hi Weihua,
Thanks a lot for your input!
I see the difference here is between implementing the file distribution
mechanism in the generic CLI or in the SQL Driver. The CLI approach could
benefit non-pure-SQL applications (which are not covered by the SQL Driver)
as well.
Not sure if you’re proposing the CliFr
Leonard Xu created FLINK-32289:
----------------------------------
Summary: The metadata column type is incorrect in Kafka table
connector example
Key: FLINK-32289
URL: https://issues.apache.org/jira/browse/FLINK-32289
Project: Flink
xingbe created FLINK-32288:
----------------------------------
Summary: Improve the scheduling performance of
AdaptiveBatchScheduler
Key: FLINK-32288
URL: https://issues.apache.org/jira/browse/FLINK-32288
Project: Flink
Issue Type:
Thank you for the proposal, yuxia! The FLIP looks good to me.
Best,
Jark
> On Jun 8, 2023, at 11:39, yuxia wrote:
>
> Hi, all.
> Thanks everyone for the valuable input. If there are no further concerns
> about this FLIP[1], I would like to start voting next Monday (6/12).
>
> [1]
> https://cwiki.apa
Thank you for the great work, Mang! The updated proposal looks good to me.
Best,
Jark
> On Jun 8, 2023, at 11:49, Jingsong Li wrote:
>
> Thanks Mang for updating!
>
> Looks good to me!
>
> Best,
> Jingsong
>
> On Wed, Jun 7, 2023 at 2:31 PM Mang Zhang wrote:
>>
>> Hi Jingsong,
>>
>>> I have some doub
luoyuxia created FLINK-32287:
Summary: Add doc for truncate table statement
Key: FLINK-32287
URL: https://issues.apache.org/jira/browse/FLINK-32287
Project: Flink
Issue Type: Sub-task
C
luoyuxia created FLINK-32286:
Summary: Align the shade pattern that the Hive connector uses for
Calcite-related classes with flink-table-planner
Key: FLINK-32286
URL: https://issues.apache.org/jira/browse/FLINK-32286