Yes, as mentioned in the first email, what I want is something like Spark's
transform.
But I found that lambda functions are not supported in Calcite
(https://issues.apache.org/jira/browse/CALCITE-3679), so it may be hard to do
the same thing in Flink.
I'll write a customized UDF for my requirement.
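Since Flink SQL cannot take a lambda argument the way Spark's `transform(arr, x -> x + 1)` can, the per-element logic has to live inside the UDF itself. Below is a minimal sketch of that core transform logic in plain Java; the Flink `ScalarFunction` registration and the class/method names here are illustrative assumptions, not the author's actual code:

```java
import java.util.Arrays;
import java.util.function.LongUnaryOperator;

// Sketch: the per-element array transform one would wrap in a Flink UDF.
// Spark SQL can express this inline as transform(arr, x -> x + 1); in
// Flink SQL the function body has to be baked into the UDF instead.
public class ArrayTransform {

    // Apply fn to every element of the input array, returning a new array.
    public static long[] transform(long[] input, LongUnaryOperator fn) {
        long[] out = new long[input.length];
        for (int i = 0; i < input.length; i++) {
            out[i] = fn.applyAsLong(input[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        long[] result = transform(new long[] {1, 2, 3}, x -> x + 1);
        System.out.println(Arrays.toString(result)); // prints [2, 3, 4]
    }
}
```

The drawback, as noted above, is that each new transformation needs its own UDF (or a UDF parameterized some other way), whereas Spark's higher-order function keeps the logic in the query.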
Hi
Thanks @Alexey. I think what @kui needs is quite similar to transform in
Spark, right?
Best,
Shammon
On Wed, Feb 22, 2023 at 10:12 PM Alexey Novakov
wrote:
> Xuekui, I guess you want high-order functions support in Flink SQL like
> Spark has https://spark.apache.org/docs/latest/api/sql/#tr
To unsubscribe, please send an email to user-zh-unsubscr...@flink.apache.org
Best,
Shammon
On Thu, Feb 23, 2023 at 12:34 AM zhangjunjie
wrote:
> Unsubscribe
>
>
>
Unsubscribe
Xuekui, I guess you want high-order functions support in Flink SQL like
Spark has https://spark.apache.org/docs/latest/api/sql/#transform ?
Best regards,
Alexey
On Wed, Feb 22, 2023 at 10:31 AM Xuekui wrote:
> Hi Yuxia and Shammon,
>
> Thanks for your reply.
>
> The requirement is dynamic in m
Hi Daniel,
Thanks for reporting this issue. According to the FLIP [1], this should be
a bug, and I've created a Jira ticket [2] to track this.
> We will introduce a declarative concept to `BuiltInFunctionDefinitions`
> and `FlinkSqlOperatorTable` that maintain a function name + version to
> insta
Frank, I had a similar issue with JSONB data-type in Postgres and ended up
extending the existing Postgres connector with capabilities to write more
data-types. Feel free to reach out to me in private and I can point you in
the right direction and share the code if needed.
On Wed, Feb 22, 2023 at
Thanks for the help, guys. I can work with that.
Maybe it makes sense to add something like that to the parquet doc file:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/formats/parquet/
This documentation does not mention hadoop at all, and it seemed just as
strai
Hi Yuxia and Shammon,
Thanks for your reply.
The requirement is dynamic in my case. If I move the logic into a UDF, it's not
flexible.
For example, there's one users column in my table whose type is Row
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/udfs/#type-i
Hi Milind Vaidya,
I would recommend checking out the release notes for each version that
you're upgrading to and/or skipping.
Best regards,
Martijn
On Mon, Feb 6, 2023 at 10:46 PM Milind Vaidya wrote:
> Thanks for your suggestion Martijn.
>
> I am in the process of upgrading but this is kind
Hi Frank,
Parquet always requires Hadoop. There is a Parquet ticket to make it
possible to read/write Parquet without depending on Hadoop, but that's
still open. So in order for Flink to be able to work with Parquet, it
requires the necessary Hadoop dependencies as outlined in
https://nightlies.apa
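For readers hitting the same issue: a rough sketch of what the Maven side of this might look like. The artifact IDs, versions, and scope below are assumptions to be verified against the Flink documentation linked above, not an authoritative setup:

```xml
<!-- Sketch only: verify artifact IDs and versions against the Flink docs. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-parquet</artifactId>
    <version>1.16.1</version>
</dependency>
<!-- Parquet still needs Hadoop classes at runtime; 'provided' assumes an
     environment where the Hadoop classpath is supplied, e.g. via
     export HADOOP_CLASSPATH=$(hadoop classpath). -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.10.2</version>
    <scope>provided</scope>
</dependency>
```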
Hi Frank,
There's currently no workaround for this as far as I know. I'm looping in
Timo who at one point wanted to work on
https://issues.apache.org/jira/browse/FLINK-29267 to mitigate this.
Best regards,
Martijn
On Mon, Feb 13, 2023 at 9:16 AM Frank Lyaruu wrote:
> Hi Flink community, I'm t