Re: Object-Reuse with Table API / Flink SQL

2025-01-23 Thread Zhanghao Chen
From: dominik.buen...@swisscom.com Sent: Wednesday, January 22, 2025 17:33 To: user@flink.apache.org Subject: Object-Reuse with Table API / Flink SQL Dear Flink Community I’ve got into a discussion with a colleague over the use of “object-reuse”. As far as I understood it is safe to use

Object-Reuse with Table API / Flink SQL

2025-01-22 Thread Dominik.Buenzli
Dear Flink Community I’ve got into a discussion with a colleague over the use of “object-reuse”. As far as I understood it is safe to use this configuration with the Table API or Flink SQL, but only if you’re not using the HashMap StateBackend (link<https://nightlies.apache.org/flink/fl

Flink SQL .print() to logs

2024-12-30 Thread Yarden BenMoshe
Hi all, I am new to flink sql and having trouble understanding how to debug my code. I wish to use some kind of a print function to see the output from my queries in the pipeline's logs. Below there's a pipeline similar to the one i am using, which integrates with kafka and sends an av

State Migration flink sql

2024-11-28 Thread Lasse Nedergaard
Hi I have a little challenge and I would like some input. I have a streaming job and some parts of it use a Flink SQL statement with a partition by clause (Partition by column1) running in production using Flink 1.20. Now I have to modify the partition by so it now is (Partition by column1

Re: Flink SQL CDC connector does nothing with the "debezium.*" settings when creating the source table

2024-10-09 Thread Leonard Xu
ysql-cdc/ > > On Wed, Oct 9, 2024 at 12:46 PM Ken CHUAN YU <mailto:ken.hung...@gmail.com>> wrote: > Hi there > I have issue to use flink sql connector to capture change data from > MariaDB(MySQL) when configure “debezium.* settings here are more details: > I have fo

Re: Flink SQL CDC connector does nothing with the "debezium.*" settings when creating the source table

2024-10-09 Thread Ken CHUAN YU
ot "debezium.snapshot.". > > On Wed, Oct 9, 2024 at 12:46 PM Ken CHUAN YU > wrote: > >> Hi there >> I have issue to use flink sql connector to capture change data from >> MariaDB(MySQL) when configure “debezium.* settings here are more details: >

Re: Flink SQL CDC connector does nothing with the "debezium.*" settings when creating the source table

2024-10-09 Thread Yaroslav Tkachenko
ect * from client_cdc; and it still take a full snapshot. > In other word it seems to me the Flink CDC connector is ignoring the > settings are Prefix debezium.* Am I missing anything here? > According to the document I be able to config the debezium but doesn’t > seems the case. > &g

Flink SQL CDC connector does nothing with the "debezium.*" settings when creating the source table

2024-10-09 Thread Ken CHUAN YU
Hi there, I have an issue using the flink sql connector to capture change data from MariaDB (MySQL) when configuring the “debezium.*” settings; here are more details: I have the following table in the source database (MariaDB): ‘’’CREATE TABLE `client_test` ( `id` int(11) unsigned NOT NULL AUTO_INCREMENT, `name

Flink SQL Kafka Upsert connector produce intermittent Tombstones

2024-10-03 Thread Qing Lim
Hi Flink user, I am using Flink SQL Kafka Upsert connector in my sink. I learned that the sink connector, specifically code in https://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table

How can a Flink SQL job implement a side output for late data?

2024-09-22 Thread casel.chen
Flink SQL is straightforward to use, but by default late data is simply dropped, which is not acceptable in some business scenarios; we need to analyze why the data was late and then repair it. Is it possible to implement a side output for late data via a SQL hint?

Flink SQL 1.15 job fails to commit consumed Kafka offsets

2024-08-09 Thread casel.chen
After a split-brain and restart of a Kafka 2.3 cluster, the Flink SQL 1.15 job did recover its Kafka consumption under Flink's retry mechanism (it kept writing to the downstream database, and the latest data was queryable there), but it never committed the consumed offsets back to Kafka, triggering a message-backlog alert. Is this a known issue in Flink 1.15? If so, what is the issue number? Thanks!

Re: [External] Re: [External] Re:Re: Backpressure issue with Flink Sql Job

2024-07-02 Thread Ashish Khatkar via user
t; > I mean ,if it helps, you can check out > https://www.ververica.com/blog/how-to-write-fast-flink-sql . > > > Regards > > On Tue, Jun 25, 2024 at 4:30 PM Ashish Khatkar via user < > user@flink.apache.org> wrote: > >> Hi Xuyang, >> >> The i

Re: [External] Re:Re: Backpressure issue with Flink Sql Job

2024-07-01 Thread Penny Rastogi
Hi Ashish, How are you performing the backfill operation? Some time window? Can you specify details? I mean ,if it helps, you can check out https://www.ververica.com/blog/how-to-write-fast-flink-sql . Regards On Tue, Jun 25, 2024 at 4:30 PM Ashish Khatkar via user < user@flink.apache.

Re: [External] Re:Re: Backpressure issue with Flink Sql Job

2024-06-25 Thread Ashish Khatkar via user
try increasing the > parallelism of this vertex, or, as Penny suggested, we could try to enhance > the memory of tm. I checked k8s metrics for the taskmanagers and I don't see any IO issues, I can relook at it again at the time we started backfill. Regarding parallelism, I don't thi

Re:Re: Backpressure issue with Flink Sql Job

2024-06-24 Thread Xuyang
source and sink throughput, if possible? On Mon, Jun 24, 2024 at 3:32 PM Ashish Khatkar via user wrote: Hi all, We are facing backpressure in the flink sql job from the sink and the backpressure only comes from a single task. This causes the checkpoint to fail despite enabling unaligned check

Re: Backpressure issue with Flink Sql Job

2024-06-24 Thread Penny Rastogi
sink throughput, if possible? On Mon, Jun 24, 2024 at 3:32 PM Ashish Khatkar via user < user@flink.apache.org> wrote: > Hi all, > > We are facing backpressure in the flink sql job from the sink and the > backpressure only comes from a single task. This causes the checkpoint

Backpressure issue with Flink Sql Job

2024-06-24 Thread Ashish Khatkar via user
Hi all, We are facing backpressure in the flink sql job from the sink and the backpressure only comes from a single task. This causes the checkpoint to fail despite enabling unaligned checkpoints and using debloating buffers. We enabled flamegraph and the task spends most of the time doing

Re: monitoring message latency for flink sql app

2024-05-16 Thread Hang Ruan
Hi, mete. As Feng Jin said, I think you could make use of the metric ` currentEmitEventTimeLag`. Besides that, if you develop your job with the DataStream API, you could add a new operator to handle it by yourself. Best, Hang Feng Jin 于2024年5月17日周五 02:44写道: > Hi Mete > > You can refer to the m

Re: monitoring message latency for flink sql app

2024-05-16 Thread Feng Jin
Hi Mete You can refer to the metrics provided by the Kafka source connector. https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/datastream/kafka//#monitoring Best, Feng On Thu, May 16, 2024 at 7:55 PM mete wrote: > Hello, > > For an sql application using kafka as sourc

RE: monitoring message latency for flink sql app

2024-05-16 Thread Anton Sidorov
Hello mete. I found this SO article https://stackoverflow.com/questions/54293808/measuring-event-time-latency-with-flink-cep If I'm not mistake, you can use Flink metrics system for operators and get time of processing event in operator. On 2024/05/16 11:54:44 mete wrote: > Hello, > > For an

monitoring message latency for flink sql app

2024-05-16 Thread mete
Hello, For an sql application using kafka as source (and kafka as sink) what would be the recommended way to monitor for processing delay? For example, i want to be able to alert if the app has a certain delay compared to some event time field in the message. Best, Mete

Re: Re: Evolving Flink SQL statement set and restoring from savepoints

2024-05-08 Thread Keith Lee
Hi Xuyang, Thank you for anticipating my questions and pointing me to the right resources. I've requested JIRA account, will create feature request once approved and then start discussion on dev mailing list. I am interested in contributing to this feature, would appreciate it if you can point m

Re:Re: Evolving Flink SQL statement set and restoring from savepoints

2024-05-07 Thread Xuyang
Hi, if the processing logic is modified, then the representation of the topology would change. Consequently, the UIDs that are determined by the topological order might change as well, which could potentially cause state recovery to fail. For further details, you can refer to [1]. Currently, th

Re: Evolving Flink SQL statement set and restoring from savepoints

2024-05-07 Thread Talat Uyarer via user
Hi Keith, When you add a new insert statement to your EXECUTE STATEMENT you change your job graph with independent two graphs.Unfortunately, Flink doesn't currently provide a way to directly force specific UIDs for operators through configuration or SQL hints. This is primarily due to how Flink's

Evolving Flink SQL statement set and restoring from savepoints

2024-05-07 Thread Keith Lee
Hello, I'm running into issues restoring from savepoint after changing SQL statement. [ERROR] Could not execute SQL statement. Reason: > java.lang.IllegalStateException: Failed to rollback to > checkpoint/savepoint > file:/tmp/flink-savepoints/savepoint-52c344-3bedb8204ff0. Cannot map > checkpoin

Re: Flink SQL checkpoint failed when running on yarn

2024-04-29 Thread Biao Geng
Hi there, Would you mind sharing the whole JM/TM log? It looks like the error log in the previous email is not the root cause. Best, Biao Geng ou...@139.com 于2024年4月29日周一 16:07写道: > Hi all: >When I ran flink sql datagen source and wrote to jdbc, checkpoint kept > failing

Re: Flink SQL Client does not start job with savepoint

2024-04-29 Thread Lee, Keith
April 2024 at 11:37 To: "Lee, Keith" Cc: "user@flink.apache.org" Subject: RE: [EXTERNAL] Flink SQL Client does not start job with savepoint CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the

Flink SQL checkpoint failed when running on yarn

2024-04-29 Thread ou...@139.com
Hi all: When I ran flink sql datagen source and wrote to jdbc, checkpoint kept failing with the following error log. 2024-04-29 15:46:25,270 ERROR org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Unhandled exception

Re: Flink SQL Client does not start job with savepoint

2024-04-26 Thread Biao Geng
th" > *Date: *Thursday, 25 April 2024 at 21:41 > *To: *"user@flink.apache.org" > *Subject: *Flink SQL Client does not start job with savepoint > > > > Hi, > > Referring to > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclien

Re: Flink SQL Client does not start job with savepoint

2024-04-25 Thread Lee, Keith
link.apache.org" Subject: Flink SQL Client does not start job with savepoint Hi, Referring to https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#start-a-sql-job-from-a-savepoint I’ve followed the instruction however I do not see evidence of the job being started

Flink SQL Client does not start job with savepoint

2024-04-25 Thread Lee, Keith
Hi, Referring to https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#start-a-sql-job-from-a-savepoint I’ve followed the instruction however I do not see evidence of the job being started with savepoint. See SQL statements excerpt below: Flink SQL> STOP
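[The docs pattern this thread refers to can be sketched as below. This is a hypothetical sketch, not the poster's actual statements: the job ID, savepoint path, and table names are placeholders, and `STOP JOB ... WITH SAVEPOINT` / `execution.savepoint.path` follow the SQL Client documentation for recent Flink versions.]

```sql
-- Stop a running job and take a savepoint (job ID is hypothetical).
STOP JOB '228d70913eab60dda85c5e7f78b5782c' WITH SAVEPOINT;

-- Point the next submission at the savepoint, then resubmit the INSERT.
SET 'execution.savepoint.path' = '/tmp/flink-savepoints/savepoint-example';
INSERT INTO my_sink SELECT * FROM my_source;

-- Unset it afterwards so later jobs start fresh.
RESET 'execution.savepoint.path';
```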

Re: Flink SQL query using a UDTAGG

2024-03-12 Thread Junrui Lee
Hi Pouria, Table aggregate functions are not currently supported in SQL, they have been introduced in the Table API as per FLIP-29: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739. Best, Junrui Pouria Pirzadeh 于2024年3月13日周三 02:06写道: > Hi, > I am using the SQL api on F

Flink SQL query using a UDTAGG

2024-03-12 Thread Pouria Pirzadeh
Hi, I am using the SQL api on Flink 1.18 and I am trying to write a SQL query which uses a 'user-defined table aggregate function' (UDTAGG). However, the documentation [1] only includes a Table API example w

Re: Re: Running Flink SQL in production

2024-03-08 Thread Robin Moffatt via user
gt; Zhanghao Chen > -- > *From:* Feng Jin > *Sent:* Friday, March 8, 2024 9:46 > *To:* Xuyang > *Cc:* Robin Moffatt ; user@flink.apache.org < > user@flink.apache.org> > *Subject:* Re: Re: Running Flink SQL in production > > Hi, > >

Re: Re: Running Flink SQL in production

2024-03-08 Thread Zhanghao Chen
nghao Chen From: Feng Jin Sent: Friday, March 8, 2024 9:46 To: Xuyang Cc: Robin Moffatt ; user@flink.apache.org Subject: Re: Re: Running Flink SQL in production Hi, If you need to use Flink SQL in a production environment, I think it would be better to use the Table API [1] and package

Re: Re: Running Flink SQL in production

2024-03-07 Thread Feng Jin
Hi, If you need to use Flink SQL in a production environment, I think it would be better to use the Table API [1] and package it into a jar. Then submit the jar to the cluster environment. [1] https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/common/#sql Best, Feng On

Re:Re: Running Flink SQL in production

2024-03-07 Thread Xuyang
Hi. Hmm, if I'm mistaken, please correct me. Using a SQL client might not be very convenient for those who need to verify the results of submissions, such as checking for exceptions related to submission failures, and so on. -- Best! Xuyang 在 2024-03-07 17:32:07,"Robin Moffatt"

Re:Running Flink SQL in production

2024-03-06 Thread Xuyang
Hi, IMO, both the SQL Client and the Restful API can provide connections to the SQL Gateway service for submitting jobs. A slight difference is that the SQL Client also offers a command-line visual interface for users to view results. In your production scenes, placing the SQL to be submitted in

Running Flink SQL in production

2024-03-06 Thread Robin Moffatt via user
I'm reading the deployment guide[1] and wanted to check my understanding. For deploying a SQL job into production, would the pattern be to write the SQL in a file that's under source control, and pass that file as an argument to SQL Client with -f argument (as in this docs example[2])? Or script a
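[The file-under-source-control pattern asked about here might look like the sketch below — a hypothetical `deploy.sql` submitted with `./bin/sql-client.sh -f deploy.sql`; the pipeline name, tables, and use of the built-in `datagen`/`print` connectors are illustrative assumptions.]

```sql
-- deploy.sql: kept in source control, submitted via sql-client.sh -f deploy.sql
SET 'pipeline.name' = 'orders-enrichment';

CREATE TABLE orders (
  order_id STRING,
  amount DOUBLE
) WITH (
  'connector' = 'datagen'
);

CREATE TABLE order_sink (
  order_id STRING,
  amount DOUBLE
) WITH (
  'connector' = 'print'
);

INSERT INTO order_sink SELECT order_id, amount FROM orders;
```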

Re: Query on using two sinks for a Flink job (Flink SQL)

2023-12-07 Thread elakiya udhayanan
day, December 6, 2023 7:21:50 PM > *收件人:* elakiya udhayanan ; user@flink.apache.org < > user@flink.apache.org> > *主题:* Re: Query on using two sinks for a Flink job (Flink SQL) > > Hi Elakiya, > > You can try executing TableEnvironmentImpl#executeInternal for non-insert &

Re: Query on using two sinks for a Flink job (Flink SQL)

2023-12-06 Thread Chen Yu
1:50 PM 收件人: elakiya udhayanan ; user@flink.apache.org 主题: Re: Query on using two sinks for a Flink job (Flink SQL) Hi Elakiya, You can try executing TableEnvironmentImpl#executeInternal for non-insert statements, then using StatementSet.addInsertSql to add multiple insertion statetments,

Re: Query on using two sinks for a Flink job (Flink SQL)

2023-12-06 Thread Feng Jin
s-master/docs/dev/table/sql/insert/ >> >> >> >> -- >> Best! >> Xuyang >> >> >> At 2023-12-06 17:49:17, "elakiya udhayanan" wrote: >> >> Hi Team, >> I would like to know the possibility of having two sinks in a &g

Re: Query on using two sinks for a Flink job (Flink SQL)

2023-12-06 Thread elakiya udhayanan
elakiya udhayanan" wrote: > > Hi Team, > I would like to know the possibility of having two sinks in a > single Flink job. In my case I am using the Flink SQL based job where I try > to consume from two different Kafka topics using the create table (as > below) DDL and th

Re:Query on using two sinks for a Flink job (Flink SQL)

2023-12-06 Thread Xuyang
would like to know the possibility of having two sinks in a single Flink job. In my case I am using the Flink SQL based job where I try to consume from two different Kafka topics using the create table (as below) DDL and then use a join condition to correlate them and at present write it to a

Re: Query on using two sinks for a Flink job (Flink SQL)

2023-12-06 Thread Zhanghao Chen
Sent: Wednesday, December 6, 2023 17:49 To: user@flink.apache.org Subject: Query on using two sinks for a Flink job (Flink SQL) Hi Team, I would like to know the possibility of having two sinks in a single Flink job. In my case I am using the Flink SQL based job where I try to consume from two

Query on using two sinks for a Flink job (Flink SQL)

2023-12-06 Thread elakiya udhayanan
Hi Team, I would like to know the possibility of having two sinks in a single Flink job. In my case I am using the Flink SQL based job where I try to consume from two different Kafka topics using the create table (as below) DDL and then use a join condition to correlate them and at present write
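[The replies in this thread point at statement sets for multi-sink jobs; a hypothetical sketch of that shape (table names are placeholders, not the poster's schema) is:]

```sql
-- One job, two sinks: both INSERTs are planned and submitted together.
EXECUTE STATEMENT SET
BEGIN
  INSERT INTO sink_a SELECT id, name FROM topic_a;
  INSERT INTO sink_b
    SELECT a.id, b.detail
    FROM topic_a AS a
    JOIN topic_b AS b ON a.id = b.id;
END;
```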

Re: Flink SQL and createRemoteEnvironment

2023-11-28 Thread sangram reddy
Hi, createRemoteEnvironment(...) methods have some obscure documentation. createRemoteEnvironment(String host,int port,String ... jarF

Flink SQL and createRemoteEnvironment

2023-11-27 Thread Oxlade, Dan
Hi, If I use StreamExecutionEnvironment.createRemoteEnvironment and then var tEnv = StreamTableEnvironment.create(env) from the resulting remote StreamExecutionEvironment will any sql executed using tEnv.executeSql be executed remotely inside the flink cluster? I'm seeing unexpected behavior wh

Re: Handling default fields in Avro messages using Flink SQL

2023-11-13 Thread Hang Ruan
; I want to use the default values in the same way that I do in other Kafka > consumers. (Specifically, that when a message on the topic is missing a > value for one of these properties, the default value is used instead). > > > > e.g. > > > > CREATE TABLE `simplified

Handling default fields in Avro messages using Flink SQL

2023-11-13 Thread Dale Lane
ue for one of these properties, the default value is used instead). e.g. CREATE TABLE `simplified-recreate` ( `favouritePhrase` STRING DEFAULT 'Hello World', `favouriteNumber` INT DEFAULT 42, `isItTrue` BOOLEAN NOT NULL ) WITH ( 'connector' = 'kafka'

Re: Re: Query on Flink SQL create DDL primary key for nested field

2023-11-01 Thread elakiya udhayanan
o, you have asked me to use Datastream API to extract the id and then > use the TableAPI feature, since I have not used the Datastream API, could > you help me with any example if possible, meanwhile i will try to do some > learning on using the DataStream API. > > Thanks, > Elakiya

Re:Re: Query on Flink SQL create DDL primary key for nested field

2023-11-01 Thread Xuyang
tream API. Thanks, Elakiya On Tue, Oct 31, 2023 at 7:34 AM Xuyang wrote: Hi, Flink SQL doesn't support a inline field in struct type as pk. You can try to raise an issue about this feature in community[1]. For a quick solution, you can try to transform it by DataStream API first by ex

Re: Query on Flink SQL create DDL primary key for nested field

2023-11-01 Thread elakiya udhayanan
I have not used the Datastream API, could you help me with any example if possible, meanwhile i will try to do some learning on using the DataStream API. Thanks, Elakiya On Tue, Oct 31, 2023 at 7:34 AM Xuyang wrote: > Hi, Flink SQL doesn't support a inline field in struct type as pk.

Re:Query on Flink SQL create DDL primary key for nested field

2023-10-30 Thread Xuyang
Hi, Flink SQL doesn't support an inline field in struct type as pk. You can try to raise an issue about this feature in community[1]. For a quick solution, you can try to transform it by DataStream API first by extracting the 'id' and then convert it to Table API to use SQL

Query on Flink SQL create DDL primary key for nested field

2023-10-30 Thread elakiya udhayanan
Hi team, I have a Kafka topic named employee which uses confluent avro schema and will emit the payload as below: { "employee": { "id": "123456", "name": "sampleName" } } I am using the upsert-kafka connector to consume the events from the
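[As the replies note, Flink SQL only accepts top-level physical columns as a primary key, so the nested `employee.id` cannot be declared as one. A hypothetical sketch of the flattened-schema alternative (broker address, schema registry URL, and formats are placeholder assumptions) is:]

```sql
-- Flattened schema: 'id' is a top-level column and can be the upsert key.
CREATE TABLE employee (
  id STRING,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'employee',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'raw',
  'value.format' = 'avro-confluent',
  'value.avro-confluent.url' = 'http://localhost:8081'
);
```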

Re: Barriers in Flink SQL

2023-10-25 Thread Giannis Polyzos
Hi Ralph, can you explain a bit more? When you say "barriers" you should be referring to the checkpoints, but from your description seems more like watermarks. What functionality is supported in Flink and not Flink SQL? In terms of watermarks, there were a few shortcomings between th

Barriers in Flink SQL

2023-10-25 Thread Ralph Matthias Debusmann
Hi, one question - it seems that "barriers" are perfectly supported by Flink, but not yet supported in Flink SQL. When I e.g. do a UNION of two views derived from one source table fed by Kafka, I get thousands of intermediate results which are incorrect (the example I am using is this

Re:Re: Re: Flink SQL: non-deterministic join behavior, is it expected?

2023-10-24 Thread Xuyang
+U records arrive the sink with random order? What is the parallelism of these operators ? It would be better if you could post an example that can be reproduced. -- Best! Xuyang At 2023-10-20 04:31:09, "Yaroslav Tkachenko" wrote: Hi everyone, I noticed that a

Re: Re: Flink SQL: non-deterministic join behavior, is it expected?

2023-10-23 Thread Yaroslav Tkachenko
you means the one +I record and two +U records arrive >> the sink with random order? What is the parallelism of these operators ? It >> would be better if you could post an example that can be reproduced. >> >> >> >> >> -- >> Best! >>

Re:Re: Flink SQL: non-deterministic join behavior, is it expected?

2023-10-22 Thread Xuyang
these operators ? It would be better if you could post an example that can be reproduced. -- Best! Xuyang At 2023-10-20 04:31:09, "Yaroslav Tkachenko" wrote: Hi everyone, I noticed that a simple INNER JOIN in Flink SQL behaves non-deterministicly. I'd like to

Re: Flink SQL: MySQL to Elaticsearch soft delete

2023-10-22 Thread Feng Jin
Hi Hemi You can not just filter the delete records. You must use the following syntax to generate a delete record. ``` CREATE TABLE test_source (f1 xxx, f2, xxx, f3 xxx, deleted boolean) with (.); INSERT INTO es_sink SELECT f1, f2, f3 FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY f1

Re: Flink SQL: MySQL to Elaticsearch soft delete

2023-10-21 Thread Feng Jin
Hi Hemi, One possible way, but it may generate many useless states. As shown below: ``` CREATE TABLE test_source (f1 xxx, f2, xxx, f3 xxx, deleted boolean) with (.); INSERT INTO es_sink SELECT f1, f2, f3 FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY f1, f2 ORDER BY proctime()) as
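[The quoted example is truncated; it appears to follow Flink's deduplication pattern. A completed sketch — the `WHERE` clause is my guess at the truncated part, column names follow the snippet, untested — would be:]

```sql
-- Keep only the latest row per key, then drop soft-deleted rows. The dedup
-- operator retracts the previously emitted row, which an upsert sink such as
-- Elasticsearch receives as a delete.
INSERT INTO es_sink
SELECT f1, f2, f3
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY f1, f2 ORDER BY PROCTIME() DESC) AS rn
  FROM test_source
)
WHERE rn = 1 AND deleted = false;
```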

Re: Flink SQL exception on using cte

2023-10-20 Thread elakiya udhayanan
Thanks Robin and Aniket for the suggestions you have given. Will try and update on the same. Thanks, Elakiya On Fri, Oct 20, 2023 at 2:34 AM Robin Moffatt wrote: > CTEs are supported, you can see an example in the docs [1] [2]. In the > latter doc, it also says > > > CTEs are supported in View

Re: Flink SQL: non-deterministic join behavior, is it expected?

2023-10-20 Thread Yaroslav Tkachenko
eproduced. > > > > > -- > Best! > Xuyang > > > At 2023-10-20 04:31:09, "Yaroslav Tkachenko" wrote: > > Hi everyone, > > I noticed that a simple INNER JOIN in Flink SQL behaves > non-deterministicly. I'd like to understand if it's

Flink SQL: MySQL to Elaticsearch soft delete

2023-10-19 Thread Hemi Grs
hello everyone, right now I'm using flink to sync from mysql to elasticsearch and so far so good. If we insert, update, or delete it will sync from mysql to elastic without any problem. The problem I have right now is the application is not actually doing hard delete to the records in mysql, but

Flink SQL: non-deterministic join behavior, is it expected?

2023-10-19 Thread Yaroslav Tkachenko
Hi everyone, I noticed that a simple INNER JOIN in Flink SQL behaves non-deterministically. I'd like to understand if it's expected and whether an issue has been created to address it. In my example, I have the following query: SELECT a.funder, a.amounts_added, r.amounts_removed FROM table_a

Re: Flink SQL exception on using cte

2023-10-19 Thread Robin Moffatt
CTEs are supported, you can see an example in the docs [1] [2]. In the latter doc, it also says > CTEs are supported in Views, CTAS and INSERT statement So I'm just guessing here, but your SQL doesn't look right. The CTE needs to return a column called `pod`, and the `FROM` clause for the `SELECT
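[A minimal hypothetical sketch of the point being made — the outer SELECT can only reference columns that the CTE actually projects (table and column names are placeholders):]

```sql
WITH pod_cpu AS (
  SELECT pod, cpu_usage FROM metrics
)
-- 'pod' and 'cpu_usage' resolve because the CTE projects them.
SELECT pod, cpu_usage FROM pod_cpu;
```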

RE: Flink SQL exception on using cte

2023-10-19 Thread Aniket Sule
cte1 AS a, cte2 as b . ); Hope this helps you. Regards, Aniket Sule From: elakiya udhayanan Sent: Thursday, October 19, 2023 3:04 AM To: user@flink.apache.org Subject: Flink SQL exception on using cte CAUTION:External email. Do not click or open attachments unless you

Flink SQL exception on using cte

2023-10-19 Thread elakiya udhayanan
Hi Team, I have a Flink job which uses the upsert-kafka connector to consume the events from two different Kafka topics (confluent avro serialized) and write them to two different tables (in Flink's memory using the Flink's SQL DDL statements). I want to correlate them using the SQL join statemen

Re: Flink SQL runtime

2023-10-12 Thread Enric Ott
For example, operator state and checkpoint listener of flink sql runtime. I'm trying to modify flink sql compiled behavior programmatically and get corresponding flink sql ru

Re: Flink SQL runtime

2023-10-12 Thread liu ron
Hi, What SQL Runtime are you referring to? Why do you need to get it? Best, Ron Enric Ott <243816...@qq.com> 于2023年10月12日周四 14:26写道: > Hi,Team: > Is there any approach to get flink sql runtime via api ? > Any help would be appreciated. >

Flink SQL runtime

2023-10-11 Thread Enric Ott
Hi,Team: Is there any approach to get flink sql runtime via api ? Any help would be appreciated.

Re: Flink SQL query with window-TVF fails

2023-08-14 Thread liu ron
Hi, Pouria Flink SQL uses the calcite to parse SQL, this is the calcite limitation, the minimum precision it supports is Second [1]. [1] https://github.com/apache/calcite/blob/main/core/src/main/codegen/templates/Parser.jj#L5067 Best, Ron Pouria Pirzadeh 于2023年8月15日周二 08:09写道: > I am try

Flink SQL query with window-TVF fails

2023-08-14 Thread Pouria Pirzadeh
I am trying to run a window aggregation SQL query (on Flink 1.16) with a windowing TVF for a TUMBLE window with a size of 5 milliseconds. It seems Flink does not let a window size use a time unit smaller than seconds. Is that correct? (The documentation
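[Per the reply citing Calcite's interval-precision limit, a sub-second TUMBLE size is rejected at parse time, while a seconds-based window parses fine. A hypothetical sketch (table and column names are placeholders):]

```sql
-- Accepted: window size expressed in seconds.
SELECT window_start, window_end, COUNT(*) AS cnt
FROM TABLE(
  TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '5' SECOND))
GROUP BY window_start, window_end;

-- A window size below one second (e.g. a 5 ms interval) is rejected
-- by the SQL parser rather than at runtime.
```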

Production use case of Flink SQL gateway

2023-08-06 Thread Rajat Ahuja
Hi Flinkers, I am fairly new to the flink ecosystem, and currently exploring flink SQL gateway. I have the following questions if someone has experience can throw some light. 1) Anyone using Flink SQL gateway to production to run workloads ? If yes, how is it being used exactly in production

Re: Flink sql client doesn't work with "partition by" clause

2023-07-31 Thread liu ron
Hi, dongwoo You can check the SqlClient log first, I think the exception has been logged in SqlClient log file. Best, Ron Dongwoo Kim 于2023年7月31日周一 22:38写道: > > > -- Forwarded message - > 보낸사람: Dongwoo Kim > Date: 2023년 7월 31일 (월) 오후 11:36 > Subject: Re

Fwd: Flink sql client doesn't work with "partition by" clause

2023-07-31 Thread Dongwoo Kim
-- Forwarded message - 보낸사람: Dongwoo Kim Date: 2023년 7월 31일 (월) 오후 11:36 Subject: Re: Flink sql client doesn't work with "partition by" clause To: liu ron Hi, ron. Actually I'm not receiving any exception message when executing the *partition by* clause in

Re: Suggestions for Open Source FLINK SQL editor

2023-07-31 Thread liu ron
rote: >> > Hi team, >> > >> > I have set up a session cluster on k8s via sql gateway. I am looking >> for >> > an open source Flink sql editor that can submit sql queries on top of >> the >> > k8s session cluster. Any suggestions for sql editor to submit queries ? >> > >> > >> > Thanks >> > >> >

Re: Flink sql client doesn't work with "partition by" clause

2023-07-28 Thread Dongwoo Kim
Hello all, I've realized that the previous mail had some error which caused invisible text. So I'm resending the mail below. Hello all, I have found that the Flink sql client doesn't work with the *"partition by"* clause. Is this a bug? It's a bit weird since wh

Re: Suggestions for Open Source FLINK SQL editor

2023-07-28 Thread Guanghui Zhang
t of custom > development and bug fixing on its master branch. > > On 2023/07/19 16:47:43 Rajat Ahuja wrote: > > Hi team, > > > > I have set up a session cluster on k8s via sql gateway. I am looking for > > an open source Flink sql editor that can submit sql queries on

Flink sql client doesn't work with "partition by" clause

2023-07-28 Thread Dongwoo Kim
Hello all, I have found that the flink sql client doesn't work with the "partition by" clause. Is this a bug? It's a bit weird since when I execute the same sql with tableEnv.executeSql(statement) code it works as expected. Has anyone tackled this kind of issue? I have tested in flink 1.16.

RE: Suggestions for Open Source FLINK SQL editor

2023-07-26 Thread Jiabao Sun
> > I have set up a session cluster on k8s via sql gateway. I am looking for > an open source Flink sql editor that can submit sql queries on top of the > k8s session cluster. Any suggestions for sql editor to submit queries ? > > > Thanks >

RE: Suggestions for Open Source FLINK SQL editor

2023-07-19 Thread Guozhen Yang
teway. I am looking for > an open source Flink sql editor that can submit sql queries on top of the > k8s session cluster. Any suggestions for sql editor to submit queries ? > > > Thanks >

Re: Suggestions for Open Source FLINK SQL editor

2023-07-19 Thread Shammon FY
Hi Rajat, Currently sql-gateway supports REST[1] and Hive[2] endpoints. For Hive endpoints, you can submit sql jobs with existing Hive clients, such as hive jdbc, apache superset and other systems. For REST endpoints, you can use flink sql-client to submit your sql jobs. We support jdbc-driver

Suggestions for Open Source FLINK SQL editor

2023-07-19 Thread Rajat Ahuja
Hi team, I have set up a session cluster on k8s via sql gateway. I am looking for an open source Flink sql editor that can submit sql queries on top of the k8s session cluster. Any suggestions for sql editor to submit queries ? Thanks

Re: Query on Flink SQL primary key for nested field

2023-07-11 Thread elakiya udhayanan
a new field which would be outside of the employee object, if flink doesn't support having compound identifier as primary key. Thanks, Elakiya On Tue, Jul 11, 2023 at 3:12 PM Jane Chan wrote: > Hi Elakiya, > > Did you encounter a ParserException when executing the DDL? AFAIK, F

Re: Query on Flink SQL primary key for nested field

2023-07-11 Thread Jane Chan
Hi Elakiya, Did you encounter a ParserException when executing the DDL? AFAIK, Flink SQL does not support declaring a nested column (compound identifier) as primary key at syntax level. A possible workaround is to change the schema to not contain record type, then you can change the DDL to the

Re: Query on Flink SQL primary key for nested field

2023-07-10 Thread elakiya udhayanan
ot;: "sampleName"}}I am using the upsert-kafka connector to consume the events from the above Kafka topic as below using the Flink SQL DDL statement, also here I want to use the id field as the Primary key. But I am unable to use the id field since it is inside the object.DDL Stateme

Re: Query on Flink SQL primary key for nested field

2023-07-10 Thread Hang Ruan
ul 10, 2023 at 1:09 PM Hang Ruan >>> wrote: >>> >>>> Hi, elakiya. >>>> >>>> The upsert-kafka connector will read the primary keys from the Kafka >>>> message keys. We cannot define the fields in the Kafka message values as >&

Re: Query on Flink SQL primary key for nested field

2023-07-10 Thread elakiya udhayanan
not define the fields in the Kafka message values as >>> the primary key. >>> >>> Best, >>> Hang >>> >>> elakiya udhayanan 于2023年7月10日周一 13:49写道: >>> >>>> Hi team, >>>> >>>> I have a Kafka topic named employe

Re: Query on Flink SQL primary key for nested field

2023-07-10 Thread Hang Ruan
es as >> the primary key. >> >> Best, >> Hang >> >> elakiya udhayanan 于2023年7月10日周一 13:49写道: >> >>> Hi team, >>> >>> I have a Kafka topic named employee which uses confluent avro schema and >>> will emit the payload as bel

Re: Query on Flink SQL primary key for nested field

2023-07-10 Thread elakiya udhayanan
t; elakiya udhayanan 于2023年7月10日周一 13:49写道: > >> Hi team, >> >> I have a Kafka topic named employee which uses confluent avro schema and >> will emit the payload as below: >> >> { >> "employee": { >> "id": "123456", >>

Re: Query on Flink SQL primary key for nested field

2023-07-10 Thread Hang Ruan
hich uses confluent avro schema and > will emit the payload as below: > > { > "employee": { > "id": "123456", > "name": "sampleName" > } > } > I am using the upsert-kafka connector to consume the events from the above > Kafka to

Query on Flink SQL primary key for nested field

2023-07-09 Thread elakiya udhayanan
Hi team, I have a Kafka topic named employee which uses confluent avro schema and will emit the payload as below: { "employee": { "id": "123456", "name": "sampleName" } } I am using the upsert-kafka connector to consume the events from the

How to use Flink SQL Gateway with Flink on native Kubernetes

2023-07-03 Thread chaojianok
Hi all, a question: we have deployed Flink on a k8s cluster in native Kubernetes mode, and now we need to use Flink SQL Gateway inside that cluster. Is there a good approach? Currently we exec into the pod to start the SQL Gateway, then create a flink-sql-gateway Service in k8s so the gateway can be reached through that Service. The problem with this is that deployment requires entering the pod to start the service, which defeats automated deployment. The commands we use are below; suggestions for avoiding this would be appreciated. 1. Create the Flink cluster ./bin/kubernetes

Re: Query on Flink SQL primary key for nested field

2023-06-24 Thread Shammon FY
confluent avro schema and > will emit the payload as below: > > { > "employee": { > "id": "123456", > "name": "sampleName" > } > } > I am using the upsert-kafka connector to consume the events from the above > Kafka topic

Query on Flink SQL primary key for nested field

2023-06-23 Thread elakiya udhayanan
Hi team, I have a Kafka topic named employee which uses confluent avro schema and will emit the payload as below: { "employee": { "id": "123456", "name": "sampleName" } } I am using the upsert-kafka connector to consume the events from the

Re: Flink SQL Async UDF

2023-05-23 Thread Aitozi
Hi Giannis: I think this is due to the User Defined AsyncTableFunction have not been supported yet. It has a little different in type inference. I just opened a thread discuss about supporting this feature, you can refer to: https://lists.apache.org/thread/7vk1799ryvrz4lsm5254q64ctm89mx2l Than

Flink Sql erroring at runtime Flink 1.16

2023-05-17 Thread neha goyal
Hello, Looks like there is a bug with Flink 1.16's IF operator. If I use UPPER or TRIM functions(there might be more such functions), I am getting the exception. These functions used to work fine with Flink 1.13. select if( address_id = 'a', 'default', upper(address_id) ) as addres
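[An untested workaround idea, not from the thread: express the same logic with `CASE`, which is semantically equivalent to the quoted `IF` and may sidestep the reported exception. Column and table names follow the quoted query; `some_table` is a placeholder.]

```sql
SELECT CASE WHEN address_id = 'a' THEN 'default'
            ELSE UPPER(address_id)
       END AS address
FROM some_table;
```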

Re: Flink Sql erroring at runtime

2023-05-11 Thread neha goyal
Hi Hang and community, There is a correction in my earlier email. The issue comes when I use the UPPER or TRIM function with IF. Looks like there is a bug with Flink 1.16's IF operator. If I use UPPER or TRIM functions(there might be more such functions), I am getting the exception. These functions
