QL> show tables;
+----------------+
|     table name |
+----------------+
|  RandomNumbers |
+----------------+
1 row in set
```
Please do not forget to package all the classes related to
TestFileSystemCatalogFactory, TestFileSystemCatalog and TestFileSystemTable.
What about using "./bin/start-cluster.sh -Dtable.catalog-store.kind=file
-Dtable.catalog-store.file.path=/Users/vagarwal/tmp/flink/store/" to start the
SQL client?
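If the file catalog store is picked up at startup, a quick way to verify it from the SQL client is to list and switch catalogs; the catalog name `my_test_catalog` below is only an assumed placeholder:
```sql
-- Assumed placeholder name: replace my_test_catalog with whatever catalog
-- is registered in /Users/vagarwal/tmp/flink/store/.
SHOW CATALOGS;
USE CATALOG my_test_catalog;
SHOW TABLES;
```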
--
Best!
Xuyang
At 2025-01-07 00:30:19, "Vinay Agarwal" wrote:
Still doesn't show.
```SQL
Flin
/#configure-catalog-store
--
Best!
Xuyang
At 2025-01-05 05:59:22, "Vinay Agarwal" wrote:
I got Flink 1.20.0 to work (tweaked my `build.gradle` based on the
documentation) and created a catalog store and a file system catalog. But I
still don't see my catalog or table in the SQL CLI
How about porting the `TestFileSystemCatalogFactory` back to 1.19 and
rebuilding this catalog jar?
--
Best!
Xuyang
At 2025-01-03 06:23:25, "Vinay Agarwal" wrote:
Thanks again for your answer. I have to use Flink version 1.20.0 because
`TestFileSystemCatalogFactory` doesn't
/blob/master/flink-test-utils-parent/flink-table-filesystem-test-utils/src/main/java/org/apache/flink/table/file/testutils/catalog/TestFileSystemCatalogFactory.java
--
Best!
Xuyang
At 2025-01-01 00:31:43, "Vinay Agarwal" wrote:
Thanks Xuyang for your answer. I, however, can't
hints/#dynamic-table-options
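For reference, a minimal sketch of the dynamic table options (SQL hints) syntax from the linked page; the table name `kafka_orders` and the option value are assumptions:
```sql
-- Override a connector option for a single query via a hint
-- (table name and option value are illustrative only).
SELECT *
FROM kafka_orders /*+ OPTIONS('scan.startup.mode' = 'earliest-offset') */;
```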
--
Best!
Xuyang
At 2024-12-26 20:24:56, "Guillermo Ortiz Fernández"
wrote:
I'm using Flink SQL and have several Kafka topics with different partition
counts, ranging from 4 to 320 partitions. Is it possible to specify the level
of para
/#catalog-store
--
Best!
Xuyang
At 2024-12-27 01:28:38, "Vinay Agarwal" wrote:
Hello,
I am looking for an elaboration, or work around, of an old answer to this
question here
(https://stackoverflow.com/questions/56049472/create-sql-table-from-a-datastream-in-a-java-scala-p
You are right... Currently, the Flink SQL Client submits DML statements
asynchronously to the Flink cluster,
which means it is not possible to determine the final success or failure of the
job directly from the console.
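A possible workaround, assuming the `table.dml-sync` option is available in your Flink version, is to make the SQL Client execute DML synchronously so it blocks until the job finishes; the table names below are placeholders:
```sql
-- Execute DML synchronously so the client waits for the job result
-- (sink_table/source_table are placeholder names).
SET 'table.dml-sync' = 'true';
INSERT INTO sink_table SELECT * FROM source_table;
```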
--
Best!
Xuyang
At 2024-12-26 20:38:56, "Guillermo
with the Kafka source not reading any
data, rather than being related to the stateful over agg with expression
"ARRAY_REMOVE(IFNULL(LAG(zoneIds, 1) OVER (PARTITION BY code ORDER BY ts),
ARRAY[-1]), -1) AS prev_zoneIds.".
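For context, a minimal sketch of how that over-aggregation expression could appear in a full query; the table name `events` and the column types are assumptions:
```sql
-- Previous row's zoneIds per code, with the ARRAY[-1] sentinel removed
-- (events/code/ts/zoneIds are assumed names; ts is the event-time attribute).
SELECT
  code,
  ts,
  ARRAY_REMOVE(
    IFNULL(LAG(zoneIds, 1) OVER (PARTITION BY code ORDER BY ts), ARRAY[-1]),
    -1) AS prev_zoneIds
FROM events;
```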
--
Best!
Xuyang
At 2024-12-20 19:22:15, "
ether these solutions have been
effective.
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/dev/table/config/#table-exec-source-idle-timeout
--
Best!
Xuyang
At 2024-12-20 00:19:11, "Guillermo Ortiz Fernández"
wrote:
I am working on a program usi
+Release#id-2.0Release-TimelinePlan
--
Best!
Xuyang
At 2024-12-17 15:42:54, "Anuj Jain" wrote:
Hi,
Will Flink 2.0 support Open JDK - Java 17 ?
And Is there any plan for adding Java 17 support in the Flink 1.x series ? All
I could see in the documentation is th
!
Xuyang
At 2024-11-21 23:39:23, "santosh techie" wrote:
Hello,
While evaluating Apache Flink, I came across Apache StateFul functions.
https://nightlies.apache.org/flink/flink-statefun-docs-stable/.
It is quite interesting and useful.
But I have queries regarding the
/jira/browse/CALCITE-2798
--
Best!
Xuyang
At 2024-11-17 19:54:55, "Feng Jin" wrote:
Hi Guillermo
> "When is this ordering done, and until when?"
Assuming the current watermark is 10:00
1. Currently, data before 10:00 will be sorted.
2. If data after 10:00 arrives
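The ordering described above is driven by the watermark declared on the source table; a minimal sketch, with assumed table and column names:
```sql
-- Rows are emitted in event-time order once the watermark passes them; here
-- data more than 5 seconds behind the max seen timestamp is considered late.
CREATE TABLE events (
  id STRING,
  ts TIMESTAMP(3),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'datagen'
);
```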
Hi, Shiyuan.
The current mailing list primarily focuses on user issues.
For discussions related to the new flip, please move to the dev mailing list[1].
[1] d...@flink.apache.org
--
Best!
Xuyang
At 2024-11-14 11:00:40, "Linda Li" wrote:
Hi,
Starting a
/flink-docs-master/docs/dev/table/sql/queries/joins/#lookup-join
--
Best!
Xuyang
At 2024-11-05 10:41:38, "Xuyang" wrote:
Hi.
The Upsert-Kafka, as far as I know, does not implement a lookup table
interface. Therefore, what you’re describing
resembles a temporal join[1]. Simi
[2] https://lists.apache.org/thread/mnb872m4s1yww6dl5f680dz33synd9j8
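A minimal sketch of the temporal join syntax referenced in [1]; all table and column names are assumptions, with `currency_rates` standing in for an upsert-kafka versioned table:
```sql
-- Join each order against the version of the rate valid at the order's time
-- (orders/currency_rates and their columns are illustrative only).
SELECT o.order_id, o.price * r.rate AS converted_price
FROM orders AS o
JOIN currency_rates FOR SYSTEM_TIME AS OF o.order_time AS r
  ON o.currency = r.currency;
```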
--
Best!
Xuyang
At 2024-11-04 22:35:06, "Guillermo Ortiz Fernández"
wrote:
We are trying to migrate a kafka streams applications to FlinkSql. Kafka
Streams app uses GKTables to avoid shuffles for the lookup
/jdbc/#lookup-cache
--
Best!
Xuyang
At 2024-11-04 17:54:36, "Xuyang" wrote:
Hi,
The BROADCAST[1] join hint currently applies only to batch mode.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/hints/#broadcast
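For reference, a minimal sketch of the hint syntax in batch mode; the table names are assumptions:
```sql
-- Batch mode only: broadcast the (small) dim table to the join
-- (fact_table/dim are placeholder names).
SELECT /*+ BROADCAST(dim) */ f.id, d.name
FROM fact_table AS f
JOIN dim AS d ON f.dim_id = d.id;
```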
--
Best!
Xuyang
At 2024-11-04 17:06:59, "Guillermo Ortiz Fernández"
wrote:
Hi,
I
://cwiki.apache.org/confluence/display/FLINK/2.0+Release
--
Best!
Xuyang
At 2024-10-31 22:54:04, "Anil Dasari" wrote:
Hi Yanquan,
Yes. Mainly, I am looking for JDK 11 support. Thank you for sharing the
details.
On Thu, Oct 31, 2024 at 4:26 AM Yanquan Lv wrote:
Hi, Anil.
://issues.apache.org/jira/browse/FLINK-20527
[2]
https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/functions/systemfunctions/#temporal-functions
--
Best!
Xuyang
At 2024-09-23 11:03:14, "casel.chen" wrote:
Flink SQL is straightforward to use, but by default late data is simply
dropped, which is not appropriate in some business scenarios; we need to
further analyze the cause of the data latency and then
k/log/"
--
Best!
Xuyang
At 2024-09-21 14:21:14, "Apollo Elon" wrote:
When I tried to calculate the median of the age field in a table with 120
million rows, I implemented a custom UDAF. However, there was a significant
performance difference between the tw
`TableDescriptor.Builder#option(...)` to
add the `with` clause attributes.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/common/#connector-tables
--
Best!
Xuyang
At 2024-09-06 11:17:00, "Enric Ott" <243816...@qq.com> wrote:
Hi,Xuyang:
Hi, Enric.
Could you elaborate on your scenario?
When you refer to "dynamic", are you asking how to determine the actual "with
option" at runtime?
--
Best!
Xuyang
At 2024-09-04 11:30:00, "Enric Ott" <243816...@qq.com> wrote:
Hi,Community:
reduced. As for whether
the state cleanup in RocksDB is lazy or eager, I'm not too familiar with that.
--
Best!
Xuyang
At 2024-08-01 14:56:51, "banu priya" wrote:
Hi Xuyang,
Thanks a lot for the reply.
I am using standalone Flink. Now I have found a workin
r mode, where
changes to the yaml file only affect the job, not the cluster.
--
Best!
Xuyang
At 2024-07-30 16:23:24, "banu priya" wrote:
Hi All,
I am trying to change GC settings of Taskmanager. I am using flink 1.18 and
Java 8. Java 8 uses the parallel GC by default. I
--
Best!
Xuyang
At 2024-07-17 17:22:01, "liu ze" wrote:
Hi,
Currently, Flink's windows are based on time (or a fixed number of elements). I
want to trigger window computation based on specific events (marked within the
data). In the DataStream API, this can be
Hi, this is a bug fixed in
https://github.com/apache/flink/pull/25075/files#diff-4ee2dd065d2b45fb64cacd5977bec6126396cc3b56e72addfe434701ac301efeL405.
You can try to join input_table and udtf first, and then use it as the input of
window tvf to bypass this bug.
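A rough sketch of that workaround, with assumed table, column, and UDTF names:
```sql
-- Step 1: join the input with the UDTF first (all names are illustrative).
CREATE TEMPORARY VIEW enriched AS
SELECT t.ts, t.id, u.tag
FROM input_table AS t,
     LATERAL TABLE(my_udtf(t.payload)) AS u(tag);

-- Step 2: feed the joined view into the window TVF.
SELECT window_start, window_end, id, COUNT(*) AS cnt
FROM TABLE(
  TUMBLE(TABLE enriched, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end, id;
```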
--
Best!
Xuyang
Sorry, I meant "could not".
--
Best!
Xuyang
At 2024-07-10 15:21:48, "Xuyang" wrote:
Hi, which Flink version do you use? I could reproduce this bug in master. My
test SQL is below:
```
CREATE TABLE UNITS_DATA(
proctime AS PROCTIME()
---+
| +I | 2024-07-10 15:19:17.661 | 64911 |  -772065898 |   342255977 |             |
| +I | 2024-07-10 15:19:20.651 | 64911 |  1165938984 |   162006411 |   342255977 |
| +I | 2024-07-10 15:19:22.614 | 64911 | -1000903042 | -2059780314 |   162006411 |
```
--
!
Xuyang
At 2024-07-02 02:42:20, "Ran Jiang via user" wrote:
Hi team,
We noticed that there wasn't any new version of Stateful Function released
since last year. Is it still actively being developed? We also noticed that the
dependencies of it were also old. Is there a ro
Hi, Ganesh.
Can you take a look if the cache strategy in jdbc lookup table can meet your
requirement?
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/table/jdbc/#lookup-cache
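A minimal sketch of enabling the lookup cache on a JDBC dimension table, using the older option names from the linked 1.17 docs; the table, URL, and values are all assumptions:
```sql
-- Cached lookups: rows are kept up to the TTL before being re-queried
-- (all names and values here are placeholders).
CREATE TABLE dim_users (
  id   BIGINT,
  name STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/mydb',
  'table-name' = 'users',
  'lookup.cache.max-rows' = '10000',
  'lookup.cache.ttl' = '10min'
);
```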
--
Best!
Xuyang
At 2024-07-02 03:35:41, "Ganesh Walse"
memory of tm.
--
Best!
Xuyang
At 2024-06-24 21:28:58, "Penny Rastogi" wrote:
Hi Ashish,
Can you check a few things.
1. Is your source broker count also 20 for both topics?
2. You can try increasing the state operation memory and reduce the disk I/O.
Increase the number of CU
filesystem connector as a temporary table to work around?
--
Best!
Xuyang
At 2024-06-07 03:26:27, "Phil Stavridis" wrote:
Hello,
I am trying to create an in-memory table in PyFlink to use as a staging table
after ingesting some data from Kafka but it doesn’t work as expected
Hi,
Could you provide more details about it, such as a minimum reproducible sql?
--
Best!
Xuyang
At 2024-06-06 09:03:16, "Fokou Toukam, Thierry"
wrote:
Hi, I'm trying to deploy a Flink job but I have this error. How can I solve it,
please?
|
Thierry FOKOU | IT
IDE either, I've encountered this before, and it
was an issue with jar package conflicts.
--
Best!
Xuyang
At 2024-05-15 15:20:45, "Fidea Lidea" wrote:
Hi Team,
I've written a Flink job and enabled the slf4j logging mechanism for it.
Flow of Flink job: Kafka sour
+Redis+HyperLogLog+Connector+for+Flink
--
Best!
Xuyang
At 2024-05-08 07:28:08, "Zhou, Tony" wrote:
Hi team,
I need a Redis sink connector for my Flink app but the best I can find is from
Bahir, which is deprecated. I am wondering if someone in the community is
/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeBase.java#L243C38-L243C62
[2] https://issues.apache.org/jira/projects/FLINK/issues/
--
Best!
Xuyang
At 2024-05-08 06:13:29, "Talat Uyarer via user" wrote:
Hi Keith,
When you add a new insert statement to yo
master/docs/dev/datastream/side_output/
--
Best!
Xuyang
At 2024-04-17 16:56:54, "Sachin Mittal" wrote:
Hi,
Suppose my pipeline is:
data
.keyBy(new MyKeySelector())
.window(TumblingEventTimeWindows.of(Time.seconds(60)))
.allowedLateness(Time.seconds(180))
Thanks for driving this ;)
--
Best!
Xuyang
At 2024-04-16 10:47:56, "Xiaolong Wang" wrote:
Reported. JIRA link: https://issues.apache.org/jira/browse/FLINK-35117?filter=-2
On Tue, Apr 16, 2024 at 10:05 AM Xiaolong Wang
wrote:
By adding `'org.apache.commo
s bug by this way, I think we should
open a bug issue for it.
--
Best!
Xuyang
At 2024-04-09 18:11:27, "Xiaolong Wang" wrote:
Hi,
I found a ClassNotFound exception when using Flink 1.19's AsyncScalarFunction.
Stack trace:
Caused b
ython+UDF+in+SQL+Function+DDL
--
Best!
Xuyang
At 2024-04-03 07:56:14, "Zhou, Tony" wrote:
Hi everyone,
Out of curiosity, I have a high level question with Flink: I have a use case
where I want to define some UDFs in Python while have the main logics written
/flink-docs-master/docs/dev/table/tuning/#split-distinct-aggregation
--
Best!
Xuyang
At 2024-03-29 21:49:42, "Lei Wang" wrote:
Perhaps I can keyBy(Hash(originalKey) % 10)
Then in the KeyProcessOperator using MapState instead of ValueState
MapState mapState
There'
Hi, Kartik.
On the Flink UI, is there any operator with a relatively high busy ratio? Could
you also try using a flame graph to provide more information? [1]
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/flame_graphs/
--
Best!
Xuyang
At 2024-03-30
Hi, ouywl.
IMO, +1 for this option. You can start a discussion on the dev mailing list[1]
to seek more input from more community developers.
[1] d...@flink.apache.org
--
Best!
Xuyang
At 2024-03-27 11:28:37, "ou...@139.com" wrote:
When using the jdbc sink connector, t
-connector-elasticsearch/pull/62
--
Best!
Xuyang
At 2024-03-22 13:21:04, "Fidea Lidea" wrote:
Hi Xuyang
I am new to Flink and I don't know how to integrate this dependency into the
code.
Can you please share some examples that I can refer to?
Thanks
Nida
On Fri, M
Hi, Nida.
Can you explain in more detail what you mean by "unable to use it"? Did you get
an exception when using it?
--
Best!
Xuyang
At 2024-03-21 21:14:53, "Fidea Lidea" wrote:
Thank you Xuyang.
I added the above flink-connector-elasticsearch dependency in my project.
Bu
Cheers!
--
Best!
Xuyang
At 2024-03-21 10:28:45, "Rui Fan" <1996fan...@gmail.com> wrote:
>Congratulations!
>
>Best,
>Rui
>
>On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan wrote:
>
>> Congratulations!
>>
>> Best,
>> Hang
>>
/java/org/apache/flink/connector/file/table/FileSystemTableSink.java#L232
--
Best!
Xuyang
At 2024-03-20 20:41:11, "Lasse Nedergaard"
wrote:
>Hi.
>
>Anyone know if it’s possible to control the file names eg change the uuid file
>names and extensions to so
Hi, Nick.
Can you try `cascading window aggregation` here[1] if it meets your needs?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/window-agg/#cascading-window-aggregation
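A minimal sketch of the cascading pattern from [1]; the table and column names are assumptions:
```sql
-- First level: 1-minute tumbling windows, exposing window_time as the
-- rowtime for the next level (orders/ts/amount are assumed names).
CREATE TEMPORARY VIEW agg_1m AS
SELECT window_start, window_end, window_time AS rowtime, SUM(amount) AS amount
FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end, window_time;

-- Second level: roll the 1-minute results up into 5-minute windows.
SELECT window_start, window_end, SUM(amount) AS amount
FROM TABLE(TUMBLE(TABLE agg_1m, DESCRIPTOR(rowtime), INTERVAL '5' MINUTE))
GROUP BY window_start, window_end;
```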
--
Best!
Xuyang
At 2024-03-16 06:21:52, "Nick Hecht" wro
Hi, Meng.
I think you can follow this jira[1] and ping the creator about the latest
progress.
[1] https://issues.apache.org/jira/browse/FLINK-34491
--
Best!
Xuyang
At 2024-03-13 04:02:09, "Meng, Ping via user" wrote:
Hi,
The latest Flink 1.18.1 with Java 17 support
] https://github.com/apache/flink-connector-elasticsearch/pull/62
--
Best!
Xuyang
At 2024-03-12 18:28:46, "Fidea Lidea" wrote:
Hi ,
I am trying to read data from elasticsearch & store in a stream.
Could you please share a few examples to read/get all data from Elasti
g. Can you take a look at this strange situation?
--
Best!
Xuyang
At 2024-03-10 16:49:16, "Daniel Peled" wrote:
Hello,
I am sorry I am addressing you personally.
I have tried sending the request in the user group and got no response
If you can't help me please let m
=97552739
[2] https://issues.apache.org/jira/projects/FLINK/issues
--
Best!
Xuyang
At 2024-03-08 03:12:19, "Jad Naous" wrote:
Hi Junrui,
Thank you for the pointer. I had read that page, and I can use the function
with the Java Table API ok, but I'm trying to use the Top2 acc
Hi, Sunny.
A watermark always comes from one subtask of this window operator's input(s),
and the window operator retains the watermarks of all input subtasks.
The `currentWatermark` in the window operator is the minimum of these
watermarks.
--
Best!
Xuyang
At 20
Hi.
Hmm, if I'm mistaken, please correct me. Using a SQL client might not be very
convenient for those who need to verify the
results of submissions, such as checking for exceptions related to submission
failures, and so on.
--
Best!
Xuyang
At 2024-03-07 17:32:07, "Rob
Hi, can you provide more details about this Flink batch job? For instance,
through a flame graph, the threads are found spending most of their time on
some certain tasks.
--
Best!
Xuyang
At 2024-03-07 08:40:32, "Charles Tan" wrote:
Hi all,
I have been looking
into a file and then
using the '-f' command in SQL Client to submit the file sounds a bit
roundabout. Could you just use the REST API to submit them directly?
--
Best!
Xuyang
At 2024-03-07 04:11:01, "Robin Moffatt via user" wrote:
I'm reading the deplo
ated
discussions in the dev mail again.
[1] https://issues.apache.org/jira/browse/FLINK-10031
[2] https://issues.apache.org/jira/browse/FLINK-20527
--
Best!
Xuyang
At 2024-03-06 01:55:03, "Sunny S" wrote:
Hi,
I am using Flink SQL to create a table something like this :
docs-master/docs/dev/table/tuning/#performance-tuning
[2]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/tuning/#split-distinct-aggregation
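For reference, the split distinct aggregation described in [2] is enabled with a configuration switch; a minimal sketch, assuming the option applies to your Flink version:
```sql
-- Enable two-phase (split) distinct aggregation in the optimizer.
SET 'table.optimizer.distinct-agg.split.enabled' = 'true';
```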
--
Best!
Xuyang
At 2024-02-22 15:44:26, "" wrote:
Hi all,
I am currently facing the problem of having
ocs-master/docs/dev/table/concepts/determinism/#32-non-deterministic-update-in-streaming
[3]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/explain/#explaindetails
[4] https://issues.apache.org/jira/projects/FLINK/summary
--
Best!
Xuyang
At 2024-02-08 19:53:42,
Hi, Yunhong.
Thanks for your volunteering :)
--
Best!
Xuyang
At 2024-02-06 09:26:55, "yh z" wrote:
Hi, Xuyang, I hope I can also participate in the development of the remaining
FLIP features. Please cc me if there are any further developments. Thank you!
Xuyang wrote on 2024
or the dev
mailing list.
--
Best!
Xuyang
At 2024-02-02 00:46:01, "Prabhjot Bharaj via user"
wrote:
Hi Feng,
Thanks for your prompt response.
If we were to solve this in Flink, my higher level viewpoint is:
1. First to implement Broadcast join in Flink Streaming SQ
run the
SinkUpsertMaterializerTest test class to observe and test yourself.
Regarding the upsert key, you can use EXPLAIN CHANGELOG_MODE ... to view them
in the plan.
If there are any issues with the above, please correct me.
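A minimal sketch of viewing the changelog mode in the plan; the query itself is an assumed placeholder:
```sql
-- Show the changelog mode of each operator for a placeholder aggregation.
EXPLAIN CHANGELOG_MODE
SELECT user_id, COUNT(*) AS cnt
FROM orders
GROUP BY user_id;
```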
--
Best!
Xuyang
At 2024-01-31 20:24:57, "
Hi, Robin and Hang.
This FLIP is not finished yet. Although the syntax `DESCRIBE CATALOG xxx` can
be parsed successfully, it cannot be processed further at a later stage.
I have added a jira[1] for it.
[1] https://issues.apache.org/jira/browse/FLINK-34254
--
Best!
Xuyang
!
Xuyang
At 2024-01-24 20:22:13, "Deepti Sharma S via user" wrote:
Hello Team,
Can you please let me know the lifecycle for Flink 1.x versions? Also, does any
version support Java 17?
Regards,
Deepti Sharma
before. You can see more
here[1].
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/state_backends/
--
Best!
Xuyang
At 2024-01-12 12:37:43, "Junrui Lee" wrote:
Hi Вова
In Flink, there is no built-in mechanism for caching SQL query results; ev
Hi, Gyula.
If you want Flink to fill the unspecified columns with NULL, you can try SQL
like the following:
```
INSERT INTO Sink(a) SELECT a from Source
```
--
Best!
Xuyang
At 2024-01-11 16:10:47, "Giannis Polyzos" wrote:
Hi Gyula,
to the best of my knowledge, this is no
Hi, Praveen Chandna.
I don't know much about this plan; you can ask on the dev mailing list. [1]
[1]https://flink.apache.org/what-is-flink/community/
--
Best!
Xuyang
At 2023-12-20 14:53:52, "Praveen Chandna via user" wrote:
Hello Xuyang
One more query, is there
Hi, Praveen Chandna.
Please correct me if I'm wrong. From the release notes of Flink 1.18 [1], you
can see that although 1.18 already supports Java 17, it is still in beta mode.
[1]
https://nightlies.apache.org/flink/flink-docs-master/zh/release-notes/flink-1.18/
--
Best!
X
Hi, Yad.
These SQL statements seem to be fine. Have you tried using the print connector
as a sink to test whether there is any problem?
If everything goes fine with the print sink, then the problem is in the Kafka
sink.
--
Best!
Xuyang
At 2023-12-15 18:48:45, "T, Yadhunath" wrote:
Hi, Yad.
Can you share the smallest set of SQL that can reproduce this problem?
BTW, by the last Flink operator, do you mean the sink with the Kafka connector?
--
Best!
Xuyang
At 2023-12-15 04:38:21, "Alex Cruise" wrote:
Can you share your precise join semantics?
I
/community/
--
Best!
Xuyang
At 2023-12-13 14:28:17, "Praveen Chandna via user"
wrote:
Hello
In the Flink version 1.18, the connector for OpenSearch sink is not available.
It’s mentioned “There is no connector (yet) available for Flink version 1.18.”
Hi, Elakiya.
Are you following the example here[1]? Could you attach a minimal, reproducible
SQL?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/insert/
--
Best!
Xuyang
At 2023-12-06 17:49:17, "elakiya udhayanan" wrote:
Hi Team,
I
ng/#split-distinct-aggregation
--
Best!
Xuyang
At 2023-12-04 23:24:34, "arjun s" wrote:
Hello team,
I'm currently working on a Flink use case where I need to calculate the sum of
occurrences for each "customer_id" within a 10-minute duration and send the
re
de to use the last avg in the map state if there are no elements, or use the
new avg and set it in the map state.
Hope this helps!
--
Best!
Xuyang
At 2023-12-04 21:34:06, "Eugenio Marotti" wrote:
>Hi everyone,
>
>I’m currently using Flink to calculate a moving av
Hi,
Can you attach the log with the exception from when the job failed?
--
Best!
Xuyang
At 2023-12-04 15:56:04, "nick toker" wrote:
Hi
restarting the job is OK and I do that, but I must cancel the job and submit a
new one, and I don't want the data from the state
forget to
Hi,
IIUC, yes.
--
Best!
Xuyang
At 2023-12-04 15:13:56, "arjun s" wrote:
Thank you for providing the details. Can it be confirmed that the HashMap
within the accumulator stores the map in RocksDB as a binary object and
undergoes deserialization/serialization during the ex
wOperator' or
'SlicingWindowOperator' to find more detail.
[1]
https://github.com/apache/flink/blob/026bd4be9bafce86ced42d2a07e8b8820f7e6d9d/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/binary/BinaryRowData.java#L380
--
Best!
Xuyang
At 2
in order for the new job graph to take effect, it is necessary to
restart the job.
--
Best!
Xuyang
At 2023-12-03 21:49:23, "nick toker" wrote:
Hi
when I add or remove an operator in the job graph, using a savepoint I must
cancel the job to be able to run the new graph
e
Hi, Prashant.
I think Yu Chen has given professional troubleshooting ideas. Another thing I
want to ask is whether you use a user-defined function to store some objects.
You can first dump the memory and get more details to check for memory leaks.
--
Best!
Xuyang
At 2023
> up the region and credentials unless I set the environment variables locally
I think you first need to make sure your local machine can connect to the AWS
environment.
Overall, I think `StreamExecutionEnvironment#createRemoteEnvironment` can meet
your requirements.
--
Best!
Xuyang
Hi, patricia.
Can you attach the full stack trace of the exception? It seems the thread
reading the source is stuck.
--
Best!
Xuyang
At 2023-11-23 16:18:21, "patricia lee" wrote:
Hi,
Flink 1.18.0
Kafka Connector 3.0.1-1.18
Kafka v 3.2.4
JDK 17
I get erro
.
Can you check if it is OK by reading the Kafka source and directly printing it
using the DataStream API?
--
Best!
Xuyang
At 2023-11-22 14:45:19, "Tauseef Janvekar" wrote:
Hi Xuyang,
Taking inspiration from your code, I modified my code and got the earlier error
resolved.
Hi, Dale.
Thanks for your professional explanation ;)
--
Best!
Xuyang
At 2023-11-22 00:39:47, "Dale Lane" wrote:
FYI in case it’s relevant for this discussion
> I'm not sure what `Avro JSON` means
Avro supports two encoding mechanisms – binary
.toString()));
}
})
.returns(C2.class);
return ret;
}
```
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/learn-flink/etl/#flatmap
--
Best!
Xuyang
At 2023-11-21 23:3
Hi, Tauseef.
This is an example of using a custom POJO with flatMap[1]. If possible, can you
post your code and note the Flink version?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/learn-flink/etl/#flatmap
--
Best!
Xuyang
At 2023-11-21 22:48:41, "Ta
2]
https://github.com/apache/flink-connector-opensearch/blob/ab36cebc12db3aa0fa9df8a770b1845a78afe5bf/flink-connector-opensearch/src/main/java/org/apache/flink/connector/opensearch/table/RowOpensearchEmitter.java#L110C32-L110C45
--
Best!
Xuyang
At 2023-11-21 13:32:35, "Pravee
Hi, Tauseef.
Currently the Flink community only supports a sink and a lookup source for ES,
but you can follow this JIRA[1], which tracks support for an ES source.
[1] https://issues.apache.org/jira/browse/FLINK-25568
--
Best!
Xuyang
At 2023-11-15 14:33:53, "Tauseef Janvekar" wr
maven2/org/apache/flink/flink-sql-connector-kafka/
--
Best!
Xuyang
At 2023-11-09 16:00:24, "patricia lee" wrote:
Hi,
I am upgrading my project to Flink 1.18 but it seems the Kafka connector for
1.18.0 is not available yet?
I couldn't pull the jar file flink-kafka-connector. But
Hi, Puneet.
Do you mean this doc[1]?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/queryable_state/
--
Best!
Xuyang
At 2023-11-07 01:36:37, "Puneet Kinra" wrote:
Hi All
We are using flink 1.10 version which w
.
--
Best!
Xuyang
At 2023-11-04 17:37:55, "Bo" <99...@qq.com> wrote:
Hi, Yu
Thanks for the suggestion.
Ideally the data needs to come from the sink being audited; adding another sink
serves part of the purpose, but if anything goes wrong in the original sink, I
pre
taken, Flink will not
store the state actively.
[1]https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/state/#state-time-to-live-ttl
--
Best!
Xuyang
At 2023-11-06 21:04:11, "arjun s" wrote:
Thanks for your response.
I have shared my scen
nk/flink-docs-master/docs/dev/table/data_stream_api/
--
Best!
Xuyang
At 2023-11-01 15:28:25, "elakiya udhayanan" wrote:
Hi Xuyang,
Thank you for your response. Since I have no access to create a ticket in the
ASF JIRA, I have requested access, and once I get
.
[1]
https://issues.apache.org/jira/projects/FLINK/issues/FLINK-33400?filter=allopenissues
--
Best!
Xuyang
At 2023-10-30 16:42:03, "elakiya udhayanan" wrote:
Hi team,
I have a Kafka topic named employee which uses confluent avro schema and will
emit the payload as be
Are you using Flink SQL? If you are using Flink SQL, the window is triggered
only when data arrives whose timestamp advances the watermark past the window
end. It is not possible to trigger the window without changing the
window-start and window-end columns.
--
Best!
Xuyang
At
/docs/dev/table/tuning/#local-global-aggregation
--
Best!
Xuyang
At 2023-10-27 16:36:15, "Kenan Kılıçtepe" wrote:
Hi,
Can someone tell me what GlobalWindowAggregate is?
It is always 100% busy in my job graph.
GlobalWindowAggregate(groupBy=[deviceId, fwVersion, modelName, m
.
--
Best!
Xuyang
At 2023-10-23 22:35:45, "Yaroslav Tkachenko" wrote:
Hi, sure, sharing it again:
SELECT a.funder, a.amounts_added, r.amounts_removed FROM table_a AS a JOIN
table_b AS r ON a.funder = r.funder
and the Optimized Execution Plan:
Calc(select=[funder, vid AS a_vid, vid
Hi, could you share the original SQL? If not, could you share the plan after
executing 'EXPLAIN ...'? There should be an 'Exchange' node on both inputs
of the 'Join' in both "== Optimized Physical Plan ==" and "== Optimized
Execution Plan =="
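For reference, the plan can be produced directly from the query quoted earlier in this thread; a minimal sketch reusing those table names:
```sql
-- Print the optimized plans for the join from the thread above.
EXPLAIN
SELECT a.funder, a.amounts_added, r.amounts_removed
FROM table_a AS a
JOIN table_b AS r ON a.funder = r.funder;
```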
Hi, currently I think there is no HA for the gateway. When the gateway crashes,
a job being submitted synchronously will be cancelled, while an asynchronously
submitted job will continue running. When the gateway restarts, the async job
can be found by the gateway again. BTW, work on HA is in continuous progress.
At 20
Hi, you're right, there's no 'UPDATE' keyword in Flink.
--
Best!
Xuyang
At 2022-09-21 22:40:03, pod...@gmx.com wrote:
Thank you - I'll try.
There is no 'UPDATE' clause in Flink SQL?
Sent: Monday, September 19, 2022 at 4:09 AM
From: "