It may well fail due to incompatibilities. Flink doesn't promise such
compatibility, but it MIGHT work.
Best,
tison.
wangl...@geekplus.com.cn wrote on Mon, Dec 30, 2019 at 3:17 PM:
>
> The flink cluster version is 1.8.2
> The application source code needs some feature only supported in 1.9.1.
> So it is compile
The flink cluster version is 1.8.2.
The application source code needs some features only supported in 1.9.1, so it
is compiled with the flink-1.9.1 dependency and built into a fat jar with all
the flink dependencies.
What will happen if I submit the jar built against the higher version to the
lower-version flink cluster?
Dear Flink Users,
I'm running the Yahoo streaming benchmarks (the original version) [1]
on Flink 1.8.3 and got 60K tuples per second. Because I got 282K
tuples per second with Flink 1.1.3, I would like to ask your opinion on
where I should look.
I have been using one node for a JobManager and 10
Hi Steve,
There are some discussion in [1], this has been considered, but it is not
supported in the current version.
From Fabian's words:
> I think timestamp fields of source-sink tables should be handled as
follows when emitting the table:
- proc-time: ignore
- from-field: simply write
Hi RKandoji,
FYI: the blink planner supports subplan reuse [1], available since 1.9.
      Join                          Join
     /    \                        /    \
 Filter1  Filter2     =>       Filter1  Filter2
    |        |                     \      /
Project1  Project2                Project1
    |        |                        |
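A minimal sketch of switching to the blink planner, where this reuse applies
(assuming Flink 1.9; the config key comes from OptimizerConfigOptions and is
already on by default, set here only to show the switch):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class BlinkPlannerReuse {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
        // Subplan reuse is enabled by default in the blink planner;
        // setting it explicitly here just makes the switch visible.
        tEnv.getConfig().getConfiguration()
                .setBoolean("table.optimizer.reuse-sub-plan-enabled", true);
    }
}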
Hi RKandoji~
Could you provide more info about your poc environment?
Stream or batch? Flink planner or blink planner?
AFAIK, the blink planner has done some optimization to handle such duplicate
tasks for the same query. You can give the blink planner a try:
https://ci.apache.org/projects/flink/flin
Hi Kaibo,
> Validating SQL syntax should not need to depend on the connector jar
At present, SQL functions also strongly depend on jar support, but the
overall approach is still under discussion, and there is no clear plan at
present.
But you are right, it is really important for platform users.
Another way i
Hi Mohammad
I expected to find a description of a mechanism for detecting and ignoring
duplicate events in the documentation, but instead found the two-phase commit
protocol, which addresses something quite different.
Flink does not detect and ignore duplicate events when processing them, but
rather ensures chec
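A minimal sketch of enabling checkpointing with exactly-once mode, which is
what this guarantee rests on (the 10-second interval is just an example):

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnableExactlyOnce {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint every 10s; on failure Flink restores the last completed
        // checkpoint and replays the sources from there, so state reflects
        // each event exactly once even though events may be reprocessed.
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);
    }
}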
Examining the *org.apache.flink.table.descriptors.Kafka* class in Flink
v1.9, it does not seem to have the ability to set whether the Kafka producer
should attach a timestamp to the message. The *FlinkKafkaProducer* class
has a setter for controlling this producer attribute.
Can/should this attribute
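For reference, on the DataStream side the setter I mean can be used roughly
like this (a sketch; the topic name and broker address are made up):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class ProducerTimestampSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // made up
        FlinkKafkaProducer<String> producer =
                new FlinkKafkaProducer<>("my-topic", new SimpleStringSchema(), props);
        // Attach the record's event timestamp to the produced Kafka message;
        // this is the setter I could not find on the table descriptor.
        producer.setWriteTimestampToKafka(true);
    }
}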
Hi Dan,
You can learn more about Flink’s metrics system at [1].
You would be able to either set up a reporter that would export the metrics
to an external system, or query the metrics via the REST API, or simply use
Flink’s web UI to obtain them.
If I understand the second part of your question cor
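As an illustration of [1], registering a custom metric inside a user function
might look like this (a sketch; the class and metric names are made up):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMapper extends RichMapFunction<String, String> {
    private transient Counter counter;

    @Override
    public void open(Configuration parameters) {
        // Metrics registered here show up in reporters, the REST API,
        // and the web UI under this operator's metric group.
        counter = getRuntimeContext().getMetricGroup().counter("myCounter");
    }

    @Override
    public String map(String value) {
        counter.inc();
        return value;
    }
}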
Hi Team,
I'm doing a POC with Flink to understand if it's a good fit for my use
case.
As part of the process, I need to filter out duplicate items and created the
query below to get only the latest records based on timestamp. For instance, I
have a "Users" table which may contain multiple messages for the
Dear Flink team,
I have some ambiguity when it comes to Flink's exactly-once guarantee.
1. Based on what I understand, when a failure occurs, some events will be
replayed, which causes them to appear twice in the computations. I cannot
see how the two-phase commit protocol can avoid this prob
Hi all
I'm trying to get hold of some metrics from the functions that I have
created but can't find a way to obtain them. It's the metrics mentioned here
that I'm interested in:
https://statefun.io/deployment_operations/metrics.html
Any suggestions are appreciated.
I also have a question regarding "be
Dear community,
Happy to share a short community update this week. Due to the holiday, the
dev@ mailing list is pretty quiet these days.
Flink Development
==
* [sql] Jark proposes to correct the terminology of "Time-windowed Join" to
"Interval Join" in Table API & SQL before 1.10 is
Hi Benchao,
Thanks for your reply. The keyword `antlr` appears in three modules on the
master branch:
1. flink-python
2. flink-sql-client
3. flink-hive-connector
I also found that only the `exclusion` tag is used for `antlr`, so the aim is
to exclude it from the 'hive-exec' dependency, right? That make