If you have specified LOCAL_DIRECTORIES [1], then the RocksDB LOG file will
go into that directory.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/config.html#state-backend-rocksdb-localdir
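For illustration, the same directory can also be set programmatically
(a minimal sketch, assuming Flink 1.9's RocksDBStateBackend API; both paths
below are hypothetical):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class RocksDbLocalDirExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            // Checkpoints go to the (hypothetical) HDFS path below.
            RocksDBStateBackend backend =
                new RocksDBStateBackend("hdfs:///flink/checkpoints");
            // Same effect as state.backend.rocksdb.localdir in flink-conf.yaml:
            // RocksDB keeps its working files, including the LOG file, here.
            backend.setDbStoragePath("/data/flink/rocksdb");
            env.setStateBackend(backend);
        }
    }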
Best,
Congxian
On Mon, Dec 30, 2019 at 7:03 PM, Yun Tang wrote:
> Hi Alex
>
> First of all, RocksDB
BTW, you could also deduplicate the user table more efficiently by using the
Top-N feature [1].
Best,
Kurt
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/sql.html#top-n
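As an illustration, the Top-N pattern for keeping only the latest row per
key looks roughly like this (a sketch, assuming a StreamTableEnvironment
tEnv and a registered "users" table with a proctime attribute; the table and
column names are made up for illustration):

    Table deduped = tEnv.sqlQuery(
        "SELECT user_id, user_name " +
        "FROM ( " +
        "  SELECT *, " +
        "    ROW_NUMBER() OVER (" +
        "      PARTITION BY user_id ORDER BY proctime DESC) AS row_num " +
        "  FROM users) " +
        "WHERE row_num = 1");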
On Tue, Dec 31, 2019 at 9:24 AM Jingsong Li wrote:
> Hi RKandoji,
>
> In theory, you
I created an issue to track this feature:
https://issues.apache.org/jira/browse/FLINK-15440
Best,
Kurt
On Tue, Dec 31, 2019 at 8:00 AM Fanbin Bu wrote:
> Kurt,
>
> Is there any update on this, or a roadmap for supporting savepoints with
> Flink SQL?
>
On Sun, Nov 3, 2019 at 11:25 PM Kurt Young wrote:
Hi RKandoji,
In theory, you don't need to do anything.
First, the optimizer deduplicates nodes while optimizing.
Second, after SQL optimization, if the optimized plan still has duplicate
nodes, the planner will automatically reuse them.
There are config options to control whether we should reuse these nodes.
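For reference, these appear to be the relevant options in the 1.9 Blink
planner (a hedged sketch, assuming a TableEnvironment named tEnv; both
options are believed to default to true, so reuse normally needs no action):

    TableConfig config = tEnv.getConfig();
    // Reuse common sub-plans after optimization.
    config.getConfiguration().setBoolean("table.optimizer.reuse-sub-plan-enabled", true);
    // Additionally reuse duplicate table sources.
    config.getConfiguration().setBoolean("table.optimizer.reuse-source-enabled", true);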
Kurt,
Is there any update on this, or a roadmap for supporting savepoints with
Flink SQL?
On Sun, Nov 3, 2019 at 11:25 PM Kurt Young wrote:
> It's not possible for SQL and Table API jobs to work with savepoints yet,
> but I
> think this is a popular requirement and we should definitely discuss the
Thanks Terry and Jingsong,
Currently I'm on version 1.8 using the Flink planner for stream processing;
I'll switch to version 1.9 to try out the Blink planner (a rough sketch of
the switch follows below).
Could you please point me to any examples (Java preferred) using
SubplanReuser?
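For what it's worth, a minimal sketch of the planner switch, assuming the
1.9 streaming Table API:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
    EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .useBlinkPlanner()
        .inStreamingMode()
        .build();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);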
Thanks,
RK
On Sun, Dec 29, 2019 at 11:32 PM Jingsong Li wrote:
Hi Igal, thanks for your quick response, and yes, you got my second
question right.
I'm building a small PoC around fraudulent trades; in short, I've
fine-grained the functions to the level of participantId + "::" + instrumentId
(i.e. "BankA::AMAZON").
In this flow of stock exchange messages, th
Hi Lei
It's better to use the SAME version to submit the job from the client side.
Even if the major version of Flink is the same, compatibility has not been
declared to be supported. There is a known issue caused by some classes missing
a 'serialVersionUID'. [1]
[1] https://issues.apache.org/jira/browse/FL
Hi Alex
First of all, RocksDB is not created by the Flink checkpoint mechanism. RocksDB
is launched once you have configured it and use keyed state, no matter whether
you have ever enabled checkpointing.
If you want to check the configuration and data in RocksDB, please log in to
the task manager node. The
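To illustrate the point above, a minimal sketch (assuming the snippet runs
inside a main method that throws Exception; all names and paths are
hypothetical): checkpointing is never enabled below, yet RocksDB still
launches because the job uses keyed state:

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStateBackend(new RocksDBStateBackend("file:///tmp/flink-checkpoints"));
    // Note: no env.enableCheckpointing(...) call, on purpose.
    env.fromElements(1, 2, 3)
       .keyBy(v -> v)
       .map(new RichMapFunction<Integer, Integer>() {
           private transient ValueState<Integer> last;
           @Override
           public void open(Configuration parameters) {
               last = getRuntimeContext().getState(
                   new ValueStateDescriptor<>("last", Integer.class));
           }
           @Override
           public Integer map(Integer value) throws Exception {
               last.update(value); // lives in RocksDB on the task manager's local disk
               return value;
           }
       })
       .print();
    env.execute("rocksdb-without-checkpoints");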
Hi all.
How can I connect to the RocksDB instance created by a Flink checkpoint, in
order to check the RocksDB configuration and the data in RocksDB? Thanks very
much.
AlexFu
> Regarding event-time processing and watermarking, I have understood that if
> an event is received late, after the allowed lateness time, it will be
> dropped, even though I think this is an antithesis of exactly-once semantics.
>
> Yes, allowed lateness is a compromise between exactly-once semantics
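To make the compromise concrete, a minimal sketch (assuming an event-time
job where "events" is a DataStream<Tuple2<String, Integer>> with timestamps
and watermarks already assigned; all names are hypothetical):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.util.OutputTag;

    final OutputTag<Tuple2<String, Integer>> lateTag =
        new OutputTag<Tuple2<String, Integer>>("late-events") {};
    SingleOutputStreamOperator<Tuple2<String, Integer>> result = events
        .keyBy(t -> t.f0)
        .window(TumblingEventTimeWindows.of(Time.minutes(1)))
        .allowedLateness(Time.minutes(5)) // late events within 5 min still update results
        .sideOutputLateData(lateTag)      // later events go here instead of being dropped
        .sum(1);
    DataStream<Tuple2<String, Integer>> lateEvents = result.getSideOutput(lateTag);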
Hi Shinhyung,
Can you compare the performance of the different Flink versions in the same
environment (or at least the same configuration of the node and framework)?
I see there are some different configurations of both the clusters and the
frameworks. It would be better to compare them in the same environment.