Hi, Charles
It is not a bug; it happens because the primary keys provided by the sink do not
exactly match the input changeLogUpsertKeys, so the planner falls back to
before-and-after mode. You can see [1] for more detail.
[1]
https://github.com/apache/flink/blob/d8630cb5db0608a630de95df0dd1d0c9f0b56aa2/flink-table/flin
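For illustration, here is a minimal sketch of that fallback rule in plain Java. The names (`chooseChangelogMode`, `sinkPrimaryKeys`) are hypothetical and this is not Flink's actual planner code; it only models the decision described above: upsert mode is kept when the sink's primary key exactly matches one of the query's upsert key sets, otherwise the plan emits full before/after rows.

```java
import java.util.List;
import java.util.Set;

public class UpsertModeCheck {
    /**
     * Sketch of the rule described above: upsert mode is kept only when the
     * sink's primary keys exactly equal one of the query's changelog upsert
     * key sets; otherwise fall back to before-and-after (retract-style) mode.
     */
    public static String chooseChangelogMode(Set<String> sinkPrimaryKeys,
                                             List<Set<String>> changeLogUpsertKeys) {
        for (Set<String> upsertKeys : changeLogUpsertKeys) {
            if (upsertKeys.equals(sinkPrimaryKeys)) {
                return "UPSERT";
            }
        }
        // No exact match: emit both UPDATE_BEFORE and UPDATE_AFTER rows.
        return "BEFORE_AND_AFTER";
    }
}
```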
Hi everyone,
I noticed some unexpected behavior with Upsert changelogs in Flink 1.17.1
and I wanted to post here to see if anyone has encountered a similar issue.
I’m running a Flink application which performs SQL queries using the Flink
SQL and Table APIs, then I convert the resulting table to a
essing.
Best,
Zhanghao Chen
From: Valentina Predtechenskaya
Sent: Wednesday, August 3, 2022 1:32
To: user@flink.apache.org
Subject: (Possible) bug in flink-kafka-connector (metrics rewriting)
Hello !
I would like to report a bug with metrics registration on KafkaProducer
initialization.
First we found the problem on our Flink cluster: the metric
KafkaProducer.outgoing-byte-rate was periodically missing (equal to zero or
near zero) on several subtasks, while at the same time other sub
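A minimal model of the registration pattern that can produce this symptom. All names here are hypothetical and this is not the flink-kafka-connector's internals; it only sketches the idea: if two producers register a gauge under the same metric name, the later registration silently replaces the earlier one, so the earlier subtask's rate reads near zero.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class MetricRegistrySketch {
    // Hypothetical registry keyed only by metric name, with no per-producer scope.
    private final Map<String, Supplier<Double>> gauges = new HashMap<>();

    public void register(String name, Supplier<Double> gauge) {
        // A second registration under the same name silently replaces the
        // first, so the earlier producer's rate appears stuck near zero.
        gauges.put(name, gauge);
    }

    public double read(String name) {
        return gauges.getOrDefault(name, () -> 0.0).get();
    }
}
```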
Thank you for reporting! That is definitely a bug, and I have opened a
ticket to fix it, which you can track here:
https://issues.apache.org/jira/browse/FLINK-26374
Seth
On Thu, Feb 24, 2022 at 4:18 PM Jonathan Weaver wrote:
> Using the latest SNAPSHOT BUILD.
>
> If I have a column definition as
>
Using the latest SNAPSHOT BUILD.
If I have a column definition as
.column(
    "events",
    DataTypes.ARRAY(
        DataTypes.ROW(
            DataTypes.FIELD("status", DataTypes.STRING().notNull()),
            DataTypes.FIELD("times
OK,
I think it was a premature alert :)
1. We have a framework guarantee that the start method will be called only once
per SplitEnumerator instance, hence context.callAsync will be called only
once.
2. callAsync uses ScheduledExecutorService::scheduleAtFixedRate under the
hood, so if any execution of thi
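The reasoning in point 2 can be checked with a self-contained JDK sketch (the helper `runFor` is hypothetical; no Flink dependency): scheduleAtFixedRate fires one task periodically, and keeping the only shared state in an AtomicInteger follows the no-shared-mutable-state advice from the SplitEnumeratorContext docs.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedRateSketch {
    /**
     * Runs a counter task at a fixed rate for roughly {@code durationMillis}
     * and returns how many times it fired. One periodic task, with the only
     * shared state held in an atomic counter.
     */
    public static int runFor(long durationMillis, long periodMillis) {
        AtomicInteger invocations = new AtomicInteger();
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        // First run is immediate (initial delay 0), then every periodMillis.
        pool.scheduleAtFixedRate(invocations::incrementAndGet, 0, periodMillis,
                TimeUnit.MILLISECONDS);
        try {
            Thread.sleep(durationMillis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            pool.shutdownNow();
        }
        return invocations.get();
    }
}
```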
Hi,
in documentation for SplitEnumeratorContext::callAsync method we read that:
"(...) When this method is invoked multiple times, The Callables may be
executed in a thread pool concurrently.
It is important to make sure that the callable does not modify any shared
state, especially the states th
Hi David
thanks for the confirmation, good to know that.
Best,
Congxian
David Magalhães wrote on Tue, Jul 21, 2020 at 11:42 PM:
> Hi Congxian, the leftover files were on the local disk of the TaskManager.
> But looking better into the issue, I think the issue was the "logs". The
> sink, in this case, was
Hi Congxian, the leftover files were on the local disk of the TaskManager.
But looking better into the issue, I think the issue was the "logs". The
sink, in this case, was writing one line into the logger (I was writing 8
GB in total), and that makes more sense. So nothing wrong with the
Flink/Save
Hi David
Sorry for the late reply, seems I missed your previous email.
I'm not sure I fully understand here: are the leftover files on the s3
filesystem or on the local disk of the TaskManager? Currently, the savepoint
data is written directly to the output stream of the underlying file (here an
s3 file), yo
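That "write directly to the output stream" behaviour can be pictured with a small, self-contained sketch (the helper `writeDirect` is hypothetical, not Flink's savepoint code): state bytes go straight to the target stream in chunks, so nothing is staged in a local temp file.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

public class DirectWriteSketch {
    /**
     * Streams state bytes straight to the target output stream in chunks:
     * no local temp file is created, so a large savepoint never has to fit
     * on the TaskManager's disk.
     */
    public static long writeDirect(byte[] state, OutputStream target, int chunkSize) {
        long written = 0;
        try {
            for (int off = 0; off < state.length; off += chunkSize) {
                int len = Math.min(chunkSize, state.length - off);
                target.write(state, off, len);
                written += len;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return written;
    }
}
```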
Hi Till, I'm using the s3:// schema, but I'm not sure whether the default used
was s3a or s3p.
then the state backend should try to directly write to the target file
> system
That was the behaviour I saw the second time I ran this with more
slots. Does the savepoint write directly to S3 via stre
Hi David,
which S3 file system implementation are you using? If I'm not mistaken,
then the state backend should try to directly write to the target file
system. If this should result in temporary files on your TM, then this
might be a problem of the file system implementation. Having access to the
Hi Congxian, sorry for the late reply.
I'm using the filesystem with an S3 path as the default state backend in
flink-conf.yml (state.backend: filesystem).
The Flink version I'm using is 1.10.1.
By "The task manager did not clean up the state", I mean that the
TaskManager was writing on disk the
Hi David
As you say the savepoint uses the local disk, I assume that you use
RocksDBStateBackend.
What Flink version are you using now?
What do you mean by "The task manager did not clean up the state"? Does that
mean the local disk space was not cleaned up? Did the task encounter a failover
in this p
Hi, yesterday I was creating a savepoint (to S3, around 8 GB of state)
using 2 TaskManagers (8 GB each), and it failed because one of the task
managers filled up the disk (it probably didn't have enough RAM to save the
state into S3 directly; I don't know what the disk space was) and reached
100% usage spac
Hey,
thanks a lot for filing a ticket! I put a link into SO.
It might take a few days till there's a response on the ticket.
On Tue, Apr 28, 2020 at 10:42 PM Marie May wrote:
> Thanks for responding Robert. I reported the issue here:
>
> issues.apache.org/jira/browse/FLINK-17444
>
> I do not ha
Hello, I am running into the same issue this person posted here:
https://stackoverflow.com/questions/61246683/flink-streamingfilesink-on-azure-blob-storage
I see that no one has answered, so I thought I could report it as a bug, but
the site said to mail here first if unsure whether it is a bug or not.