> So basically the spec is a config shorthand; there is no reason to
> override it, as you won't get a different behaviour at the end of the day.
>
> Gyula
>
> On Wed, Jun 14, 2023 at 11:55 AM Robin Cassan via user <
> user@flink.apache.org> wrote:
Hello all!
I am using the flink kubernetes operator and I would like to set the value
for `taskmanager.memory.process.size`. I set the desired value in the
FlinkDeployment resource spec (here, I want 55gb); however, it looks like
the value that is effectively passed to the taskmanager is the same
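Per the reply above, the spec value is just a shorthand for the
`taskmanager.memory.process.size` option. A minimal Java sketch of setting
that same option through Flink's Configuration API (the 55gb figure is only
the value from the question; in an operator deployment the key would
normally go under the spec's flinkConfiguration map instead):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.configuration.TaskManagerOptions;

public class ProcessMemorySketch {
    public static void main(String[] args) {
        // taskmanager.memory.process.size, expressed through the typed option.
        Configuration conf = new Configuration();
        conf.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("55gb"));

        // Print what Flink will actually use for total process memory.
        System.out.println(conf.get(TaskManagerOptions.TOTAL_PROCESS_MEMORY));
    }
}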
> ...help to make incremental checkpoint size small and
> stable, which could make the CPU more stable.
>
> [1] https://issues.apache.org/jira/browse/FLINK-28699
> [2] https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/datastream/fault-tolerance/checkpointing/#state-backend-incremental
> [3] https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/config
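As a companion to [2], a minimal Java sketch of enabling incremental
RocksDB checkpoints (this assumes the EmbeddedRocksDBStateBackend; the
checkpoint interval and storage path are placeholders):

import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // true = incremental: only RocksDB files changed since the last
        // checkpoint are uploaded, instead of the full state every time.
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

        // Checkpoint every 15 minutes; placeholder storage path.
        env.enableCheckpointing(15 * 60 * 1000L);
        env.getCheckpointConfig().setCheckpointStorage("s3://my-bucket/checkpoints");

        // ... define sources/operators/sinks and call env.execute() ...
    }
}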
Hello all!
We are trying to bring our flink job closer to real-time processing and
currently our main issue is latency that happens during checkpoints. Our
job uses RocksDB with periodic checkpoints, which are a few hundred GBs
every 15 minutes. We are trying to reduce the checkpointing duration
> In the long term, I think we probably need to separate the compaction process
> from the internal db and control/schedule the compaction process ourselves
> (compaction takes a good amount of CPU and reduces TPS).
>
> Best.
> Yuan
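Until something like that exists, a few compaction-related knobs are
already exposed through Flink configuration. A hedged sketch (the
state.backend.rocksdb.* keys below should be checked against your Flink
version's config reference before relying on them):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbCompactionTuningSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // More background threads for flushes and compactions, so compaction
        // competes a bit less with record processing.
        conf.setString("state.backend.rocksdb.thread.num", "4");

        // Compaction style is configurable as well (LEVEL is the default).
        conf.setString("state.backend.rocksdb.compaction.style", "LEVEL");

        // The same keys can simply go into flink-conf.yaml instead.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
    }
}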
Hello all, hope you're well :)
We are attempting to build a Flink job with minimal and stable latency (as
much as possible) that consumes data from Kafka. Currently our main
limitation happens when our job checkpoints the RocksDB state: backpressure
is applied on the stream, causing latency.
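One general Flink option aimed at exactly this symptom (checkpoint barriers
stuck behind backpressured data) is unaligned checkpoints; the sketch below
shows the knob itself, not necessarily what was recommended further down
this thread:

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnalignedCheckpointSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // placeholder interval

        CheckpointConfig cc = env.getCheckpointConfig();
        // Let barriers overtake in-flight buffers so checkpoints can complete
        // even while the job is backpressured (at the cost of storing those
        // buffers in the checkpoint).
        cc.enableUnalignedCheckpoints();
        // Leave some breathing room between consecutive checkpoints.
        cc.setMinPauseBetweenCheckpoints(30_000L);
    }
}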
Thanks a lot for your answers, this is reassuring!
Cheers
On Wed, Sep 7, 2022 at 13:12, Chesnay Schepler wrote:
> Just to squash concerns, we will make sure this license change will not
> affect Flink users in any way.
>
> On 07/09/2022 11:14, Robin Cassan via user wrote:
Hi all!
It seems Akka has announced a licensing change:
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka
If I understand correctly, this could end up increasing costs a lot for
companies using Flink in production. Do you know if the Flink developers
have any initial reaction?
Thanks a lot Alexander and Tzu-Li for your answers, this helps a lot!!
Cheers,
Robin
On Fri, Jul 8, 2022 at 17:40, Tzu-Li (Gordon) Tai wrote:
> Hi Robin,
>
> Apart from what Alexander suggested, I think you could also try the
> following first:
> Let the job use a "new" Kafka source
Hello all!
We have a need where, for specific recovery cases, we would need to
manually reset our Flink kafka consumer offset to a given date but have the
Flink job restore its state. As I understand, restoring from a checkpoint
necessarily sets the group's offset to the one that was in the checkpoint.
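For anyone looking for the mechanics behind the "new" Kafka source
suggestion above, a hedged Java sketch of a KafkaSource that starts from a
timestamp instead of committed or checkpointed offsets (servers, topic,
group id, uid and the timestamp are placeholders; whether this fits a given
recovery depends on how the rest of the job's state is restored):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaTimestampResetSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        long resetTimestampMs = 1656633600000L; // the "given date", in epoch millis

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("my-topic")
                .setGroupId("my-group")
                // Start from records with timestamps >= the chosen date instead
                // of whatever offsets a restored checkpoint would carry.
                .setStartingOffsets(OffsetsInitializer.timestamp(resetTimestampMs))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> stream = env
                .fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
                // A fresh uid means the source's own offset state is not restored;
                // restoring the rest of the state then typically needs
                // --allowNonRestoredState on the restore command.
                .uid("kafka-source-after-reset");

        stream.print();
        env.execute("kafka-timestamp-reset-sketch");
    }
}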