working as expected now.
Thanks for the help!
Chris
From: Chesnay Schepler
Date: Wednesday, May 12, 2021 at 5:24 AM
To: "Slotterback, Chris" ,
"user@flink.apache.org"
Subject: Re: [EXTERNAL] Re: sideOutputLateData not propagating late reports
once window expires
Ah, sorry for t
Hi Chesnay,
That doesn’t compile, as WindowedStream doesn’t have the operator
getSideOutput, only SingleOutputStreamOperator has that operation.
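For reference, a minimal sketch of where it does compile (names and types here are placeholders, not the actual code): the side output has to be read from the SingleOutputStreamOperator returned by the window function, e.g.

SingleOutputStreamOperator<Result> windowed = keyedReports
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .sideOutputLateData(lateTag)
    .process(new MyWindowFunction());
// getSideOutput is only available here, on the result of the window function
DataStream<Report> lateReports = windowed.getSideOutput(lateTag);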
Chris
From: Chesnay Schepler
Date: Tuesday, May 11, 2021 at 6:09 AM
To: "Slotterback, Chris" ,
"user@flink.apache.org"
Subject: Re: sideOutputLateData not propagating late reports once window expires
Hey Flink Users,
I am having some issues with getting sideOutputLateData to properly function
with late event time reports. I have the following code that, per my
understanding, should be allowing reports that fall after the window has
triggered and beyond allowed lateness to pass through to the side output.
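For reference, the late tag handed to sideOutputLateData is normally declared as an anonymous subclass so Flink retains the element type information (a sketch; Report is a placeholder type):

final OutputTag<Report> lateTag = new OutputTag<Report>("late-reports") {};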
realized multi-d map,
kryo stopped choking.
Chris
From: "Slotterback, Chris"
Date: Sunday, August 23, 2020 at 2:17 AM
To: user
Subject: Re: Not able to force Avro serialization
And here is the deserde block where the Schema is used to generate a
GenericRecord:
@Override
public Map d
, null));
// copy every field of the GenericRecord into a plain map, keyed by field name
Map<String, Object> map = new HashMap<>();
record.getSchema().getFields().forEach(field ->
    map.put(field.name(), record.get(field.name())));
return map;
}
Chris
From: "Slotterback, Chris"
Date: Sunday, August 23, 2020 at 2:07 AM
To: user
Subject: Not able to force Avro serialization
Hey guys,
I have been trying to get avro deserialization to work, but I’ve run into the
issue where flink (1.10) is trying to serialize the avro classes with kryo:
com.esotericsoftware.kryo.KryoException: java.lang.UnsupportedOperationException
Serialization trace:
reserved (org.apache.avro.Sche
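For anyone hitting the same trace, a sketch of two ways to keep Avro types away from Kryo (not necessarily the fix that resolved this thread; schema and the deserializer name are placeholders):

// Force Avro instead of the POJO serializer for POJO types, or fail fast
// whenever a type would otherwise fall back to Kryo:
env.getConfig().enableForceAvro();
// env.getConfig().disableGenericTypes();

// For a DataStream of GenericRecord, give Flink an explicit Avro type info
// (GenericRecordAvroTypeInfo is in flink-avro); schema is the record's Avro Schema:
DataStream<GenericRecord> records = source
    .map(new MyAvroDeserializer())
    .returns(new GenericRecordAvroTypeInfo(schema));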
Hi Ori,
Another more temporary brute-force option, while not officially flink, could be
building a modified version of the metrics plugin into flink where you manually
manipulate the prefixes yourself. It’s actually pretty easy to build the jar,
and to test it you drop the jar into the plugin p
As the answer on SO suggests, Prometheus comes with lots of functionality to do
what you’re requesting using just a simple count metric:
https://prometheus.io/docs/prometheus/latest/querying/functions/
If you want to implement the function on your own inside flink, you can make
your own metrics
and
write time.
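If it helps, a custom metric inside an operator is only a few lines (a sketch; the Event type and metric name are made up):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMap extends RichMapFunction<Event, Event> {
    private transient Counter processed;

    @Override
    public void open(Configuration parameters) {
        // registered under the operator's metric group, so it is exported by
        // whatever reporter (e.g. Prometheus) is configured
        processed = getRuntimeContext().getMetricGroup().counter("processedRecords");
    }

    @Override
    public Event map(Event value) {
        processed.inc();
        return value;
    }
}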
From: Congxian Qiu
Date: Friday, June 5, 2020 at 10:42 PM
To: Arvid Heise
Cc: "Slotterback, Chris" ,
"user@flink.apache.org"
Subject: Re: [EXTERNAL] Re: Inconsistent checkpoint durations vs state size
Hi Chris
From the given exception, seems there is so
GC?
Because the default GC was too "lazy". ;-)
Best,
Aljoscha
On 21.05.20 18:09, Slotterback, Chris wrote:
> For those who are interested or googling the mail archives in 8 months, the
> issue was garbage collection related.
>
> The default 1.8 jvm garbage collector (
stable.
On 5/20/20, 9:05 AM, "Slotterback, Chris"
wrote:
What I've noticed is that heap memory ends up growing linearly with time
indefinitely (past 24 hours) until it hits the roof of the allocated heap for
the task manager, which leads me to believe I am leaking somewhere.
ek" wrote:
On 15.05.20 15:17, Slotterback, Chris wrote:
> My understanding is that while all these windows build their memory
> state, I can expect heap memory to grow for the 24 hour length of the
> SlidingEventTimeWindow, and then start to flatten as the t-24hr window frames
> expire a
Hey Flink users,
I wanted to see if I could get some insight on what the heap memory profile of
my stream app should look like vs my expectation. My layout consists of a
sequence of FlatMaps + Maps, feeding a pair of 5 minute
TumblingEventTimeWindows, intervalJoined, into a 24 hour (per 5 minute) SlidingEventTimeWindow.
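For reference, the layout described above corresponds roughly to the following (a sketch only; types, function names, and the join bounds are placeholders, and the 24 hour window is assumed to be the SlidingEventTimeWindow discussed elsewhere in this thread):

DataStream<AggA> a = sourceA
    .flatMap(new ParseA()).map(new EnrichA())
    .keyBy(r -> r.getKey())
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .aggregate(new FiveMinuteAggA());

DataStream<AggB> b = sourceB
    .flatMap(new ParseB()).map(new EnrichB())
    .keyBy(r -> r.getKey())
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .aggregate(new FiveMinuteAggB());

DataStream<Joined> joined = a.keyBy(r -> r.getKey())
    .intervalJoin(b.keyBy(r -> r.getKey()))
    .between(Time.minutes(-5), Time.minutes(5))
    .process(new JoinAB());

joined.keyBy(r -> r.getKey())
    .window(SlidingEventTimeWindows.of(Time.hours(24), Time.minutes(5)))
    .process(new DailyWindowFunction());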
Hey Flink users,
Currently using Flink 1.7.2 with a job using FlinkKafkaProducer with its write
semantic set to Semantic.EXACTLY_ONCE. When there is a job failure and restart
(in our case from checkpoint timeout), it begins a failure loop that requires a
cancellation and resubmission to fix. Th
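For context, the sink in question is set up roughly like this (a sketch; the topic, schema class, and timeout value are placeholders):

Properties props = new Properties();
props.setProperty("bootstrap.servers", "broker:9092");
// For EXACTLY_ONCE the Kafka transaction timeout has to cover recovery time;
// the value below is only illustrative
props.setProperty("transaction.timeout.ms", "900000");

FlinkKafkaProducer<Event> producer = new FlinkKafkaProducer<>(
    "output-topic",
    new EventKeyedSerializationSchema(),   // a KeyedSerializationSchema<Event>
    props,
    FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

stream.addSink(producer);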
Hi Timothy,
I recently faced a similar issue that spawned a bug discussion from the devs:
https://issues.apache.org/jira/browse/FLINK-11654
As far as I can tell your understanding is correct, we also renamed the UID
using the jobname to force uniqueness across identical jobs writing to the same cluster.
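In code that was roughly (a sketch; jobName and the producer are placeholders):

stream
    .addSink(kafkaProducer)
    // a job-specific uid/name was the workaround described above: it keeps
    // the generated transactional IDs unique across otherwise identical jobs
    .uid(jobName + "-kafka-sink")
    .name(jobName + "-kafka-sink");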
Hey all,
I am running into an issue where if I run 2 flink jobs (same jar, different
configuration), that produce to different kafka topics on the same broker,
using the 1.7 FlinkKafkaProducer set with EXACTLY_ONCE semantics, both jobs go
into a checkpoint exception loop every 15 seconds or so:
We are running a Flink job that uses FlinkKafkaProducer09 as a sink with
consumer checkpointing enabled. When our job runs into communication issues
with our kafka cluster and throws an exception after the configured retries,
our job restarts but we want to ensure at least once processing so we
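If it helps, the usual at-least-once setup for that producer looks like this (a sketch; the topic, schema, and properties are placeholders):

FlinkKafkaProducer09<String> producer =
    new FlinkKafkaProducer09<>("output-topic", new SimpleStringSchema(), props);
// flush in-flight records before a checkpoint is allowed to complete
producer.setFlushOnCheckpoint(true);
// fail the job on send errors instead of only logging them
producer.setLogFailuresOnly(false);
stream.addSink(producer);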