> You could do the same? Always emit artificial events and filter
> them out in your windowing code? The window should still fire since it
> will always have events, even if you don't use them.
> On Mon, May 9, 2022 at 8:55 AM Shilpa Shankar
> wrote:
Hello,
We are building a Flink use case where we consume from a Kafka topic,
perform aggregations, and generate alerts based on average, max, and min
thresholds. We also need to notify users when there are 0 events in a
tumbling event-time window. We are having trouble coming up with …
Is there a way we can update the Flink UI to display UTC time instead of
local time? We are using Flink 1.13.1 and it is running in a Kubernetes
environment.
Thanks,
Shilpa
On Thu, Oct 14, 2021 at 9:44 AM Shilpa Shankar
wrote:
> Hi Ingo,
>
> I am using Google Chrome and there are no errors in the console.
>
> Thanks,
> Shilpa
On Thu, Oct 14, 2021 at 3:17 PM Shilpa Shankar
> wrote:
We enabled flame graphs to troubleshoot an issue with our job by adding
rest.flamegraph.enabled: true to flink-conf.yaml. The UI does not display
anything when we select an operator and go to the FlameGraph tab. Is there
something else that needs to be enabled on the Flink cluster?
Thanks,
Shilpa
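For reference, flame graphs are an opt-in feature in Flink 1.13 (marked experimental), enabled cluster-wide in flink-conf.yaml. A minimal sketch of the relevant setting:

```yaml
# flink-conf.yaml — enables the FlameGraph tab in the web UI (Flink >= 1.13).
# Sampling only starts after an operator is selected, so the tab can take a
# few seconds to populate.
rest.flamegraph.enabled: true
```

The JobManager and TaskManagers need a restart for the setting to take effect, which is one thing worth checking when the tab stays empty.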
Hello,
We have enabled the DataDogHTTPReporter to fetch metrics on Flink v1.13.1
running on Kubernetes. The metric flink.operator.KafkaConsumer.records_lag_max
is not reporting accurate values: it displays 0 most of the time, and when
it does fetch a value, it seems to be wrong when I compare the …
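For context, the Datadog reporter described above is typically configured in flink-conf.yaml along these lines (a sketch based on the Flink 1.13 metrics documentation; the API key and tags are placeholders):

```yaml
# flink-conf.yaml — Datadog HTTP metrics reporter (Flink 1.13).
metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.apikey: <your-datadog-api-key>
metrics.reporter.dghttp.tags: env:prod,cluster:flink-session
```

Note that records_lag_max is a Kafka client metric that Flink forwards as-is; it resets to NaN/0 when no new records arrive in the sampling window, which can explain intermittent zeros independent of the reporter.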
>> like? How are you deploying, i.e., standalone with your own manifests, the
>> Kubernetes integration of the Flink CLI, some open-source operator, etc.?
>>
>> Also, are you using a High Availability setup for the JobManager?
>>
>> Best,
>> Austin
>>
Hello,
We have a Flink session cluster in Kubernetes running on 1.12.2. We
attempted an upgrade to v1.13.1, but the JobManager pods are continuously
restarting and are in a crash loop.
Logs are attached for reference.
How do we recover from this state?
Thanks,
Shilpa
2021-06-30 16:03:25,965 ER…
Hello,
We are using pyflink's DataStream API v1.12.1 to consume from Kafka and
want to use one of the fields as the "rowtime" for windowing. We realize we
need to convert the BIGINT to TIMESTAMP before we use it as "rowtime", but
we get:
py4j.protocol.Py4JJavaError: An error occurred while calling o91.sel…
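The conversion the message describes is, at its core, an epoch-milliseconds-to-timestamp mapping. The pyflink/SQL wiring is omitted here, but a plain-Python sketch of what the rowtime expression has to produce (the function name `millis_to_timestamp` is illustrative):

```python
from datetime import datetime, timezone

def millis_to_timestamp(epoch_ms: int) -> datetime:
    """Convert a BIGINT epoch-milliseconds value to a UTC timestamp —
    the shape an event-time (rowtime) attribute needs."""
    return datetime.fromtimestamp(epoch_ms / 1000.0, tz=timezone.utc)
```

In Flink SQL the equivalent is commonly expressed with the built-in functions `TO_TIMESTAMP(FROM_UNIXTIME(ts / 1000))`, since `FROM_UNIXTIME` takes seconds rather than milliseconds; which form applies depends on how the source table is declared.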