Just to add a bit of context: the first-level members all-exceptions,
root-exception, truncated and timestamp have been around for longer.
The exceptionHistory was added in Flink 1.13. As part of this change,
the aforementioned members were deprecated (see [1]). We kept them for
backwards-compatibility.
Hi John,
Going with processing time is perfectly sound if the results meet your
requirements and you can easily live with events misplaced into the wrong time
window.
This is also quite a bit cheaper resource-wise.
However, you might want to keep in mind situations where things break down
(networ
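For what it's worth, a minimal DataStream sketch of such a processing-time count (the event type, key selector and window size are assumptions for illustration, not your actual pipeline):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ProcessingTimeCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical (userId, 1L) events; replace with the stream fed by your REST API.
        DataStream<Tuple2<String, Long>> events = env.fromElements(
                Tuple2.of("userA", 1L), Tuple2.of("userB", 1L), Tuple2.of("userA", 1L));

        events
            .keyBy(e -> e.f0)
            // Whatever arrives within the 1-minute processing-time window is counted there,
            // even if by event time it "belongs" to an earlier window.
            .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
            .sum(1)
            .print();

        env.execute("processing-time-count");
    }
}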
Hi Rahul
From your description, I guess you could try using different flink-conf.yaml
files (one for the server and another for the client).
I am not an expert on SSL and security, so please correct me if I
am wrong.
Best,
Guowei
On Wed, Nov 24, 2021 at 3:54 AM Rahul wrote:
> Hello,
> I am trying to se
Hi Joseph
Would you like to give more details about the error message?
Best,
Guowei
On Thu, Nov 25, 2021 at 2:59 AM Joseph Lorenzini
wrote:
> Hi all,
>
>
>
> I have an implementation of the KafkaDeserializationSchema interface that
> deserializes a Kafka consumer record into a generic record.
>
>
>
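In the meantime, for comparison, here is a minimal sketch of such a schema (the class name, the assumption that the payload is raw Avro binary, and the schema handling are illustrative only; adjust to your actual setup):

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class GenericRecordDeserializationSchema
        implements KafkaDeserializationSchema<GenericRecord> {

    private final String schemaString;  // keep the schema as a String so the class serializes easily
    private transient Schema schema;
    private transient GenericDatumReader<GenericRecord> reader;

    public GenericRecordDeserializationSchema(String schemaString) {
        this.schemaString = schemaString;
    }

    @Override
    public boolean isEndOfStream(GenericRecord nextElement) {
        return false;  // the stream is unbounded
    }

    @Override
    public GenericRecord deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
        if (reader == null) {
            schema = new Schema.Parser().parse(schemaString);
            reader = new GenericDatumReader<>(schema);
        }
        // Assumes the value is plain Avro binary without a schema-registry header.
        return reader.read(null, DecoderFactory.get().binaryDecoder(record.value(), null));
    }

    @Override
    public TypeInformation<GenericRecord> getProducedType() {
        // Lets Flink serialize GenericRecord with its Avro serializer instead of Kryo.
        return new GenericRecordAvroTypeInfo(new Schema.Parser().parse(schemaString));
    }
}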
Hi Jonas,
Previously Flink indeed did not support checkpoints after some tasks finished.
In 1.14 we implemented a first version of this feature (namely
https://issues.apache.org/jira/browse/FLINK-2491),
and it can be enabled by setting
execution.checkpointing.checkpoints-after-tasks-finish.enabled to true.
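For example, roughly like this (a minimal sketch; please double-check the exact option key and its default against the 1.14 documentation):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointsAfterTasksFinish {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Feature flag discussed above; it is opt-in in 1.14.
        conf.setString("execution.checkpointing.checkpoints-after-tasks-finish.enabled", "true");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.enableCheckpointing(60_000);  // checkpoint every 60 seconds
        // ... build the rest of the pipeline here ...
    }
}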
I'm trying to track down a couple of errors I've hit related to key groups. I
want to verify that all of my keys have stable hashes. I tried to print
out the execution plan, but it doesn't contain enough info.
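For reference, a small sketch that prints where a given key would land (maxParallelism, parallelism and the sample key are assumptions; plug in your real values and keys):

import org.apache.flink.runtime.state.KeyGroupRangeAssignment;

public class KeyGroupCheck {
    public static void main(String[] args) {
        int maxParallelism = 128;  // assumed max parallelism of the job
        int parallelism = 4;       // assumed operator parallelism
        Object key = "some-key";   // replace with your real keys

        int keyGroup = KeyGroupRangeAssignment.assignToKeyGroup(key, maxParallelism);
        int subtask =
            KeyGroupRangeAssignment.assignKeyToParallelOperator(key, maxParallelism, parallelism);
        System.out.println(key + " -> keyGroup=" + keyGroup + ", subtask=" + subtask);
    }
}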
Well, what I'm thinking for 100% accuracy and no data loss is to just base the
count on processing time. So whatever arrives in that window is counted. If
I get some events of the "current" window late and they go into another
window, it's ok.
My pipeline is like so:
browser(user)->REST API-->l
I included the user ML in the thread.
@users Are you still using Zookeeper 3.4? If so, were you planning to
upgrade Zookeeper in the near future?
I'm not sure about ZK compatibility, but we'd also upgrade Curator to
5.x, which doesn't support Zookeeper 3.4 anymore.
On 25/11/2021 21:56, Till
Hi community,
I am using Flink 1.11 + Java 8 and I was updating my application from
Spring Boot 1 to Spring Boot 2.6. Then my integration test of Flink + Kafka
started giving me this error: "java.lang.NoClassDefFoundError:
scala/concurrent/ExecutionContext$parasitic$". The older version of Spring
Hi all,
I have been struggling with this issue for a couple of days now.
Checkpointing appears to fail because the source task (a Kinesis stream in this
case) is in a FINISHED state.
Excerpt from Jobmanager logs:
2021-11-25 12:52:00,479 INFO
org.apache.flink.runtime.executiongraph.Executio
Hi John,
… just a short hint:
With the DataStream API you can
* hand-craft a trigger that decides when and how often to emit intermediate,
punctual and late window results, and when to evict the window and stop
processing late events
* in order to process late events you also need to specify for
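As a rough illustration of these points (the event type, keys, window size, lateness and the side-output tag are all assumptions for the sake of the example):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

public class LateEventsSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // (userId, count, eventTimeMillis) - a stand-in for the real events
        DataStream<Tuple3<String, Long, Long>> events = env
            .fromElements(
                Tuple3.of("userA", 1L, 1_000L),
                Tuple3.of("userB", 1L, 2_000L),
                Tuple3.of("userA", 1L, 500L))
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple3<String, Long, Long>>forBoundedOutOfOrderness(
                        Duration.ofSeconds(10))
                    .withTimestampAssigner((e, ts) -> e.f2));

        // Events arriving after watermark + allowed lateness are routed here, not dropped.
        final OutputTag<Tuple3<String, Long, Long>> lateTag =
            new OutputTag<Tuple3<String, Long, Long>>("late-events") {};

        SingleOutputStreamOperator<Tuple3<String, Long, Long>> counts = events
            .keyBy(e -> e.f0)
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            .allowedLateness(Time.minutes(5))   // late firings update the already-emitted count
            .sideOutputLateData(lateTag)
            // a custom .trigger(...) could be plugged in here to control early/late firings
            .sum(1);

        counts.print();
        counts.getSideOutput(lateTag).print();

        env.execute("late-events-sketch");
    }
}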
Can you clarify when each exception occurs? Is the latter causing the
first one?
There are a few possible explanations.
One could be an implementation issue in Micrometer where they use the
JVM's common pool. In this case a thread may or may not use the user-code
classloader.
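A tiny sketch of what that means in practice (purely illustrative, not Micrometer code): work scheduled on the JVM common pool runs on shared threads whose context classloader is not necessarily the user-code classloader of the submitting task.

import java.util.concurrent.CompletableFuture;

public class CommonPoolClassLoaderCheck {
    public static void main(String[] args) {
        // Prints the context classloader seen by a common-pool thread vs. the caller's thread.
        CompletableFuture.runAsync(() -> System.out.println(
                "common pool thread sees: " + Thread.currentThread().getContextClassLoader()))
            .join();
        System.out.println("caller thread sees:      " + Thread.currentThread().getContextClassLoader());
    }
}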
Regarding singl
Thanks. Using the DataStream API.
On Wed, 24 Nov 2021 at 20:56, Caizhi Weng wrote:
> Hi!
>
> Are you using the datastream API or the table / SQL API? I don't know if
> datastream API has this functionality, but in table / SQL API we have the
> following configurations [1].
>
>- table.exec.emit.
root-exception: The last exception that caused a job to fail.
all-exceptions: All exceptions that occurred the last time a job failed.
This is primarily useful for completed jobs.
exception-history: Exceptions that previously caused a job to fail.
On 25/11/2021 11:52, Mahima Agarwal wrote:
Hi Team,
Please find the query below regarding the exceptions
API (/jobs/:jobid/exceptions).
In the response of the above REST API,
users are getting 3 types of exception members:
1. exceptionHistory
2. all-exceptions
3. root-exception
What is the purpose of the above 3 exceptions?
Any leads would be appreciated.
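For context, the endpoint can be queried directly; a minimal sketch (host, port and job id are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JobExceptionsQuery {
    public static void main(String[] args) throws Exception {
        String jobId = "<job-id>";  // placeholder: the JobID shown in the web UI
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8081/jobs/" + jobId + "/exceptions"))
            .GET()
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON body contains exceptionHistory plus the deprecated
        // root-exception, all-exceptions, truncated and timestamp members.
        System.out.println(response.body());
    }
}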
You're welcome!
Piotrek
On Wed, 24 Nov 2021 at 17:48, Shazia Kayani wrote:
> Hi Piotrek,
>
> Thanks for your message!
>
> Ok, that does sound interesting and is an approach I had not considered
> before; I will take a look into it and investigate further
>
>
> Thank you!
>
> Best wishes,
>
> Shazia
>
>
Will Flink lose some old keyed state when changing the parallelism, like 2 ->
5, or 5 -> 3?
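For reference, keyed state is spread over key groups and redistributed to subtasks on rescale, as long as maxParallelism stays the same across restores. A minimal sketch (values are examples only):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RescaleSettings {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Keyed state lives in key groups numbered 0..maxParallelism-1; on rescale the
        // groups are reassigned to subtasks, so keep maxParallelism fixed across savepoints.
        env.setMaxParallelism(128);  // example value; must not change between restores
        env.setParallelism(2);       // can be changed, e.g. to 5 or 3, when restoring
        // ... pipeline ...
    }
}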