ConfluentRegistryAvroDeserializationSchema.forGeneric() requires a reader
schema. How can we use it to deserialize using the writer schema?
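For reference, a minimal sketch of the call in question, assuming Scala (the
schema and registry URL are placeholders). As far as I understand, the writer
schema is fetched from the Schema Registry per record and resolved against
the reader schema you pass in:

import org.apache.avro.Schema
import org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroDeserializationSchema

// Reader schema: what the job wants to read. The writer schema used to
// encode each record is looked up in the registry and resolved against
// this one via standard Avro schema resolution.
val readerSchema: Schema = new Schema.Parser().parse(
  """{"type":"record","name":"SensorReading","fields":[
    |  {"name":"id","type":"string"},
    |  {"name":"timestamp","type":"long"},
    |  {"name":"temperature","type":"double"}
    |]}""".stripMargin)

// "http://registry:8081" is a placeholder registry URL.
val deser = ConfluentRegistryAvroDeserializationSchema.forGeneric(
  readerSchema, "http://registry:8081")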
On Fri, Sep 13, 2019 at 12:04 AM Lasse Nedergaard wrote:
> Hi Elias
>
> Thanks for letting me know. I have found it, but we also need the option to
> register Avro
Hi,
It would be helpful for understanding the problem if you could share the
logs.
Thank you~
Xintong Song
On Wed, Jan 15, 2020 at 12:23 AM burgesschen wrote:
> Hi guys,
>
> Our team is observing a stability issue on our Standalone Flink clusters.
>
> Background: The Kafka cluster our Flink
Hi folks,
I have two questions about types in Flink when using Scala:
1. Scala case classes:
This is my case class definition:
case class SensorReading(var id: String, var timestamp: Long, var temperature: Double)
In the documentation, Scala case classes are supported:
`Scala case classes (including Scala tuples)
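For reference, a minimal sketch of how such a case class is picked up by the
Scala API (the wildcard import provides the implicit TypeInformation; values
are made up):

import org.apache.flink.streaming.api.scala._

case class SensorReading(var id: String, var timestamp: Long, var temperature: Double)

val env = StreamExecutionEnvironment.getExecutionEnvironment
// With the wildcard import in scope, SensorReading is analyzed as a
// Flink case-class type rather than falling back to Kryo.
val readings: DataStream[SensorReading] =
  env.fromElements(SensorReading("sensor_1", 1547718199L, 35.8))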
Thanks!
I was able to track this down. Essentially it was a deserialization error
which propagated and might have prevented the channel from closing down
properly. This could be considered a problem, but I'm not going further down
the rabbit hole chasing a solution for the original deserialization error.
Hi Cam,
could you share a few more details about your job (e.g. which sources you
are using, what your settings are, etc.)? Ideally, you could provide a minimal
example so we can better understand the program.
From a high-level perspective, there might be different problems: First of
all, Flink d
Great, thanks a lot for looking into the problem and fixing it. I assume
that your PR will be merged very soon.
Cheers,
Till
On Tue, Jan 14, 2020 at 7:18 PM Benoit Hanotte wrote:
> Hello Till,
> thanks for your reply!
> I have been able to debug the issue and reported it in
> https://issues.apache.org/jira/browse/FLINK-15577.
Hi David,
I'm pulling in Kostas who worked on the StreamingFileSink and might be able
to answer some of your questions.
Cheers,
Till
On Mon, Jan 13, 2020 at 2:45 PM Leonard Xu wrote:
> Hi, David
>
> For your first description, I'm a little confused about duplicated records
> when backfilling, c
Hi Itamar,
for further debugging it would be helpful to get the full logs of Flink and
more information about your environment. Since I'm not too familiar with
Flink's PubSub connector, I have pulled in Richard (original author),
Becket and Robert (both helped with reviewing and merging this connector).
Hello Till,
thanks for your reply!
I have been able to debug the issue and reported it in
https://issues.apache.org/jira/browse/FLINK-15577.
It seems the old planner does not add the window specs to the Logical nodes'
digests, leading the HepPlanner to consider the aggregations to be equivalent,
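Not the reproduction from the ticket, just a hedged sketch of the query shape
involved, assuming Scala and the old planner (table and column names are made
up): two aggregations differing only in their window spec.

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
val settings = EnvironmentSettings.newInstance()
  .useOldPlanner().inStreamingMode().build()
val tEnv = StreamTableEnvironment.create(env, settings)

// Assumes a table "clicks" with a user_id column and an event-time
// attribute ts has been registered. The queries differ only in their
// window spec; if window specs are missing from the plan digests, a
// planner could wrongly treat them as identical.
val perMinute = tEnv.sqlQuery(
  """SELECT user_id, COUNT(*) FROM clicks
    |GROUP BY user_id, TUMBLE(ts, INTERVAL '1' MINUTE)""".stripMargin)
val perHour = tEnv.sqlQuery(
  """SELECT user_id, COUNT(*) FROM clicks
    |GROUP BY user_id, TUMBLE(ts, INTERVAL '1' HOUR)""".stripMargin)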
Hi Benoit,
thanks for reporting this issue. Since I'm not too familiar with the SQL
component, I've pulled in Timo and Jingsong, who know much better than I do
what could be wrong.
Cheers,
Till
On Mon, Jan 13, 2020 at 11:48 AM Benoit Hanotte wrote:
> Hello,
>
> We seem to be facing an issue with
Hi guys,
Our team is observing a stability issue on our Standalone Flink clusters.
Background: The Kafka cluster our Flink jobs read from/write to has some
issues, and every 10 to 15 minutes one of the partition leaders switches. This
causes jobs that write to/read from that topic to fail and restart. U
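Not a fix for the Kafka instability itself, but a sketch of one mitigation,
assuming Scala and the RestartStrategies API (attempt count and delay are
placeholder values):

import org.apache.flink.api.common.restartstrategy.RestartStrategies
import org.apache.flink.api.common.time.Time
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Ride out transient partition-leader switches with a bounded number
// of restarts instead of letting the job fail permanently.
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
  10,               // placeholder: up to 10 restart attempts
  Time.seconds(30)  // placeholder: 30s between attempts
))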
Hi,
What you are asking is certainly possible with the DataStream API. The Table
API/SQL is a declarative API, where you say what you want to compute, not how.
As a rule of thumb, I would say that whenever you need to manually handle your
state or timers, the DataStream API and ProcessFunction[1] will be a
better fit.
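As an illustration, a minimal sketch of manual state plus a timer in a
KeyedProcessFunction, assuming Scala (class name, tuple types, and the
one-minute timeout are made up):

import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

// Keeps the latest value per key and clears it one minute later:
// the kind of imperative state/timer logic SQL does not expose.
class LatestWithTimeout
    extends KeyedProcessFunction[String, (String, Long), (String, Long)] {

  private var latest: ValueState[java.lang.Long] = _

  override def open(parameters: Configuration): Unit = {
    latest = getRuntimeContext.getState(
      new ValueStateDescriptor("latest", classOf[java.lang.Long]))
  }

  override def processElement(
      in: (String, Long),
      ctx: KeyedProcessFunction[String, (String, Long), (String, Long)]#Context,
      out: Collector[(String, Long)]): Unit = {
    latest.update(in._2)
    // Assumes event-time timestamps are assigned upstream.
    ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60 * 1000L)
    out.collect(in)
  }

  override def onTimer(
      timestamp: Long,
      ctx: KeyedProcessFunction[String, (String, Long), (String, Long)]#OnTimerContext,
      out: Collector[(String, Long)]): Unit = {
    // Manually clear the state when the timer fires.
    latest.clear()
  }
}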
Hi All,
I read through the doc below and I am wondering if I can clean up the state
based on custom logic rather than the min and max retention time?
For example, I want to clean up all the state where the key = foo or
the value = bar, so that until the keys reach a particular value I just keep
accumulating
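For what it's worth, a sketch of custom cleanup in a KeyedProcessFunction,
assuming Scala ("foo"/"bar" stand in for your real condition, and the state
shape is made up):

import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

class CustomCleanup
    extends KeyedProcessFunction[String, (String, String), (String, String)] {

  private var accumulated: ValueState[String] = _

  override def open(parameters: Configuration): Unit = {
    accumulated = getRuntimeContext.getState(
      new ValueStateDescriptor("accumulated", classOf[String]))
  }

  override def processElement(
      in: (String, String),
      ctx: KeyedProcessFunction[String, (String, String), (String, String)]#Context,
      out: Collector[(String, String)]): Unit = {
    // Custom cleanup condition instead of a retention time:
    // drop this key's state as soon as the key or value matches.
    if (ctx.getCurrentKey == "foo" || in._2 == "bar") {
      accumulated.clear()
    } else {
      accumulated.update(in._2)
      out.collect(in)
    }
  }
}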