Hello everyone,
is there any example of a UDF that takes Structs as input parameters or
returns one as output?
I'm currently implementing a UDF which should be something like:
Struct merge(Struct before, Struct after)
(these structs are from nested JSON objects, like {"field1": "value",
"before": {}, "after
As an additional question, I would like to ask whether it is mandatory to
specify the schema of the input struct statically within the UDF
implementation.
To add a bit more detail, my actual use case is the following:
I have a handful of topics to which JSON messages are published with the
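For what it's worth, the merge semantics you describe can be sketched independently of the ksqlDB API. The class and field names below are hypothetical, and plain Maps stand in for ksqlDB's Struct type; in an actual ksqlDB UDF you would, to my understanding, annotate a method with @Udf and declare the struct schemas statically (e.g. via the schema attribute of @Udf/@UdfParameter, or a schema provider for dynamic cases), so some schema declaration is typically needed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of "merge(before, after)" semantics only, using plain
// Maps as a stand-in for ksqlDB's Struct: non-null fields in "after"
// override the corresponding fields in "before".
public final class MergeSketch {

    public static Map<String, Object> merge(Map<String, Object> before,
                                            Map<String, Object> after) {
        Map<String, Object> merged = new HashMap<>(before);
        for (Map.Entry<String, Object> entry : after.entrySet()) {
            // Keep the "before" value when "after" has no value for the field.
            if (entry.getValue() != null) {
                merged.put(entry.getKey(), entry.getValue());
            }
        }
        return merged;
    }
}
```

A real UDF method would carry the same logic but operate on org.apache.kafka.connect.data.Struct values built against the declared schema.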
Hi Ryanne,
> I frequently demo this stuff, where I pull the plug on entire DCs and
apps keep running like nothing happened.
Is there any public recording, documentation about these demos?
It would be very useful to see how it works.
Thanks,
Peter
On Thu, 13 Feb 2020 at 00:42, Ryanne Dolan wrote:
Hi, all,
Sorry for the confusion. I didn’t look too closely at it; I was just going by
the fact that it was listed under the scope of KIP-221.
I agree that the final design of the KIP doesn’t have too much to do with the
description of KAFKA-4835. Maybe we should remove that ticket from the KIP
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 2.4.1.
This is a bug-fix release that includes fixes and improvements from 38
JIRAs, including fixes for a few critical bugs.
Release notes for the 2.4.1 release:
https://home.apache.org/~bbejeck
Hi community,
I ran into a disk failure while using Kafka, and fortunately it did not crash
the entire cluster. So I am wondering how Kafka handles multiple disks and
how it manages to keep working in the case of a single disk failure. The more
detail, the better. Thanks!
Whether your brokers have a single data directory or multiple data directories
on separate disks, when a disk fails, the topic partitions located on that disk
become unavailable. What happens next depends on how your cluster and topics
are configured.
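For reference, a broker's data directories are listed in the log.dirs broker setting; the paths below are illustrative:

```properties
# server.properties excerpt (illustrative paths): two data directories on
# separate physical disks (JBOD). Kafka spreads partition replicas across them.
log.dirs=/data/disk1/kafka-logs,/data/disk2/kafka-logs
```

As far as I know, since Kafka 1.0 (KIP-112) a broker with multiple log directories stays up when one of them fails: only the replicas stored on the failed disk go offline, and leadership for those partitions can move to in-sync replicas on other brokers. Before that, a single disk failure would take down the whole broker.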
If the topics on the affected broker have r