Hi All,
Is there any metric I can use to check whether the memory allocated
to Kafka is sufficient for the given load on the brokers, and whether Kafka
is making optimal use of the page cache, so that consumer fetch reads are
served from memory rather than going to disk on every read and slowing down
the overall consumer pr
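Not an authoritative answer, but since Kafka serves consumer fetches out of the
OS page cache rather than a JVM-side cache, one coarse signal on Linux is how
much of the broker host's RAM is currently page cache. A minimal sketch (class
and method names are mine, and it parses /proc/meminfo, so Linux only):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class PageCacheRatio {
    // Parse a "Key:   12345 kB" line from /proc/meminfo into a long (kB),
    // or -1 if the key is absent.
    static long meminfoKb(List<String> lines, String key) {
        for (String line : lines) {
            if (line.startsWith(key + ":")) {
                return Long.parseLong(line.replaceAll("[^0-9]", ""));
            }
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("/proc/meminfo"));
        long total = meminfoKb(lines, "MemTotal");
        long cached = meminfoKb(lines, "Cached");
        System.out.printf("page cache: %d kB (%.1f%% of RAM)%n",
                cached, 100.0 * cached / total);
    }
}
```

A shrinking "Cached" fraction under load, together with rising disk read I/O on
the log volumes, would suggest fetches are missing the page cache.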
In Parsing Canonical Form, an Avro schema uses a single "name" key whose
value is the record's full name: the namespace, if any, joined to the record
name with a dot:
https://avro.apache.org/docs/current/spec.html#Transforming+into+Parsing+Canonical+Form
There is a common, non-canonical alternative out in the wild
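To illustrate the rule the spec link describes (the separate "namespace"
attribute gets folded into one full name), here is a minimal sketch; the helper
is mine, not an Avro API:

```java
public class CanonicalName {
    // Per the Avro spec: a name containing a dot is already a full name and
    // the namespace is ignored; otherwise the full name is namespace.name.
    static String fullName(String namespace, String name) {
        if (name.contains(".") || namespace == null || namespace.isEmpty()) {
            return name;
        }
        return namespace + "." + name;
    }

    public static void main(String[] args) {
        System.out.println(fullName("com.example", "User"));     // com.example.User
        System.out.println(fullName(null, "com.example.User"));  // com.example.User
    }
}
```

The real transformation is implemented in Avro's
org.apache.avro.SchemaNormalization (toParsingForm), if you have the library on
the classpath.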
Checked Java DNS resolution caching:
```java
sun.net.InetAddressCachePolicy.get();          // positive-cache TTL, seconds (JDK-internal API)
sun.net.InetAddressCachePolicy.getNegative();  // negative-cache TTL, seconds
```
and those return 30 and 10 respectively, so caching looks fine and stale
addresses shouldn't be held for too long.
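The same TTLs can be checked through the supported security properties
(networkaddress.cache.ttl / networkaddress.cache.negative.ttl) instead of the
sun.net internals. A sketch, using the 30s/10s defaults from above as fallbacks
for when the properties are unset:

```java
import java.security.Security;

public class DnsCacheTtl {
    // Effective JVM DNS-cache TTL in seconds: the security property if set,
    // otherwise the JDK default supplied by the caller.
    static int effectiveTtl(String key, int dflt) {
        String v = Security.getProperty(key);
        return (v == null) ? dflt : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        // Without a SecurityManager the JDK defaults are 30s positive, 10s negative.
        System.out.println("positive ttl: " + effectiveTtl("networkaddress.cache.ttl", 30));
        System.out.println("negative ttl: " + effectiveTtl("networkaddress.cache.negative.ttl", 10));
    }
}
```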
On 2021/05/14 12:55:00, Michał Łowicki wrote:
> Hey,
>
> Had incident
Hey,
Had incident where one broker died and got later different IP address. Some
clients / pods (everything lives on K8s) detected IP change and logged:
[Producer clientId=producer-1] Hostname for node 1 changed from to .
(logged by org.apache.kafka.clients.ClusterConnectionStates)
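For what it's worth, clients can also be told to re-resolve and try every IP a
bootstrap/broker hostname resolves to via the client.dns.lookup setting
(KIP-302, available since Kafka 2.1; I believe use_all_dns_ips became the
default in 2.6), which helps when a broker pod comes back on a new IP. A sketch
of just the relevant config, with a placeholder bootstrap address:

```java
import java.util.Properties;

public class ClientDnsConfig {
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.svc:9092"); // placeholder
        // Re-resolve and attempt all resolved IPs instead of only the first,
        // so a broker that returns on a different pod IP is picked up.
        props.put("client.dns.lookup", "use_all_dns_ips");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("client.dns.lookup"));
    }
}
```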