Kafka community,
Getting java.lang.OutOfMemoryError: Requested array size exceeds VM limit
when the Atlas hook in Hive tries to send a Kafka message.
Let me know whether any workaround is possible for this issue, or whether it
is caused by JDK issue https://bugs.openjdk.java.net/browse/JDK-8154035.
2018-09-10
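One thing worth checking: "Requested array size exceeds VM limit" means the
JVM was asked to allocate an array larger than it can represent (close to
Integer.MAX_VALUE entries), so the first suspect is a single enormous
serialized message rather than the JDK bug. A rough sketch of a size guard on
the producing side; the topic name, the 10 MB bound, and the payload are
placeholders, and whether this prevents the OOM depends on where the
oversized array is actually allocated:

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class GuardedHookSend {
    // Example bound; in practice this should match the broker's
    // message.max.bytes.
    private static final int MAX_PAYLOAD_BYTES = 10 * 1024 * 1024;

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                ByteArraySerializer.class.getName());
        // Also cap the request size so an oversized record fails with a
        // RecordTooLargeException instead of being buffered.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, MAX_PAYLOAD_BYTES);

        byte[] payload = "hook message".getBytes(StandardCharsets.UTF_8); // placeholder

        if (payload.length > MAX_PAYLOAD_BYTES) {
            // Drop (or truncate/split) rather than let a giant allocation
            // take the JVM down.
            System.err.println("Dropping oversized message: " + payload.length + " bytes");
            return;
        }
        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("ATLAS_HOOK", payload));
        }
    }
}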
Thanks Guozhang, that was a very good answer!
I understand now: the idea is that the client cleans up after itself, so
there is only a minimal amount of garbage left in the repartition topic.
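For context, the cleanup being described is Streams using the admin client's
deleteRecords call (KIP-107) to purge repartition-topic records once they
have been committed. A minimal standalone sketch of that call; the topic
name, partition, and offset here are made up:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class PurgeRepartitionTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Drop everything below offset 1000 in partition 0; Streams
            // issues the equivalent request after offsets are committed.
            TopicPartition tp =
                    new TopicPartition("my-app-KSTREAM-AGGREGATE-repartition", 0);
            admin.deleteRecords(
                         Collections.singletonMap(tp, RecordsToDelete.beforeOffset(1000L)))
                 .all()
                 .get(); // block until the brokers acknowledge the deletion
        }
    }
}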
We also figured out that we were indeed hitting another max open files
limit, and adjusting that limit fixed it.
Hi
We are on Kafka 1.1 and have 3 Kafka brokers, and we need your help to
understand the error message and what it would take to fix the problem.
On Broker-1 we see the following logs several times, and some producers fail
to write to Kafka:
[2018-10-08 12:28:25,609] INFO [ReplicaFetcher replicaId
Anyone? We have really hit a wall deciphering this error log, and we don't
know how to fix it.
On Wed, Oct 10, 2018 at 12:52 PM Raghav wrote:
> Hi
>
> We are on Kafka 1.1 and have 3 Kafka brokers, and we need your help to
> understand the error message and what it would take to fix the problem.
>
>
Hi,
I'm exploring whether it is possible to use Kafka Streams for batch
processing with at-least-once semantics.
What I want to do is insert records into external storage in bulk, and
commit offsets after the bulk insertion to achieve at-least-once
semantics.
A processing topology can
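One shape this can take with the low-level Processor API is a sink processor
that buffers records, bulk-inserts them, and only then calls
ProcessorContext#commit() so that offsets are committed after the data is
safely stored. The sketch below is only an illustration under the default
at_least_once guarantee; the ExternalStore interface, the batch size, and
the flush interval are placeholders of mine:

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;

public class BulkInsertProcessor extends AbstractProcessor<String, String> {

    // Hypothetical client for the external storage system.
    public interface ExternalStore {
        void bulkInsert(List<String> records);
    }

    private static final int BATCH_SIZE = 1000; // example value

    private final ExternalStore store;
    private final List<String> buffer = new ArrayList<>();

    public BulkInsertProcessor(ExternalStore store) {
        this.store = store;
    }

    @Override
    public void init(ProcessorContext context) {
        super.init(context);
        // Flush on a wall-clock timer as well, so low-volume partitions
        // still get written and committed.
        context.schedule(30_000L, PunctuationType.WALL_CLOCK_TIME, ts -> flush());
    }

    @Override
    public void process(String key, String value) {
        buffer.add(value);
        if (buffer.size() >= BATCH_SIZE) {
            flush();
        }
    }

    private void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        // If this throws, commit() is never reached, so the records are
        // re-read after restart: at-least-once.
        store.bulkInsert(buffer);
        buffer.clear();
        // Requests (does not force) an offset commit at the next opportunity.
        context().commit();
    }

    @Override
    public void close() {
        flush();
    }
}

Wired in with Topology#addProcessor (or KStream#process) as the last node,
this gives duplicates on failure but never lost records, which is exactly
the at-least-once behaviour described above.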
Thanks for the quick answer, Brett.
You are right, my brokers did not shut down cleanly. The issue was not related
to the update.
Thanks for the pointers!
Best,
Claudia
-----Original Message-----
From: Brett Rann
Sent: Tuesday, 9 October 2018 03:26
To: Users
Subject: Re: Problem