Another question: if I remember correctly, the broker needs to be restarted
after replacing the disk to recover from this. Is that correct? If so, I take
it that Kafka cannot detect by itself that the disk has been replaced, and a
manual restart is necessary.
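For reference, a minimal sketch of checking which log directories a broker
currently considers offline, assuming the stock kafka-log-dirs.sh tool
(shipped since Kafka 1.0); the broker ID and bootstrap address below are
placeholders:

    # Describe log-directory status for broker 1 (output is JSON); a
    # directory marked offline is reported with an error instead of
    # partition sizes.
    bin/kafka-log-dirs.sh --describe \
      --bootstrap-server localhost:9092 \
      --broker-list 1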
张祥 wrote on Wed, Mar 4, 2020 at 2:48 PM:
> Thanks Peter, it makes a lot of sense.
Hello Experts,
Any thoughts on this?
From: Sunil CHAUDHARI
Sent: Tuesday, March 3, 2020 5:46 PM
To: users@kafka.apache.org
Subject: Please help: How to print --reporting-interval in the perf metrics?
Hi,
I want to test consumer performance using kafka-consumer-perf-test.sh.
I am running the command below:
./
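The command above is truncated in the archive. As a hedged sketch of a
typical invocation: in the stock tool, --reporting-interval only produces
periodic output when --show-detailed-stats is also passed; topic, broker
address, and message count below are placeholders:

    # Print per-interval statistics every 5000 ms. Without
    # --show-detailed-stats, --reporting-interval has no visible effect.
    ./kafka-consumer-perf-test.sh \
      --broker-list localhost:9092 \
      --topic test-topic \
      --messages 1000000 \
      --show-detailed-stats \
      --reporting-interval 5000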
Oh, that's nice! Thank you!
Best Regards
Magnus Reftel
-----Original Message-----
From: Matthias J. Sax
Sent: Wednesday, March 4, 2020 05:27
To: users@kafka.apache.org
Subject: Re: Does the response to an OffsetFetch include non-committed
transactional offsets?
-----BEGIN PGP SIGNED MESSAGE-----
Ha
Hi,
I have read here and there that Kafka is not CPU-intensive, but mostly disk-
and network-bound. That seems reasonable, but it is not what I see in my
monitoring. Could anyone help me determine whether the CPU usage I see is
about the expected usage, or whether there is something in how we use Kafka
that makes it more CPU
Hi Bill,
I built from source and ran the unit and integration tests. They passed.
There were a large number of skipped tests, but I'm assuming that is
intentional.
Cheers
Eno
On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde wrote:
>
> Hi,
>
> I ran:
> $ https://github.com/elalonde/kafka/blob/master/bin/
Hello,
Does the KafkaConsumer subscribe method allow for incremental topic
subscriptions?
By incremental I mean that only the added and removed topics are
subscribed/unsubscribed respectively, and the other topics are not
unsubscribed and subscribed back.
From the javadoc API on the subscribe m
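For context, the javadoc states that topic subscriptions are not
incremental: each subscribe() call replaces the current subscription. A
minimal Java sketch of what that implies; topic names, group ID, and broker
address are placeholders:

    import java.time.Duration;
    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SubscribeIsNotIncremental {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Each subscribe() call replaces the whole subscription.
                consumer.subscribe(Arrays.asList("topic-a"));
                consumer.poll(Duration.ofMillis(100));

                // To add topic-b, re-subscribe with the full desired set;
                // subscribing to only "topic-b" would drop "topic-a".
                consumer.subscribe(Arrays.asList("topic-a", "topic-b"));
                consumer.poll(Duration.ofMillis(100));
            }
        }
    }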
Need help integrating Kafka with a 'Stateful Spark Streaming' application.
In a Stateful Spark Streaming application I am writing the 'OutputRow' in
'updateAcrossEvents', but I keep getting this error (*Required attribute
'value' not found*) while it is trying to write to Kafka. I know from the
do
Yes, you should restart the broker. I don't believe there's any code to check
whether a log directory previously marked as failed has returned to a healthy
state.
I always restart the broker after a hardware repair. I treat broker restarts as
a normal, non-disruptive operation in my clusters. I use a minimum
Hey there,
have you already sought help from the Spark community? Currently I don't
think we can attribute the symptom to Kafka.
Boyang
On Wed, Mar 4, 2020 at 7:37 AM Something Something
wrote:
> Need help integrating Kafka with 'Stateful Spark Streaming' application.
>
> In a Stateful Spark Str
Yes, I have. No response from them. I thought someone in the Kafka community
might know the answer. Thanks.
On Wed, Mar 4, 2020 at 9:49 AM Boyang Chen
wrote:
> Hey there,
>
> have you already sought help from the Spark community? Currently I don't
> think we can attribute the symptom to Kafka.
>
> Boy
Right. Let me start a KIP to add a few ideas to discuss.
Thanks,
Koushik
-----Original Message-----
From: Matthias J. Sax
Sent: Tuesday, March 3, 2020 8:35 PM
To: users@kafka.apache.org
Subject: [EXTERNAL] Re: Issue in retention with compact,delete cleanup policy
-----BEGIN PGP SIGNED MESSAGE-----
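For readers following the thread subject, a hedged sketch of the
configuration under discussion: a topic can carry both cleanup policies at
once, in which case the log is compacted and old segments are also deleted
by retention; topic name and retention value below are placeholders:

    # Set both policies on an existing topic; bracket syntax passes a list.
    bin/kafka-configs.sh --zookeeper localhost:2181 \
      --entity-type topics --entity-name my-topic --alter \
      --add-config 'cleanup.policy=[compact,delete],retention.ms=604800000'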
Simply adding 'toJSON' before 'writeStream' fixed the problem. Maybe it will
help someone.
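A hedged sketch of why that fix works, assuming the Spark Java API:
Dataset.toJSON() returns a Dataset<String> whose single column is named
"value", which is exactly the attribute the Kafka sink requires; broker
address, topic, and checkpoint path below are placeholders:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class KafkaSinkSketch {
        // 'outputRows' stands for the streaming Dataset produced by the
        // stateful transformation (e.g. via updateAcrossEvents).
        static StreamingQuery writeToKafka(Dataset<Row> outputRows)
                throws Exception {
            return outputRows
                .toJSON() // single string column named "value"
                .writeStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("topic", "output-topic")
                .option("checkpointLocation", "/tmp/checkpoints")
                .start();
        }
    }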
On Wed, Mar 4, 2020 at 10:38 AM Something Something <
mailinglist...@gmail.com> wrote:
> Yes, I have. No response from them. I thought someone in Kafka community
> might know the answer. Thanks.
>
>
Hi
We are on Kafka 1.1.1. We add a bunch of new entries (say ~10) to the
truststore and restart the broker so Kafka re-reads the truststore file.
Everything works fine.
We wanted to move to Kafka 2.0.x to get the new feature whereby we can
dynamically remove something from the truststore. Let's say, w
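For reference, a hedged sketch of the dynamic-reconfiguration path (KIP-226
and follow-ups): per-listener SSL stores can be altered at runtime with
kafka-configs.sh; listener name, broker ID, and path below are placeholders,
and the file must already contain the updated entries:

    # Point broker 0's EXTERNAL listener at the updated truststore; the
    # broker reloads it without a restart. The configured value has to
    # change for the update to take effect, so rotating the file name is
    # a common pattern.
    bin/kafka-configs.sh --bootstrap-server localhost:9092 \
      --entity-type brokers --entity-name 0 --alter \
      --add-config 'listener.name.external.ssl.truststore.location=/etc/kafka/truststore-v2.jks'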
Thanks Peter, really appreciate it.
Peter Bukowinski wrote on Wed, Mar 4, 2020 at 11:50 PM:
> Yes, you should restart the broker. I don't believe there's any code to
> check whether a log directory previously marked as failed has returned to
> a healthy state.
>
> I always restart the broker after a hardware repair. I trea
I'm seeing behaviour that I don't understand when I have consumers fetching
from multiple partitions of the same topic. There are two different
conditions arising:
1. A subset of the partitions allocated to a given consumer is not consumed
at all. The consumer appears healthy, the thread
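Not a diagnosis, but a hedged sketch of the fetch/poll settings usually
examined first when some assigned partitions make no progress; the values
shown are the client defaults:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    public class ConsumerFetchKnobs {
        // Fetch sizing and poll pacing determine how data from many
        // partitions shares each fetch response and each poll() call.
        static Properties fetchTuning() {
            Properties props = new Properties();
            props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "1048576"); // per-partition cap
            props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, "52428800");          // whole-fetch cap
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");              // records per poll()
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000");       // max gap between polls
            return props;
        }
    }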
Hi guys,
Sorry if this mail bothers you. I want to set up a topic that watches two
timestamp columns, so that data is consumed whenever either one changes.
Example: timestamp.column.name = createddate or modifieddate
Thanks for your help; glad to hear from you soon.
Thanks,
Hung Pham
Applicati
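A hedged sketch for this last question, assuming the Confluent JDBC source
connector (the poster does not name the connector): its
timestamp.column.name accepts a comma-separated list of columns, which the
connector coalesces to detect changed rows; connector name, connection URL,
and table below are placeholders:

    # JDBC source sketch: pick up rows new or modified in either column.
    name=jdbc-timestamp-source
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:postgresql://localhost:5432/mydb
    mode=timestamp
    timestamp.column.name=createddate,modifieddate
    table.whitelist=my_table
    topic.prefix=jdbc-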