Hi Anish,
Yeah, changing the input topic partitions at runtime could be problematic. But
it doesn't seem like that's what's going on here. (For the regex case, the
application will work fine.)
Are there any broker failures going on while the test is running? Also, I wonder
about how the rest of your code
Hi all,
This is in fact a question about ZooKeeper, but it is closely related to Kafka.
I'm running a test to check that we can create up to 100k topics in
a Kafka cluster, so that we can manage multitenancy this way.
After a proper setup, I have managed to create those 100k topics in Kafka,
but now I
Hello Eno,
Yes, I have followed similar code to set up the streams application. Does any
other code inside the application affect the bootstrapping steps?
I have a custom interface in which I populate Streams properties using
environment variables. I attach the state store to the builder using:
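For illustration, a minimal sketch of one common way to attach a persistent
store with the 0.11-era Streams API; the store name and String serdes below
are placeholders, not the original code:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.state.Stores;

    public class StoreSetup {
        public static KStreamBuilder attachStore() {
            KStreamBuilder builder = new KStreamBuilder();
            // Register a persistent key-value store with the builder;
            // "my-store" and the String serdes are placeholders.
            builder.addStateStore(Stores.create("my-store")
                    .withKeys(Serdes.String())
                    .withValues(Serdes.String())
                    .persistent()
                    .build());
            return builder;
        }
    }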
Hello,
I have actually formatted the question with sufficient information here:
https://stackoverflow.com/questions/45512063/console-producer-error-after-implementing-with-tls-ssl
In summary, I have tried to debug this and it looks like the certificates
are being printed and recognised as "Trust
Sorry for the delayed reply, I was on holiday.
Manikumar, it works :).
Thank you very much for your help.
Gabriel.
2017-07-31 10:27 GMT+02:00 Manikumar :
> We should pass the necessary SSL configs using the --command-config
> command-line option.
>
> >>security.protocol=SSL
> >>ssl.truststore.locatio
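For illustration, a sketch of what such a config file and invocation
typically look like; the paths, passwords, and host names are placeholders:

    # client-ssl.properties (placeholder paths and passwords)
    security.protocol=SSL
    ssl.truststore.location=/var/private/ssl/client.truststore.jks
    ssl.truststore.password=changeit

    # e.g. with a tool that accepts --command-config:
    bin/kafka-consumer-groups.sh --bootstrap-server broker1:9093 --list \
        --command-config client-ssl.properties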
Hi,
I have a 3 nodes cluster with 18 GB RAM and 2 GB swap.
Each node has the following JVMs (Xms=Xmx):
- Zookeeper 2GB
- Kafka 12 GB
- Kafka mirror-maker DCa 1 GB
- Kafka mirror-maker DCb 1 GB
All the JVMs together consume 16 GB, which leaves 2 GB for the OS (Debian
Jessie, 64-bit).
Why do I have no free swap
To avoid swapping you should set swappiness to 1, not 0. 1 is a request (don't
swap if avoidable), whereas 0 is a demand (processes will be killed by the OOM
killer instead of swapping).
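For reference, a sketch of how swappiness is typically set on Linux:

    # apply immediately
    sysctl vm.swappiness=1
    # persist across reboots
    echo 'vm.swappiness=1' >> /etc/sysctl.conf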
However, I'm wondering why you are running such large heaps. Most of the ZK
heap is used for storage of the data in memory, and
Just to make it clear, Haitao: in your case you do not have to restart brokers
(since you are changing it at the topic level).
On 8/6/17, 11:37 PM, "Kaufman Ng" wrote:
Hi Haitao,
The retention time (retention.ms) configuration can exist as a broker-level
and/or topic-level config.
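For illustration, a topic-level override can be applied while the brokers
keep running; the topic name and retention value below are placeholders:

    bin/kafka-configs.sh --zookeeper zk1:2181 --alter \
        --entity-type topics --entity-name my-topic \
        --add-config retention.ms=86400000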
Thanks Todd, I will set swappiness to 1.
These machines will be the future production cluster for our main
datacenter. We have 2 remote datacenters.
Kafka will buffer logs and Elasticsearch will index them.
Is it a bad practice to have all these JVMs on the same virtual machine ?
What do you r
Hi,
I'm getting this error when trying to connect to ZooKeeper once I have created
70k+ topics.
I have played with the Java property jute.maxbuffer with no success.
Has anybody seen this error before?
Thanks in advance,
David
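For reference, jute.maxbuffer is a JVM system property and has to be raised
on the ZooKeeper servers and on the clients (the brokers) alike; a sketch
with a placeholder 4 MB value:

    # passed as a JVM flag to both the ZooKeeper server and the Kafka broker
    -Djute.maxbuffer=4194304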
In production, you probably want to avoid stacking up the applications like
this. There are a number of reasons:
1) Kafka's performance benefits significantly from other applications not
polluting the OS page cache
2) ZooKeeper has specific performance requirements - among them a
dedicated disk
One of the most appealing features of the streams-based architecture is the
ability to replay history. This concept was highlighted in a blog post
[0] just the other day.
Practically, though, I am stuck on the mechanics of replaying data when
that data is also periodically expiring. If your logs e
You are not going to get that kind of latency (i.e. less than 100
microseconds). In my experience, consumer->producer latency averages around
20 milliseconds (the cluster is in AWS with enhanced networking).
On 8/3/17, 2:32 PM, "Chao Wang" wrote:
Hi,
I observed that it took 2-6 mill
Thanks, David. I was trying to do Kafka pub/sub on a local, closed
network. In my case, I observed microsecond latency with bare-bone
sockets, and I would like to know how to configure Kafka to achieve
a similar result; if it turns out to be infeasible, what might be the
cause of the additional
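For context, a sketch of the client settings usually tuned first for latency;
the values are placeholders and no particular latency is guaranteed:

    # producer
    linger.ms=0         # send immediately instead of waiting to batch
    acks=1              # do not wait for the full ISR to acknowledge
    # consumer
    fetch.min.bytes=1   # return a fetch as soon as any data is available
    fetch.max.wait.ms=1 # do not block long waiting for fetch.min.bytes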
Hi all,
we are fighting with offset rewinds of seemingly random size, hitting
seemingly random partitions, when restarting any node in our Kafka cluster. We
are running out of ideas - any help or pointers to things to investigate
are highly appreciated.
Our Kafka setup is dual data center with tw
Dmitry, which KIP are you referring to? I see this behavior too sometimes.
On Fri, Aug 4, 2017 at 10:25 AM, Dmitry Minkovsky
wrote:
> Thank you Matthias and Bill,
>
> Just want to confirm: my offsets *were* being committed, but I was
> being affected by `offsets.retention.minutes`, which I
Hi Christiane,
Thanks for the email. That looks like
https://issues.apache.org/jira/browse/KAFKA-5600
Ismael
On Mon, Aug 7, 2017 at 7:04 PM, Christiane Lemke wrote:
> Hi all,
>
> we are fighting with offset rewinds of seemingly random size and hitting
> seemingly random partitions on restartin
Thanks for sharing, Sahil. Just FYI, there is a KIP proposal to consider
always turning on "log.cleaner.enable" here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-184%3A+Rename+LogCleaner+and+related+classes+to+LogCompactor
Guozhang
On Thu, Aug 3, 2017 at 5:58 AM, sahil aggarwal
Hello Dmitry,
The right way to think of reprocessing is that:
1) You can reprocess the source stream from a given Kafka topic only within
the source topic's retention period. For example, if an event happens and is
produced to the source topic at time t0, and that topic's retention period is
t1, then
Damien,
Thanks for pointing out the error. I had tried a different way of
initializing the store.
Now that I am able to compile, I started to get the below error. I looked
up other suggestions for the same error and followed up to upgrade Kafka to
0.11.0.0 version. I still get this error :/
Hi Garrett,
This one
https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-5510
Best,
Dmitry
Mon, 7 Aug 2017 at 14:22, Garrett Barton :
> Dmitry, which KIP are you referring to? I see this behavior too sometimes.
>
> On Fri, Aug 4, 2017 at 10:25 AM, Dmitry Minkovsky
> wrote:
>
>
Bingo! Thanks Dmitry, that is exactly what I'm running into. Looks like I
just have to set offsets.retention.minutes to be greater than the largest
log.retention.hours among my topics to be safe.
On Mon, Aug 7, 2017 at 9:03 PM, Dmitry Minkovsky
wrote:
> Hi Garrett,
>
> This one
> https://issues.apache.or
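For illustration, that relationship expressed as broker config; the numbers
are placeholders:

    # server.properties
    log.retention.hours=168          # longest log retention: 7 days
    offsets.retention.minutes=11520  # 8 days, i.e. longer than any log retention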
Hi,
When the producer sent records containing header values to the brokers,
the brokers did not accept the records and logged this message.
==
[2017-08-03 21:00:01,130] ERROR [Replica Manager on Broker 1]: Error
processing append operation on partition topic-0
(kafka.server.ReplicaManager)
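For context, a minimal sketch of producing a record with headers using the
0.11 Java client; the broker address, topic, key, value, and header name are
placeholders. Note that headers only exist in the v2 message format
introduced in 0.11, so brokers or topics on an older message format cannot
accept them.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.header.Header;
    import org.apache.kafka.common.header.internals.RecordHeader;

    public class HeaderProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // Attach one header to the record; name and value are placeholders.
            Header header = new RecordHeader("trace-id", "abc".getBytes());
            producer.send(new ProducerRecord<>("topic-0", null, "key", "value",
                    Collections.singletonList(header)));
            producer.close();
        }
    }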