Hi All,
I am facing an error while creating a topic manually using the kafka-node client.
The code is mentioned below.
Can anyone help please?
let topicsToCreate = [{ topic: topicName, partitions: 1, replicationFactor: 2 }];
admin.createTopics(topicsToCreate, (err, data) => {
  if (err) console.error(err);
  else console.log(data);
});
Hi,
After reading http://www.evanjones.ca/jvm-mmap-pause.html and
https://bugs.openjdk.java.net/browse/JDK-8076103 (alongside the linked
e-mail trail) I'm considering adding this flag when running Kafka.
I'm assuming this is safe to use and there are no unintended side effects?
Does anyone have
I'm in the same boat
On Thu, Jan 24, 2019, 4:36 AM Rahul Singh <rahul.si...@smartsensesolutions.com> wrote:
> Hi All,
>
> I am facing an error while creating a topic manually using the kafka-node client.
> The code is mentioned below.
>
> Can anyone help please?
>
> let topicsToCreate = [{ topic: topicNam
Hi Peter,
Thanks for the clarification.
When you hit the "stop" button, AFAIK it does send a SIGTERM, but I don't
think that Streams automatically registers a shutdown hook. In our examples
and demos, we register a shutdown hook "outside" of streams (right next to
the code that calls start() ).
U
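John's pattern above can be sketched with plain JDK classes. KafkaStreams itself is stubbed as a FakeStreams class here, since kafka-streams is not assumed on the classpath; the hook registration is the part that lives right next to the code calling start():

```java
import java.util.concurrent.CountDownLatch;

public class ShutdownHookSketch {
    // Stand-in for KafkaStreams; a real app would hold new KafkaStreams(topology, props).
    static class FakeStreams {
        volatile boolean closed = false;
        void start() { /* streams.start() in a real app */ }
        void close() { closed = true; }
    }

    public static void main(String[] args) throws InterruptedException {
        FakeStreams streams = new FakeStreams();
        CountDownLatch latch = new CountDownLatch(1);

        // Registered "outside" of Streams, right next to the code that calls start().
        Thread hook = new Thread(() -> {
            streams.close();
            latch.countDown();
        });
        Runtime.getRuntime().addShutdownHook(hook);

        streams.start();

        // On SIGTERM the JVM would run the hook; here we deregister and run it
        // directly so the sketch terminates on its own.
        Runtime.getRuntime().removeShutdownHook(hook);
        hook.start();
        latch.await();
        System.out.println("closed=" + streams.closed); // prints: closed=true
    }
}
```

The latch is what keeps the main thread alive until the hook has finished closing Streams, so the JVM doesn't exit mid-shutdown.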
Hello,
In my application when I send hundreds of thousands of messages I use the
Metadata in the callback to save the offset of the record for future usage.
However sometimes in something like 1% of the cases the metadata.offset()
returns -1 which makes things hard for me later as I can't find the
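One common reason for the -1 (an assumption here, since the message is truncated) is acks=0: without a broker acknowledgement the producer never learns the offset, and RecordMetadata.hasOffset() returns false for exactly this case. A minimal sketch of guarding the callback, using a stand-in Metadata type since kafka-clients is not assumed on the classpath:

```java
public class OffsetCallbackSketch {
    // Stand-in for org.apache.kafka.clients.producer.RecordMetadata;
    // the real class exposes the same hasOffset()/offset() pair.
    static class Metadata {
        final long offset;
        Metadata(long offset) { this.offset = offset; }
        long offset() { return offset; }
        boolean hasOffset() { return offset >= 0; }
    }

    // What the producer callback could do before persisting the offset.
    static void onCompletion(Metadata metadata) {
        if (metadata.hasOffset()) {
            System.out.println("saved offset " + metadata.offset());
        } else {
            // Offset unknown: e.g. acks=0, or the send failed for this record.
            System.out.println("offset unavailable");
        }
    }

    public static void main(String[] args) {
        onCompletion(new Metadata(42));  // prints: saved offset 42
        onCompletion(new Metadata(-1)); // prints: offset unavailable
    }
}
```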
Hi Ashish,
What's your replica.lag.time.max.ms set to, and do you see any network
issues between brokers?
-Harsha
On Jan 22, 2019, 10:09 PM -0800, Ashish Karalkar wrote:
> Hi All,
> We just upgraded from 0.10.x to 1.1 and enabled rack awareness on an existing
> cluster, which has a
Hi,
When you kerberize Kafka and set zookeeper.set.acl to true, all the
zookeeper nodes created under the zookeeper root will have ACLs that allow only the
Kafka broker's principal. Since all topic creation goes directly to zookeeper, i.e.
the kafka-topics.sh script creates a zookeeper node under /
Hi All,
I once encountered a Kafka OOM, so I added a monitor for Kafka's heap memory.
Here is the current status. I set the Kafka max heap to 96G, and you can see it
changes significantly. Can anyone help point out where the problem is?
Thanks in advance.
kafka version: 1.0.0
VM parameters:
-Xm
One of our internal customers is working on a service that spans around 120
kubernetes pods. Due to design constraints, every one of these pods has a
single kafka consumer, and they're all using the same consumer group id.
Since it's kubernetes, and the service is sized according to volume
through
Hi Marcos,
I think what you need is static membership, which reduces the number of
rebalances required. There is active discussion and work going on for this KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances
-Harsha
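For what it's worth, KIP-345 later shipped in Kafka 2.3 as the group.instance.id consumer config. A sketch of what that could look like for the pods described above; the broker address, group name, and POD_NAME variable are placeholder assumptions:

```java
import java.util.Properties;

public class StaticMembershipSketch {
    static Properties consumerProps(String instanceId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder address
        props.put("group.id", "my-service");           // shared by all pods in the group
        // Static membership: a stable per-pod id (e.g. the pod name) lets a
        // restarted pod rejoin as the same member instead of forcing a rebalance.
        props.put("group.instance.id", instanceId);
        // The broker keeps a static member this long after a disconnect
        // before triggering a rebalance.
        props.put("session.timeout.ms", "30000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps(System.getenv().getOrDefault("POD_NAME", "pod-0"));
        // new KafkaConsumer<>(props) would go here in a real application.
        System.out.println(props.getProperty("group.instance.id"));
    }
}
```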
Hi.
I have something good (and personally mysterious) to report.
We do indeed run 1.1.x in production.
And today when I was almost finished cleaning up my test case for public
display, I had been forced by corp policies to update osx, and suddenly
when I had my test in a "non hacky improvised p
Hi John,
On 1/24/19 3:18 PM, John Roesler wrote:
Hi Peter,
Thanks for the clarification.
When you hit the "stop" button, AFAIK it does send a SIGTERM, but I don't
think that Streams automatically registers a shutdown hook. In our examples
and demos, we register a shutdown hook "outside" of str
Not a problem. Glad that you've not seen it anymore now.
If it occurs again please feel free to reach out to the community again.
Guozhang
On Thu, Jan 24, 2019 at 2:32 PM Niklas Lönn wrote:
> Hi.
>
> I have something good (and personally mysterious) to report.
>
> We do indeed run 1.1.x in pr
Thanks, Sam. The evaluation is ongoing.
On 2019/01/22 09:50:53, Sam Pegler wrote:
> Sounds like you're reaching the limits of what your disks will do either on
> reads or writes. Debug it as you would any other disk based app,
> https://haydenjames.io/linux-server-performance-disk-io-slow
Hi All,
We have a Spring based web app.
We are planning to build an 'Audit Tracking' feature and plan to use Kafka
- as a sink for storing Audit messages (which will then be consumed and
persisted to a common DB).
We are planning to build a simple 'pass-through' REST service which will
take a
I don’t feel it would be a big hit in performance because Kafka works very
fast. I think the speed difference would be negligible. Why are you worried
about stability? I’m just curious because it doesn’t seem like it would be
unstable, but maybe it would be a bit overkill for one app and some de
Hi John,
Haven't been able to reinstate the demo yet, but I have been re-reading
the following scenario of yours
On 1/24/19 11:48 PM, Peter Levart wrote:
Hi John,
On 1/24/19 3:18 PM, John Roesler wrote:
The reason is that, upon restart, the suppression buffer can only
"remember" what