Hi All,
I am connecting to a secured Kafka cluster from Spark. My jaas.conf looks
like the following:
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  keyTab="./user.keytab"
  principal="u...@example.com";
};
export KAFKA_OPTS="-Djava.security.auth.login.config=/home
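For reference, the JAAS file above is only picked up if the JVM is started with -Djava.security.auth.login.config pointing at it (as in the KAFKA_OPTS export), and Krb5LoginModule normally also needs useKeyTab=true before it will read the keyTab option. Below is a minimal consumer sketch of the matching client-side properties, assuming the brokers expose a SASL_PLAINTEXT listener and the Kerberos service principal name is "kafka"; the broker address and topic are placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SecureConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "spark-test-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Kerberos over a SASL_PLAINTEXT listener; use SASL_SSL if the listener is TLS.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        // The JVM must also be started with -Djava.security.auth.login.config=/path/to/jaas.conf.

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("Fetched " + records.count() + " records");
        }
    }
}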
No, only the number of consumers.
Liel Shraga
Does it mean that the number of brokers changes from time to time?
On Mon, Sep 18, 2017 at 11:10 PM, Liel Shraga (lshraga) wrote:
> Hi,
>
> My docker compose file is:
>
> version: '2'
> services:
>   zookeeper:
>     image: wurstmeister/zookeeper
>     ports:
>       - "2181:2181"
>   kafka:
>
Hi,
My docker compose file is:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: lshraga-ubuntu-sp-nac
      KAFKA_ADVERTIS
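For context on KAFKA_ADVERTISED_HOST_NAME: this is the host name the broker hands back to clients in its metadata, so it has to be resolvable from wherever the client runs, not just inside the compose network. A minimal producer sketch against this setup; the host name is taken from the compose file above and the topic name is a placeholder.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ComposeProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Must match the advertised host name and the mapped 9092 port from the compose file.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "lshraga-ubuntu-sp-nac:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "key", "value")); // "test" is a placeholder topic
            producer.flush();
        }
    }
}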
Can you provide your docker file, or compose file?
On Wed, Sep 13, 2017 at 1:55 AM, Liel Shraga (lshraga) wrote:
> Hi,
>
> I didn’t define the partition size. How can I do it with the kafka-clients API?
>
> Thanks,
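If "partition size" in the quoted question means the number of partitions per topic, one way to set it from the kafka-clients API (0.11+) is at topic creation time through AdminClient; a sketch, with broker address, topic name, partition count, and replication factor all as placeholder values.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        AdminClient admin = AdminClient.create(props);
        try {
            // 6 partitions, replication factor 1 -- purely illustrative values.
            NewTopic topic = new NewTopic("my-topic", 6, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        } finally {
            admin.close();
        }
    }
}

(Auto-created topics instead take the broker-side num.partitions default, which is a broker setting rather than a kafka-clients call.)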
We are running into KAFKA-1641 where the log cleaner thread dies with the INFO
message below. Is there a workaround for this issue? We are running Kafka 0.9.0.
2017-09-18 16:12:41,621 INFO kafka.log.LogCleaner: Cleaner 0: Building offset
map for log __consumer_offsets-16 for 2149 segments in offset ra
Kafka Version: 0.10.0.1 / 0.10.2.1
On 2017/9/19 9:44, Zor X.L. wrote:
Hi,
Recently in our experiments, we found that even though no resource usage
reaches 80%, consumers slow down the producer (which we did not
expect), especially when there are no messages in the topic.
*We wonder if we did
Hi,
Recently in our experiments, we found that even though no resource usage
reaches 80%, consumers slow down the producer (which we did not
expect), especially when there are no messages in the topic.
*We wonder if we did something wrong (where?), or whether this is a Kafka
characteristic (expla
Thanks, Guozhang.
On Mon, Sep 18, 2017 at 5:23 PM, Guozhang Wang wrote:
> It is available online now:
> https://www.confluent.io/kafka-summit-sf17/resource/
>
>
> Guozhang
>
> On Tue, Sep 19, 2017 at 8:13 AM, Raghav wrote:
>
> > Hi
> >
> > Just wondering if the videos are available anywhere fro
It is available online now:
https://www.confluent.io/kafka-summit-sf17/resource/
Guozhang
On Tue, Sep 19, 2017 at 8:13 AM, Raghav wrote:
> Hi
>
> > Just wondering if the videos are available anywhere from Kafka Summit 2017
> > to watch?
>
> --
> Raghav
>
--
-- Guozhang
Hi
Just wondering if the videos are available anywhere from Kafka Summit 2017
to watch?
--
Raghav
Thanks, Vito, that worked!
On Sun, Sep 17, 2017 at 9:02 PM, 鄭紹志 wrote:
> Hi, Karan,
>
> It looks like you need to add a property 'value.deserializer' to
> kafka-console-consumer.sh.
>
> For example:
> $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
> kstreams4 --fro
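For reference, the console property mirrors the value.deserializer setting a Java consumer would use. A sketch that assumes the values on the topic are longs (as in the streams word-count example); swap in whatever deserializer actually matches the data written to the topic.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DeserializerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "console-check");
        // String keys and long values are assumptions for this sketch.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());

        try (KafkaConsumer<String, Long> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("kstreams4"));
            for (ConsumerRecord<String, Long> record : consumer.poll(5000)) {
                System.out.println(record.key() + " : " + record.value());
            }
        }
    }
}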
Hi Scott,
There is nothing preventing a replica running a newer version from being in
sync as long as the instructions are followed (i.e.
inter.broker.protocol.version has to be set correctly and, if there's a
message format change, log.message.format.version). That's why I asked
Yogesh for more d
Hi everyone,
We've seen instances where our consumer groups, when running normally, do not
process any messages from some partitions for minutes while other
partitions see regular updates within seconds. In some cases when a
consumer group had a significant lag (hours of messages), some partitio
Hi Hugues.
How 'big' are your transactions? In particular, how many produce records
are in a single transaction? Can you share your actual producer code?
Also, did you try the `kafka-producer-perf-test.sh` tool with a
transactional id and see what the latency is for transactions with that
tool?
Can we get some clarity on this point:
> older version leader is not allowing newer version replicas to be in sync,
> so the data pushed using this older version leader
That is super scary.
What protocol version is the older version leader running?
Would this happen if you are skipping a protocol v
Hi Yogesh,
Can you please clarify what you mean by "observing data loss"?
Ismael
On Mon, Sep 18, 2017 at 5:08 PM, Yogesh Sangvikar <yogesh.sangvi...@gmail.com> wrote:
> Hi Team,
>
> Please help find a resolution for the Kafka rolling upgrade issue below.
>
> Thanks,
>
> Yogesh
>
> On Monday, Sept
Hi Team,
Please help find a resolution for the Kafka rolling upgrade issue below.
Thanks,
Yogesh
On Monday, September 18, 2017 at 9:03:04 PM UTC+5:30, Yogesh Sangvikar wrote:
>
> Hi Team,
>
> Currently, we are using a Confluent 3.0.0 Kafka cluster in our production
> environment. And we are plani
Hi,
I am testing an app with transactions on the producer side of Kafka
(0.11.0.1). I defined the producer config (see below) and added the
necessary calls in the app (#initTransactions, #beginTransaction and
#commitTransaction) around the existing #send.
The problem I am facing is that each t
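For comparison, a minimal transactional producer for 0.11.0.1 using the calls above looks roughly like the sketch below; the broker address, topic, and transactional.id are placeholders, and the error handling follows the usual pattern of aborting on recoverable errors and closing on fatal ones. As noted in the reply above, running `kafka-producer-perf-test.sh` with a transactional id gives a useful latency baseline to compare against.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // A stable transactional.id is required; idempotence goes with it.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-tx-app-1");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<>("tx-topic", Integer.toString(i), "value-" + i));
            }
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
            // Fatal errors: do not abort, just fall through and close the producer.
        } catch (KafkaException e) {
            // Recoverable for this transaction: abort it and retry if desired.
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}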
Understood, but since we haven't updated to use 5.7.3 yet, I think it's
best to test against what is currently deployed.
Thanks.
On Mon, Sep 18, 2017 at 9:56 AM, Ted Yu wrote:
> We're using rocksdb 5.3.6
>
> It would make more sense to perform the next round of experiments using rocksdb
> 5.7.3 whic
We're using rocksdb 5.3.6.
It would make more sense to perform the next round of experiments using rocksdb
5.7.3, which is the latest.
Cheers
On Mon, Sep 18, 2017 at 5:00 AM, Bill Bejeck wrote:
> I'm following up from your other thread as well here. Thanks for the info
> above, that is helpful.
>
> I th
Hi,
I just sent you a follow-up message on the other thread we have going
regarding state store performance.
I guess we can consider this thread closed and we'll continue working on
the State Store thread.
Thanks!
Bill
On Mon, Sep 18, 2017 at 7:27 AM, dev loper wrote:
> Hi Ted, Damian, Bill
I'm following up from your other thread as well here. Thanks for the info
above, that is helpful.
I think the AWS instance type might be a factor here, but let's do some
more homework first.
For a next step, we could enable logging for RocksDB so we can observe the
performance.
Here is some sam
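One way to turn on RocksDB's own logging from a Kafka Streams application (offered as a sketch, not necessarily the sample referred to above) is a custom RocksDBConfigSetter registered under the rocksdb.config.setter streams config; the class name and values here are illustrative.

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.InfoLogLevel;
import org.rocksdb.Options;

// Register with: props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, VerboseRocksDBConfig.class);
public class VerboseRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // Write RocksDB's internal LOG at INFO level and dump its statistics periodically,
        // so compaction and flush behaviour of each state store directory can be inspected.
        options.setInfoLogLevel(InfoLogLevel.INFO_LEVEL);
        options.setStatsDumpPeriodSec(60);
    }
}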
Hi Ted, Damian, Bill & Sabarish,
I would like to thank you all for the help offered in solving this
issue. It seems the persistent store was not scaling out as expected.
After the state store builds up over a period of time, the Kafka Streams
application performs poorly s