> > the broker.id needs to be set.
> >
> > Do we have any standard/script/preferred way of doing this, or anything
> > suggested by Kafka experts, if we are not using EBS?
> >
> > Thanks and Regards,
> > Srinivas
> >
> > On Thu, Nov 15, 2018
broker is failed, the new
> > > broker/instance spun up in AWS gets assigned a new broker.id. The issue
> > > is, with this approach, we need to re-assign the topics/replications on
> > > to the new
> >
r a transition period.
>
> Is it possible to use 2 different ports for the same protocol (PLAINTEXT)
> in the broker configuration? Can I simply put 2 connection strings in the
> *listeners* config?
>
> Thank you!
> Dan
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
tch for partitions
> [apptivodb8-campaign-tracker-email-0] to broker xx.xx.xx.xx:9092 (id: 2
> rack: null)
> DEBUG Fetcher:180 - Sending fetch for partitions
> [apptivodb5-campaign-tracker-email-1] to broker xx.xx.xx.xx:9092 (id: 2
> rack: null)
>
>
>
> On Mon, May 14, 2018 a
analyzed in the log file; debug logs are taking more space, so I'm
> > having a disk space issue.
> >
> > I'm using *log4j.properties* for managing the logs. Now I want to remove
> > the DEBUG logs from my log file.
> >
> > Anyone, please guide me to
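Not from the original thread, but a minimal sketch of the usual fix: raise the logger level so DEBUG output is dropped. In log4j.properties this is typically a one-line change (for example setting the root logger or the kafka loggers to INFO); if the configuration belongs to your own application, the same thing can be done programmatically with the log4j 1.x API that Kafka-era applications commonly used. The class name below is an illustrative assumption:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class QuietKafkaLogs {
        public static void main(String[] args) {
            // Raise the root logger from DEBUG to INFO so debug lines stop filling the disk
            Logger.getRootLogger().setLevel(Level.INFO);
            // Optionally silence only the Kafka packages and keep the rest of the app at DEBUG
            Logger.getLogger("org.apache.kafka").setLevel(Level.WARN);
            Logger.getLogger("kafka").setLevel(Level.WARN);
        }
    }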
Has anybody else run into this problem and found a good solution? I'm
> > interested to hear any other solutions for tearing down and rebuilding
> SSL
> > connections on the fly.
> >
> >
> > Thanks,
> > Alex
>
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
We're using 0.9 (CDH) and consumer offsets are stored within Kafka. What
> > is the preferred way to get consumer offsets from code or a script for
> > monitoring? Is there any sample code/script to do so?
> >
> > Thanks,
> > Sunil Parmar
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
> > > beyond the mailing list: he's getting close to 1,000 upvotes and 100
> > > helpful flags on SO for answering almost all questions about Kafka
> > > Streams.
> > >
> > > Thank you for your contribution and welcome to Apache Kafka, Matthias!
> > >
> > >
> > >
> > > Guozhang, on behalf of the Apache Kafka PMC
> >
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
5:9094,INTERNAL_SASL://10.1.1.5:9095
On Fri, Nov 10, 2017 at 7:10 PM, Thomas Stringer
wrote:
> Yep I'm familiar with that. Just curious where it's documented that, for
> instance, the CLIENT listener is for client connections.
>
> On Fri, Nov 10, 2017, 12:08 PM Kaufman Ng wrote:
EXT, SASL/SSL, etc. I see the encryption part of the
> documentation, but is it just inferred what these listeners apply to?
>
> Thank you in advance!
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
> time in LogManager are static:
> https://github.com/apache/kafka/blob/0.10.0/core/src/
> main/scala/kafka/server/KafkaServer.scala#L597-L620
>
> Kafka version: kafka_2.11-0.10.0.1
>
> Thanks
> --
> haitao.yao
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
Regards.
>
> On Tue, Jul 25, 2017 at 9:50 AM, Kaufman Ng wrote:
>
>> Confluent Schema Registry is available in the DC/OS Universe, see here
>> for the package definitions
>> https://github.com/mesosphere/universe/tree/dcd777a7e429678fd74fc7306945cdd27bda3b
> Debasish Ghosh
> http://manning.com/ghosh2
> http://manning.com/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
,
> > I have created a topic with 500 partitions in a 3-node
> > cluster with replication factor 3. The Kafka version is 0.11. I executed the
> > lsof command and it lists more than 1 lakh (100,000) open files. Why are
> > there so many open files, and how can I reduce the count?
> >
> >
> > If the latter is true, then is it correct to assume that encryption will
> > take place using SSL if a client authenticates using a Kerberos ticket so
> > long as they have a trust store configured?
> >
> > Thank you.
> >
> > Waleed
> >
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
github projects like kalinka (
> https://github.com/dcsolutions/kalinka) and have also seen that I could do
> it with Apache Camel.
>
> I would like to ask about any experience or advice you can share on
> bridging between ActiveMQ and Kafka.
>
> Thanks in advance,
> David.
>
It would not be possible from a technical point of view...
>
> Cheers
> Nico
>
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
require its own specific way of keeping track of offsets.
On Sat, Mar 4, 2017 at 1:23 AM, VIVEK KUMAR MISHRA 13BIT0066 <
vivekkumar.mishra2...@vit.ac.in> wrote:
> Hi All,
>
> I want to create my own Kafka connector which will connect to multiple data
> sources.
> Could anyone please help
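Not part of the original reply, but a rough skeleton of what a custom source connector looks like, assuming the Kafka Connect API (org.apache.kafka.connect.*). The class names, config key and target topic are made up for illustration; real code would open connections to the data sources in start() and read from them in poll():

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.Task;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.source.SourceConnector;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    public class MultiSourceConnector extends SourceConnector {
        private Map<String, String> props;

        @Override public String version() { return "0.1"; }
        @Override public void start(Map<String, String> props) { this.props = props; }
        @Override public void stop() { }
        @Override public Class<? extends Task> taskClass() { return MultiSourceTask.class; }

        @Override
        public List<Map<String, String>> taskConfigs(int maxTasks) {
            // Simplified: every task gets the same config; real code would split
            // the configured data sources across the tasks
            List<Map<String, String>> configs = new ArrayList<>();
            for (int i = 0; i < maxTasks; i++) configs.add(props);
            return configs;
        }

        @Override
        public ConfigDef config() {
            return new ConfigDef().define("sources", ConfigDef.Type.LIST,
                    ConfigDef.Importance.HIGH, "Data sources this connector should poll");
        }

        public static class MultiSourceTask extends SourceTask {
            @Override public String version() { return "0.1"; }
            @Override public void start(Map<String, String> props) { /* connect to the sources */ }
            @Override public void stop() { }

            @Override
            public List<SourceRecord> poll() throws InterruptedException {
                Thread.sleep(1000);                                  // placeholder for a real read
                SourceRecord record = new SourceRecord(
                        Collections.singletonMap("source", "demo"),  // source partition
                        Collections.singletonMap("position", 0L),    // source offset
                        "my-topic", Schema.STRING_SCHEMA, "hello");
                return Collections.singletonList(record);
            }
        }
    }

The built jar then goes on the Connect worker's classpath, and the connector is started with a config that names this class in connector.class.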
/KafkaProducer.html
On Wed, Mar 1, 2017 at 3:14 AM, Yuanjia wrote:
> Hi all,
> When will the messages be sent in Kafka 0.10.0? If I use KafkaProducer.send
> to send one message, the message isn't sent immediately unless I invoke
> flush or close.
>
> Thanks.
>
>
>
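Not from the thread itself, but a small sketch of the behaviour being asked about: send() only enqueues the record in the producer's buffer, and the background I/O thread ships it when a batch fills up, linger.ms expires, or flush()/close() is called. The broker address and topic are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerFlushExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("linger.ms", "5");   // wait up to 5 ms for a batch to fill before sending

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // buffered, not yet on the wire
            producer.flush();   // blocks until every buffered record has completed
            producer.close();   // also flushes before shutting down
        }
    }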
opicCommand.scala:53)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
>
>
>
>
>
> **
>
> *Regards,*
> *Laxmi Narayan Patel*
> *MCA NIT Durgapur (2011-2014)*
> *Mob:-9741292048,8345847473*
>
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
technical judgment, high-quality work and willingness
> > to contribute where needed to make Apache Kafka awesome.
> >
> > Thank you for your contributions, Grant :)
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gw
a port forwarding from this configured
> port to the actual broker port within the container so that the broker
> itself can also find itself, right?
>
> thanks.
> regards, aki
>
--
Kaufman Ng
Solutions Architect | Confluent
+1 646 961 8063 | @kaufmanng
www.confluent.io
messages using
> script "kafka-console-producer.sh"? Thanks!
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
>
--
Kaufman Ng
Solutions Architect | Confluent
+1 646 961 8063 | @kaufmanng
www.confluent.io
processing on the message
> >> >>> available in the source topic, and I merge both topics.
> >> >>> That is:
> >> >>>
> >> >>> builder.stream(sourceTopic).to(targetTopic)
> >> >>>
> >> >>>
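As a hedged illustration (written against the current Kafka Streams API rather than whichever version the thread used), merging two input topics into one output topic looks roughly like this; the application id and topic names are placeholders:

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class MergeTopics {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "merge-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                      org.apache.kafka.common.serialization.Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                      org.apache.kafka.common.serialization.Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> source = builder.stream("sourceTopic");
            KStream<String, String> other  = builder.stream("otherTopic");
            source.merge(other).to("targetTopic");   // both inputs end up in the target topic

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }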
r.subscribe(Arrays.asList(topicName));
> //..
>
> -
>
> when the application has run once, I can get the consumer offset with
> *kafka-run-class kafka.tools.ConsumerOffsetChecker*,
> Thanks
> >
> > On Mon, Sep 19, 2016 at 8:18 AM, Vadim Keylis
> wrote:
> >
> >> Good morning. Which benchmarking tools should we use to compare the
> >> performance of the 0.8 and 0.10 versions? Which metrics should we monitor?
> >>
> >> Thanks in advance,
> >> Vadim
> >>
>
>
--
Kaufman Ng
Solutions Architect | Confluent
+1 646 961 8063 | @kaufmanng
www.confluent.io
m
> Sent: 2016-06-02 09:19
> To: users
> Subject: Kafka forum register
> Hello,
>
> My project is using Kafka, and I want to register a user in the forum. What
> should I do?
>
>
>
> Tong SS
>
--
Kaufman Ng | Solutions Architect | Confluent
kauf...@confluent.io | +1 646 961 8063
> This only happens when the topic does not exist. When we restart the
> failing consumer, it can then connect to the topic correctly and consume it.
> How can this error be prevented?
>
> Best regards
>
> Patrick
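One hedged way to avoid the race (not from the thread): either enable auto.create.topics.enable on the brokers, or have a small bootstrap step create the topic before any consumer subscribes. A sketch with the Java AdminClient (available since Kafka 0.11); the broker address, topic name, partition count and replication factor are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class EnsureTopicExists {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Create the topic only if it is not already there
                if (!admin.listTopics().names().get().contains("my-topic")) {
                    NewTopic topic = new NewTopic("my-topic", 3, (short) 1); // partitions, RF
                    admin.createTopics(Collections.singletonList(topic)).all().get();
                }
            }
        }
    }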
--
Kaufman Ng | Solutions Architect | Confluent
kauf...@confluent.io | +1 646 961 8063