s sounds like a very bad option.
>
> Wouldn't it make more sense to have the key.converter and value.converter
> defined at the specific Connector level?
>
> Any other suggestions?
>
--
*Dustin Cote*
Customer Operations Engineer | Confluent
Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
<http://www.confluent.io/blog>
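
For what it's worth, Connect does support per-connector converter overrides
via KIP-75 (0.10.1+), so the worker defaults only apply when a connector does
not set its own. A minimal sketch of a connector config using an override;
the connector class, topic, and URL below are purely illustrative:

  {
    "name": "example-avro-sink",
    "config": {
      "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
      "topics": "example-topic",
      "key.converter": "io.confluent.connect.avro.AvroConverter",
      "key.converter.schema.registry.url": "http://localhost:8081",
      "value.converter": "io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url": "http://localhost:8081"
    }
  }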
is to be expected. Should the compaction remove the
> zero-byte files or would the broker do this? I'm not yet far enough into
> the cleanup code to understand this.
>
> Regards,
> Harald
>
>
> On 05.08.2016 16:23, Dustin Cote wrote:
>
>> Harald,
>>
>
log
>
> Is this expected behavior or is there yet another configuration option
> that defines when these get purged?
>
> Harald.
>
--
Dustin Cote
confluent.io
ializers/KafkaAvroSerializer.java
> >
> which seems to be related closely to Schema registry concept from
> confluent.
>
> Now, I want to save my avro encoded from kafka to parquet on hdfs using
> Avro schema which is located in the classpath, for instance,
> /META-INF/avro/xxx.a
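
Setting the connector question aside, reading an Avro schema that ships on
the classpath is simple enough; a minimal sketch, where the class name and
resource path are hypothetical:

  import java.io.IOException;
  import java.io.InputStream;
  import org.apache.avro.Schema;

  public class ClasspathSchemaLoader {
      // Parses an Avro schema bundled as a classpath resource.
      public static Schema load(String resourcePath) throws IOException {
          try (InputStream in =
                  ClasspathSchemaLoader.class.getResourceAsStream(resourcePath)) {
              if (in == null) {
                  throw new IOException("Schema resource not found: " + resourcePath);
              }
              return new Schema.Parser().parse(in);
          }
      }

      public static void main(String[] args) throws IOException {
          // Hypothetical path; substitute the real schema file.
          Schema schema = load("/META-INF/avro/example.avsc");
          System.out.println(schema.toString(true));
      }
  }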
, 2016 at 3:38 PM, Malcolm, Brian (Centers of Excellence -
Integration) wrote:
> I am using version 0.10.0 of Kafka and the documentation says the Producer
> acks setting can have the values [all, -1, 0, 1].
> What is the difference between the all and -1 settings?
>
>
>
>
--
Dustin Cote
confluent.io
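
In short, all and -1 mean the same thing: the leader waits for the full set
of in-sync replicas to acknowledge the write before responding. A minimal
producer sketch using it; broker address, topic, and values are illustrative:

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class AcksAllExample {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092"); // illustrative
          props.put("acks", "all");                         // "all" is an alias for "-1"
          props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
          props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

          // The send is acknowledged only after all in-sync replicas have it.
          try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
              producer.send(new ProducerRecord<>("example-topic", "key", "value"));
          }
      }
  }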
d topic cannot accept
> messages without a key.
>
> Why does this happen and what’s the solution?
>
>
>
--
Dustin Cote
confluent.io
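
That error typically means the topic is log compacted (cleanup.policy=compact),
and compaction needs a key to know which records supersede which. The fix is to
always produce keyed records; a short sketch with the producer set up as in the
previous example, and with topic, key, and value illustrative:

  // A compacted topic rejects records with a null key, so always supply one.
  producer.send(new ProducerRecord<>("compacted-topic", "user-42", "latest-state"));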
c: d...@kafka.apache.org
> Subject: Re: Kafka HDFS Connector
>
> Hi Dustin,
>
> I am looking for option 1.
>
> Looking at Kafka Connect code, I guess we need to write converter code if
> not available.
>
>
> Thanks in advance.
>
> Regards
> Pari
>
>
> O
t.
>
> Is there any suggestion to prevent data loss in this case?
>
> Note: We have tested the same case in 0.8.2.1 and 0.10 release.
>
>
> Thanks and Regards,
> Madhukar
>
--
Dustin Cote
confluent.io
ary log file (in my case
> /tmp/kafka-logs/test1-0/.log ) and confirmed that the
> messages are not published (the console consumer receives all the
> messages).
>
> Is there an explanation for this behavior?
>
> Best regards,
> Radu
>
--
Dustin Cote
confluent.io
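
For looking inside a segment file like that, the DumpLogSegments tool prints
the records and their offsets; a sketch, where the segment name is illustrative
(real segments are named after their base offset, zero padded):

  bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
    --files /tmp/kafka-logs/test1-0/00000000000000000000.log \
    --print-data-log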
]
> }
> }
>
> Any ideas as to what I am doing wrong?
>
> -Dave
>
>
--
Dustin Cote
confluent.io
ion count? So that partition records can be held in memory
> until they are sent to the replicas? I believe @Ben's kafka setup is such
> that there are thousands of partitions across the topics.
>
>
> On Thu, Jun 9, 2016 at 1:22 PM Dustin Cote wrote:
>
> > @Ben, the big GC st
;s a few pages in the
> confluent
> > docs
> > > > >on JVM tuning iirc. We simply use the G1 and a 4GB Max heap and
> things
> > > > work
> > > > >well (running many thousands of clusters).
> > > > >
> > > > >Thanks
> > > > >Tom Crayford
> > > > >Heroku Kafka
> > > > >
> > > > >On Thursday, 9 June 2016, Lawrence Weikum
> > wrote:
> > > > >
> > > > >> Hello all,
> > > > >>
> > > > >> We’ve been running a benchmark test on a Kafka cluster of ours
> > running
> > > > >> 0.9.0.1 – slamming it with messages to see when/if things might
> > break.
> > > > >> During our test, we caused two brokers to throw OutOfMemory errors
> > > > (looks
> > > > >> like from the Heap) even though each machine still has 43% of the
> > total
> > > > >> memory unused.
> > > > >>
> > > > >> I’m curious what JVM optimizations are recommended for Kafka
> > brokers?
> > > > Or
> > > > >> if there aren’t any that are recommended, what are some
> > optimizations
> > > > >> others are using to keep the brokers running smoothly?
> > > > >>
> > > > >> Best,
> > > > >>
> > > > >> Lawrence Weikum
> > > > >>
> > > > >>
> > > >
> > > >
> >
>
--
Dustin Cote
confluent.io
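
For reference, heap and GC settings are usually applied through the environment
variables the start scripts read; a sketch along the lines Tom describes, with
every value illustrative and worth tuning per workload:

  # Picked up by bin/kafka-server-start.sh via kafka-run-class.sh
  export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
  export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
  bin/kafka-server-start.sh config/server.properties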
Hi Elias,
You'll have to do some rolling restarts, but downtime can be limited.
There are two things you have to consider at a high level:
1) How to migrate zookeeper without downtime
-Starting with a quorum of 3, add two of the new servers to the quorum
bringing it up to 5
-Once everything is in sy
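
A sketch of what the expanded zoo.cfg might look like for that intermediate
step; hostnames are illustrative, every node needs the same server list, and
each new node needs its myid written before it starts:

  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/var/lib/zookeeper
  clientPort=2181
  # original quorum of three
  server.1=zk-old-1:2888:3888
  server.2=zk-old-2:2888:3888
  server.3=zk-old-3:2888:3888
  # two new servers added, bringing the ensemble to five
  server.4=zk-new-1:2888:3888
  server.5=zk-new-2:2888:3888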
--
Dustin Cote
confluent.io
o use in production.
>
> -Dave
>
>
> -Original Message-----
> From: Dustin Cote [mailto:dus...@confluent.io]
> Sent: Thursday, June 02, 2016 9:51 AM
> To: users@kafka.apache.org
> Subject: Re: Changing default logger to RollingFileAppender (KAFKA-2394)
>
> Just
hu, Jun 2, 2016 at 10:38 AM, Tauzell, Dave <
> dave.tauz...@surescripts.com
> > wrote:
>
> > I haven't started using this in production but this is how I will likely
> > setup the logging as it is easier to manage.
> >
> > -Dave
> >
>
ing on the
file name convention and would need to roll back the log4j configuration
should the default change in a later version? What sort of feedback can
those users provide to help us document this the right way?
Thanks,
--
Dustin Cote
confluent.io
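
For anyone who wants size-based rolling now, regardless of what the default
becomes, a sketch of the log4j.properties override; the file path, size cap,
and backup count are illustrative:

  log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
  log4j.appender.kafkaAppender.File=/var/log/kafka/server.log
  log4j.appender.kafkaAppender.MaxFileSize=100MB
  log4j.appender.kafkaAppender.MaxBackupIndex=10
  log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n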
sets? What was the purpose of this folder?
> I look forward to your answers.
> Regards,
> Florin
>
> On Wed, May 11, 2016 at 4:12 PM, Dustin Cote wrote:
>
> > Hi Florin,
> >
> > The new consumer is intended to replace both the high level and simple
> > c
ka store committed offset?
> > > > >
> > > > > is it in zookeeper or kafka broker?
> > > > >
> > > > > Also there is an option to use offset storage outside Kafka; does it
> > > > > mean Kafka will not depend on ZooKeeper for offsets?
> > > > >
> > > > > Thanks,
> > > > > snehalata
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > Radha Krishna, Proddaturi
> > 253-234-5657
> >
>
--
Dustin Cote
confluent.io
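
For the old 0.8.x high-level consumer, where commits land is driven by consumer
properties; a sketch of switching storage to Kafka, with the group id
illustrative and dual.commit.enabled kept on only during migration:

  group.id=example-group
  # commit offsets to the __consumer_offsets topic instead of ZooKeeper
  offsets.storage=kafka
  # during migration, also keep writing offsets to ZooKeeper
  dual.commit.enabled=true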
t;, "LAG", "OWNER"))
>
>
> is there an equivalent command that I could use for the 0.8.2.1 kafka
> version?
>
--
Dustin Cote
confluent.io
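
On 0.8.2.1 the closest equivalent is the ConsumerOffsetChecker tool, which
prints the same lag and owner columns; the group name and ZooKeeper address
here are illustrative:

  bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
    --zookeeper localhost:2181 \
    --group example-group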
_offset
> > >> topic.
> > >>
> > >> On Tue, May 10, 2016 at 7:07 AM Snehalata Nagaje <
> > >> snehalata.nag...@harbingergroup.com> wrote:
> > >>
> > >>>
> > >>>
> > >>> Hi All,
> &g
he list of all the class names I can run through
> >> ./kafka-run-class.sh [class-name] command?
> >>
> >> Thanks,
> >> Mudit
> >
> >
> >
> >
> > --
> > *Christian Posta*
> > twitter: @christianposta
> > http://www.christianposta.com/blog
> > http://fabric8.io
>
>
--
Dustin Cote
confluent.io
; > When we are on kafka 0.8, all the consumer offsets are stored in ZK and
> we
> > can use some ZK browser to see the contents in different ZK paths.
> >
> > On kafka 0.9, when everything moved to internal kafka topics, do we have
> a
> > tool to browse through the contents in those topics?
> >
>
--
Dustin Cote
confluent.io
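
One way to peek at those internal topics on 0.9.x is the console consumer with
the offsets formatter; a sketch, assuming the 0.9/0.10 formatter class name (it
moved in later releases) and an illustrative ZooKeeper address:

  echo "exclude.internal.topics=false" > /tmp/consumer.properties
  bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
    --topic __consumer_offsets --from-beginning \
    --consumer.config /tmp/consumer.properties \
    --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"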
It's really not a good idea to use JDK 1.6 to rebuild the jar file. There
can always be JDK 1.7-only features used in the code that won't
play nicely with JDK 1.6. As Tom mentioned, you should really be looking
at moving off of JDK 1.6 before looking at upgrading anything else in your
e