Storing offsets in Kafka avoids the ZooKeeper writes needed for offset
syncing, so I think it's the preferred option whenever possible.
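On 0.8.2 the high-level consumer can be pointed at Kafka-based offset storage with consumer properties along these lines (a sketch; dual commit is only needed while migrating existing ZooKeeper offsets):

```properties
# commit offsets to Kafka instead of ZooKeeper (0.8.2+)
offsets.storage=kafka
# while migrating, also commit to ZooKeeper so old consumers can still read them
dual.commit.enabled=true
```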
On Thursday, October 29, 2015, Mayuresh Gharat
wrote:
> You can use either of them.
> The new kafka consumer (still under development) does not store offsets in
> zookeeper.
Currently there is no partition-level subscription within a topic. So when
you subscribe to both topics, your consumer will get data from every
partition in those two topics; I don't think you would be missing anything.
On Fri, Oct 23, 2015 at 11:35 AM, Fajar Maulana Firdaus
wrote:
> I am using k
Hi,
There are two properties that determine when a replica falls out of
sync: look for replica.lag.time.max.ms and replica.lag.max.messages. If a
replica goes out of sync, it will not even be considered for leader
election.
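For reference, these are broker-side settings; the values below are just illustrative (close to the 0.8 defaults):

```properties
# a follower not caught up within this many ms falls out of the ISR
replica.lag.time.max.ms=10000
# a follower more than this many messages behind falls out of the ISR
replica.lag.max.messages=4000
```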
Regards,
Pushkar
On Wed, Sep 30, 2015 at 9:44 AM, Shushant Arora
Hi,
While benchmarking the new producer, with the consumer syncing offsets in
ZooKeeper, I see that the MessageInRate reported in BrokerTopicMetrics is
not the same as the rate at which I am able to publish and consume messages.
Using my own custom reporter I can see the rate at which messages are
published and consumed.
2) You need to implement MetricReporter and provide that implementation's
class name in the producer-side configuration metric.reporters.
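For completeness, a sketch of the wiring (the class name here is a made-up placeholder; it would have to implement the reporter interface shipped in the clients jar):

```properties
# comma-separated list of reporter implementations loaded by the producer
metric.reporters=com.example.MyMetricsReporter
```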
On Mon, Jul 13, 2015 at 9:08 PM, Swati Suman
wrote:
> Hi Team,
> We are using Kafka 0.8.2
>
> I have two questions:
>
> 1)Is there any Java Api in Kafka that gi
> Guozhang
>
> On Thu, May 14, 2015 at 1:40 PM, pushkar priyadarshi <
> priyadarshi.push...@gmail.com> wrote:
>
> > Hi,
> >
> > The documentation for new producer allows passing ack=2(or any other
> > numeric value) but when i actually pass anything other than
Hi,
The documentation for the new producer allows passing ack=2 (or any other
numeric value), but when I actually pass anything other than 0, 1, or -1,
I see the following warning in the broker log:
Client producer-1 from /X.x.x.x:50105 sent a produce request with
request.required.acks of 2, which is now deprecated
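For what it's worth, the new producer only accepts the three values named in the warning; a sketch of the relevant producer property:

```properties
# new producer: only 0 (fire-and-forget), 1 (leader ack), -1 (full ISR ack) are valid
acks=-1
```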
, Apr 21, 2015 at 3:07 PM, pushkar priyadarshi <
priyadarshi.push...@gmail.com> wrote:
> I Get warnings in server log saying "No checkpointed highwatermark is
> found for partition" in server.log when trying to create a new topic.
>
> What does this mean?Though this is
I get warnings saying "No checkpointed highwatermark is found
for partition" in server.log when trying to create a new topic.
What does this mean? Though this is only a warning, I was curious to know
whether it implies any potential problem.
Thanks And Regards,
Pushkar
To my knowledge, if you are using 0.8.2.1, which is the latest stable
release, you can sync your consumer offsets in Kafka itself instead of
ZooKeeper, which further brings down the write load on the ZooKeeper nodes.
Regards,
Pushkar
On Tue, Apr 21, 2015 at 1:13 PM, Jiangjie Qin
wrote:
> 2 partitions should be OK.
>
> On 4/21/15,
So in 0.8.2.0/0.8.2.1 the high-level consumer cannot make use of offset
syncing in Kafka?
On Wed, Apr 1, 2015 at 12:51 PM, Jiangjie Qin
wrote:
> Yes, KafkaConsumer in 0.8.2 is still in development. You probably still
> want to use ZookeeperConsumerConnector for now.
>
> On 4/1/15, 9:28 AM, "Mark Za
Hi,
I remember that some time back people were asked not to upgrade to 0.8.2.
I wanted to know whether the issues behind that advice are resolved now,
and whether it is safe to migrate to 0.8.2.
Thanks And Regards,
Pushkar
I have been using Kafka for quite some time now and would really be
interested in contributing to this awesome code base.
Regards,
Pushkar
On Thu, Jul 17, 2014 at 7:17 AM, Joe Stein wrote:
> ./gradlew scaladoc
>
> Builds the scala doc, perhaps we can start to publish this again with the
> next r
t get
affected, as if the consumer lags too far behind, it will result in disk
seeks while consuming the older messages.
On Sun, Jun 15, 2014 at 8:16 PM, pushkar priyadarshi <
priyadarshi.push...@gmail.com> wrote:
> what throughput are you getting from your kafka cluster alone?Storm
>
What throughput are you getting from your Kafka cluster alone? Storm
throughput can depend on what processing you are actually doing inside it,
so you must look at each component, starting with Kafka first.
Regards,
Pushkar
On Sat, Jun 14, 2014 at 8:44 PM, Shaikh Ahmed wrote:
> Hi,
>
> Dai
Setting the config is the way to use async. It throws an exception when it
is unable to send a message.
On Sun, Jun 8, 2014 at 12:46 PM, Achanta Vamsi Subhash <
achanta.va...@flipkart.com> wrote:
> - Is setting type in config of the producer to sync the way?
> - Is the exception thrown a Runtime Except
Hello Damien,
I'm also using the same thing for pushing to Graphite (forked from the
Ganglia one), but I don't see default JVM parameters like OS metrics being
pushed to Graphite. Have you checked your version? Are you able to push
these metrics as well?
On Thu, May 22, 2014 at 8:02 PM, Jun Rao wrote:
> Thanks
tem.currentTimeMillis() - start)
> > + "to produce " + eventsNum + "messages");
> > producer.close();
> > }
> > }
> >
> > public class Serializer {
> > public static byte[] serialize(Object obj) throws IOException {
> > ByteArrayO
You can send the byte[] that you get from your own serializer through
Kafka. On the receiving side you can deserialize from the byte[] and read
back your object. For this to work you will have to
supply serializer.class=kafka.serializer.DefaultEncoder in the properties.
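As a concrete sketch of the round-trip using plain JDK serialization (the class and method names here are just for illustration; the resulting byte[] is what you would hand to Kafka, since DefaultEncoder passes bytes through unchanged):

```java
import java.io.*;

// Round-trip helper: serialize any Serializable object to byte[] and back.
public class SerdeSketch {
    public static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);   // standard JDK object serialization
        }
        return bos.toByteArray();   // bytes you would pass to the producer
    }

    public static Object deserialize(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject(); // bytes you would read on the consumer side
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = serialize("hello kafka");
        System.out.println((String) deserialize(payload)); // prints "hello kafka"
    }
}
```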
On Tue, May 20, 2014 at 4:2
You can use kafka-list-topic.sh to find out whether the leader for a
particular topic is available; -1 in the leader column might indicate trouble.
On Fri, Apr 25, 2014 at 6:34 AM, Guozhang Wang wrote:
> Could you double check if the topic LOGFILE04 is already created on the
> servers?
>
> Guozhang
>
>
> On
I was trying to understand why, when we have subscribe, poll is a separate
API. Why can't we pass a callback to subscribe itself?
On Mon, Apr 7, 2014 at 9:51 PM, Neha Narkhede wrote:
> Hi,
>
> I'm looking for people to review the new consumers APIs. Patch is posted at
> https://issues.apache.org/
I have been using the one from here:
https://github.com/whisklabs/puppet-kafka
but I had to fix a few small problems; for example, when it starts Kafka as
an upstart service it does not provide a log path, so the Kafka logs never
appear, since as a service there is no default terminal.
Thanks for sharing. Will start usi
I don't think there is any direct high-level API equivalent to this. Every
time you read messages using the high-level API, your offset gets synced in
ZooKeeper. The auto offset setting is for cases where the last read offset
has, for example, been purged, and rather than getting an exception you
want to just fall back to eit
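The setting being described is the 0.8 high-level consumer's auto.offset.reset; a sketch:

```properties
# what to do when there is no valid offset (e.g. it was purged):
# "smallest" replays from the earliest available message, "largest" skips to the newest
auto.offset.reset=smallest
```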
What is the most appropriate design for using the Kafka producer from a
performance viewpoint? I had a few in mind.
1. Since a single Kafka producer object has synchronization, using a single
producer object from multiple threads might not be efficient, so one way
would be to use multiple Kafka producers from
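A minimal sketch of the multiple-producer idea: a small pool with round-robin checkout, so threads don't all contend on one instance. Everything here is hypothetical illustration; ProducerStub stands in for a real KafkaProducer.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerPoolSketch {
    // Stand-in for a real producer object (hypothetical).
    static class ProducerStub {
        final int id;
        ProducerStub(int id) { this.id = id; }
        void send(String msg) { /* a real producer would send here */ }
    }

    private final ProducerStub[] pool;
    private final AtomicInteger next = new AtomicInteger();

    ProducerPoolSketch(int size) {
        pool = new ProducerStub[size];
        for (int i = 0; i < size; i++) pool[i] = new ProducerStub(i);
    }

    // Hand each caller one of the pooled producers, round-robin.
    ProducerStub get() {
        return pool[Math.floorMod(next.getAndIncrement(), pool.length)];
    }

    public static void main(String[] args) {
        ProducerPoolSketch pool = new ProducerPoolSketch(3);
        for (int i = 0; i < 6; i++) {
            // ids cycle 0,1,2,0,1,2: callers are spread across instances
            System.out.println("caller " + i + " -> producer " + pool.get().id);
        }
    }
}
```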
I have got it working with Graphite. Let me know if you still need any help
with this.
Regards,
Pushkar
On Sun, Feb 9, 2014 at 10:40 PM, Jun Rao wrote:
> You will need the metrics-graphite jar (
> http://mvnrepository.com/artifact/com.yammer.metrics/metrics-graphite)
>
> Thanks,
>
> Jun
>
>
Thanks Jason.
On Thu, Jan 2, 2014 at 7:04 PM, Jason Rosenberg wrote:
> Hi Pushkar,
>
> We've been using zk 3.4.5 for several months now, without any
> problems, in production.
>
> Jason
>
> On Thu, Jan 2, 2014 at 1:15 AM, pushkar priyadarshi
> wrote:
>
Hi,
I am starting a fresh deployment of Kafka + ZooKeeper. Looking at the
ZooKeeper releases, I find 3.4.5 old and stable enough. Has anyone used it
in production before?
The Kafka ops wiki page says the LinkedIn deployment still uses 3.3.4. Is
there any specific reason for that?
Thanks And Regards,
Pushkar
> > Arjun Narasimha Kota
> >
> >
> > On Thursday 19 December 2013 05:33 PM, pushkar priyadarshi wrote:
> >
> >> 1.When you start producing : at this time if any of your supplied broker
> >> is
> >> alive system will continue to work.
> >>
sday 19 December 2013 05:33 PM, pushkar priyadarshi wrote:
>
>> 1.When you start producing : at this time if any of your supplied broker
>> is
>> alive system will continue to work.
>> 2.Broker going down and coming up with new IP : producer API refreshes
>> metadata
1. When you start producing: at this time, if any of your supplied brokers
is alive, the system will continue to work.
2. A broker going down and coming up with a new IP: the producer API
refreshes metadata information on failures (configurable), so it should be
able to detect new brokers.
But I don't think it's p
Yes, it worked, thanks.
Regards,
Pushkar
On Thu, Dec 19, 2013 at 10:49 AM, Jun Rao wrote:
> Could you try just excluding Annotation_2.8.scala?
>
> Thanks,
>
> Jun
>
>
> On Wed, Dec 18, 2013 at 8:56 PM, pushkar priyadarshi <
> priyadarshi.push...@gmail.com>
On Wed, Dec 18, 2013 at 12:16 AM, pushkar priyadarshi <
> priyadarshi.push...@gmail.com> wrote:
>
> > While doing dev setup as described in
> > https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
> >
> > im getting following build errors.
> >
>
ar...@gmail.com> wrote:
> >
> > > Hi pushkar,
> > >
> > > I tried with configuring "message.send.max.retries" to 10. Default
> value
> > > for this is 3.
> > >
> > > But still facing data loss.
> > >
> >
You can try setting a higher value for "message.send.max.retries" in the
producer config.
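A sketch of the relevant old-producer (0.8) retry settings; the values are illustrative:

```properties
# number of times to retry a failed send before giving up (default is 3)
message.send.max.retries=10
# wait between retries so a new leader has time to be elected
retry.backoff.ms=100
```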
Regards,
Pushkar
On Wed, Dec 18, 2013 at 5:34 PM, Hanish Bansal <
hanish.bansal.agar...@gmail.com> wrote:
> Hi All,
>
> We are having kafka cluster of 2 nodes. (using 0.8.0 final release)
> Replication Factor:
I see many tools mentioned for perf testing here:
https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing
Of all of these, which already exist in the 0.8 release?
E.g., I was not able to find jmx-dump.sh, the R script, etc. anywhere.
On Wed, Dec 18, 2013 at 11:01 AM, pushkar priyadarshi
While doing the dev setup as described in
https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
I'm getting the following build errors:
immutable is already defined as class immutable Annotations_2.9+.scala
/KafkaEclipse/core/src/main/scala/kafka/utils line 38 Scala Problem
threadsafe is alre
Thanks, Jun.
On Wed, Dec 18, 2013 at 10:47 AM, Jun Rao wrote:
> You can run kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh.
>
> Thanks,
>
> Jun
>
>
> On Tue, Dec 17, 2013 at 8:44 PM, pushkar priyadarshi <
> priyadarshi.push...@gmail.com> wrote
I am not able to find run-simulator.sh in 0.8, even after building perf. If
this tool has been deprecated, what other alternatives are available now
for perf testing?
Regards,
Pushkar