Thanks again for your response, and sorry for the delay; I was out for a
few days. I just wrapped up a bunch of performance testing and will be
sending a link out to the user group here shortly.
Bert
On Thu, Apr 17, 2014 at 8:00 PM, Bello, Bob wrote:
> Some feedback from your feedback.
>
> BE
The checkpointed offset should be the offset of the next message to be
consumed. So, you should save mAndM.nextOffset().
Thanks,
Jun
On Tue, Apr 22, 2014 at 8:57 PM, Seshadri, Balaji
wrote:
> Yes I disabled it.
>
> My doubt is: should the path have the offset to be consumed, or the last
> consumed offset?
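Jun's advice above can be sketched with a minimal stand-in class (not the real kafka.message.MessageAndOffset; the names here are illustrative): checkpointing nextOffset means a restart resumes at the first unconsumed message, so the last processed message is not reprocessed.

```java
// Minimal stand-in for kafka.message.MessageAndOffset, for illustration only.
class MessageAndOffset {
    final String message;
    final long offset;

    MessageAndOffset(String message, long offset) {
        this.message = message;
        this.offset = offset;
    }

    // Offset of the next message in the log (matches the 0.8 definition:
    // offset + 1).
    long nextOffset() {
        return offset + 1;
    }
}

public class CheckpointDemo {
    public static void main(String[] args) {
        MessageAndOffset mAndM = new MessageAndOffset("payload", 41L);

        // Checkpoint the offset of the NEXT message to consume, not the one
        // just processed; a restart then resumes after offset 41.
        long checkpoint = mAndM.nextOffset();
        System.out.println(checkpoint); // 42
    }
}
```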
I'm not seeing that API in the Java MessageAndMeta; is this part of ConsumerIterator?
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Wednesday, April 23, 2014 8:47 AM
To: users@kafka.apache.org
Subject: Re: commitOffsets by partition 0.8-beta
The checkpointed offset should
Take a look at the example in
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
Thanks,
Jun
On Wed, Apr 23, 2014 at 9:01 AM, Seshadri, Balaji
wrote:
> I'm not seeing that API in java MessageAndMeta,is this part of
> ConsumerIterator.
>
>
> -Original Message
Hey guys, I am dealing with a similar problem and hoping a similar solution can
help me out. Looking for some feedback on this problem and potential solution:
So I am reading messages from a topic, then doing some synchronous processing
in the thread handling the consumer iterator, THEN issuing
Can I just increment by one? I see that's what the code does:
case class MessageAndOffset(message: Message, offset: Long) {
  /**
   * Compute the offset of the next message in the log
   */
  def nextOffset: Long = offset + 1
}
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Does checkpointing avoid duplicate updates to ZooKeeper?
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Wednesday, April 23, 2014 10:14 AM
To: users@kafka.apache.org
Subject: Re: commitOffsets by partition 0.8-beta
Take a look at the example in
https://cwiki.apache.org/c
Hi,
What is the open source Kafka spout for Storm that people are using?
What is the experience with
https://github.com/nathanmarz/storm-contrib/tree/master/storm-kafka ?
regards
--
Folks have been using this spout
https://github.com/wurstmeister/storm-kafka-0.8-plus which has now been
merged into the storm incubating project
https://github.com/apache/incubator-storm/tree/master/external/storm-kafka
/***
Joe Stein
Founder, Principal C
Good to know. I was looking at the former - will check out the latter.
On Wed, Apr 23, 2014 at 4:41 PM, Joe Stein wrote:
> Folks have been using this spout
> https://github.com/wurstmeister/storm-kafka-0.8-plus which has now been
> merged into the storm incubating project
> https://github.com/a
Thanks. I was concerned that the other one was dated.
regards
On Wed, Apr 23, 2014 at 4:41 PM, Joe Stein wrote:
> Folks have been using this spout
> https://github.com/wurstmeister/storm-kafka-0.8-plus which has now been
> merged into the storm incubating project
> https://github.com/apache/in
Hey all, do you guys have any plans to enhance the topic reassignment tool?
I've had to grow my cluster a couple of times, and getting an existing topic's
partition replicas balanced out to the new brokers really sucks. I have to
describe the topic, awk the output to get it in the JSON format, then
ma
I don't think we are doing anything at the moment to improve this, but I
think we agree it could be improved. We would welcome a contribution here.
The best way to proceed would just be to write up a wiki or JIRA on how you
think it should work and kick off a discussion.
-Jay
On Wed, Apr 23, 201
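The describe-and-awk step mentioned above is essentially hand-building the --topics-to-move-json-file input for kafka-reassign-partitions.sh --generate. A sketch of assembling that JSON (format as I understand the 0.8.1 tooling; the helper class and topic names are hypothetical):

```java
import java.util.List;

// Builds the --topics-to-move-json-file input for
// kafka-reassign-partitions.sh --generate.
public class TopicsToMoveJson {
    static String build(List<String> topics) {
        StringBuilder sb = new StringBuilder("{\"topics\": [");
        for (int i = 0; i < topics.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append("{\"topic\": \"").append(topics.get(i)).append("\"}");
        }
        return sb.append("], \"version\": 1}").toString();
    }

    public static void main(String[] args) {
        System.out.println(build(List.of("clicks", "impressions")));
        // {"topics": [{"topic": "clicks"}, {"topic": "impressions"}], "version": 1}
    }
}
```

The tool then proposes an assignment across the brokers given with --broker-list, which can be fed back via --execute.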
We've got a script that rebalances the partitions in a cluster based on
their size (to try and keep the data size across the brokers even), which
works very well for moving partitions onto new cluster members. The only
problem with it is that it's got a couple hooks into our internal
configuration
Hi Jun,
I just mimicked the commitOffsets in our app.
public void commitOffset(DESMetadata metaData) {
  log.info("Update offsets only for ->" + metaData.toString());
  String key = metaData.getTopic() + "/" + metaData.getPartitionNumber();
  Long nextOffset
Yes, that should work.
Thanks,
Jun
On Wed, Apr 23, 2014 at 7:51 PM, Seshadri, Balaji
wrote:
> HI Jun,
>
> I just mimicked the commitOffsets in our app.
>
> public void commitOffset(DESMetadata metaData) {
> log.info("Update offsets only for ->"+
> metaData.toString());
>
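For what it's worth, the per-partition commit that Jun confirms above can be sketched end to end with an in-memory map standing in for ZooKeeper (the class and method names here are hypothetical, not the DESMetadata code above); in the real app the write would target the standard 0.8 path /consumers/&lt;group&gt;/offsets/&lt;topic&gt;/&lt;partition&gt;.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of committing a single partition's offset. A HashMap stands in
// for ZooKeeper; the key mirrors the topic/partition path segment used in
// the message above, and the stored value is the NEXT offset to consume.
public class PartitionOffsetCommitter {
    private final Map<String, Long> offsetStore = new HashMap<>();

    public void commitOffset(String topic, int partition, long nextOffset) {
        String key = topic + "/" + partition;
        offsetStore.put(key, nextOffset);
    }

    public Long committedOffset(String topic, int partition) {
        return offsetStore.get(topic + "/" + partition);
    }

    public static void main(String[] args) {
        PartitionOffsetCommitter committer = new PartitionOffsetCommitter();
        committer.commitOffset("orders", 3, 128L);
        System.out.println(committer.committedOffset("orders", 3)); // 128
    }
}
```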
Hi Team,
I found a strange phenomenon with the ISR list in my Kafka cluster.
When I use the tool that Kafka provides to get the topic information, it
shows the ISR list as follows, which seems ok:
[irt...@xseed171.kdev bin]$ ./kafka-topics.sh --describe --zookeeper
10.96.250.215:10013,1