I think you are saying both, i.e. if you
have committed on a partition it returns you that value but if you haven't
it does a remote lookup?
Correct.
The other argument for making committed batched is that commit() is
batched, so there is symmetry.
position() and seek() are always in memory chan
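The behavior described above (answer from memory if this consumer has committed on the partition, otherwise do a remote lookup) can be sketched roughly as follows. `CommittedOffsetCache` and the plain `String` partition keys are illustrative stand-ins, not the proposed consumer API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative stand-in: partitions are plain Strings here, not TopicPartition.
class CommittedOffsetCache {
    private final Map<String, Long> local = new HashMap<>();
    private final Function<String, Long> remoteLookup; // e.g. an offset fetch request

    CommittedOffsetCache(Function<String, Long> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    // Committing records the offset locally.
    void commit(String tp, long offset) {
        local.put(tp, offset);
    }

    // If we have committed on this partition, answer from memory;
    // otherwise fall back to the remote lookup.
    long committed(String tp) {
        Long cached = local.get(tp);
        return cached != null ? cached : remoteLookup.apply(tp);
    }
}
```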
We ran into this problem, too.
Our problem was that we had set message.max.bytes larger than
replica.fetch.max.bytes.
After we increased replica.fetch.max.bytes, the problem was
solved.
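The fix described above is a broker-side setting. A minimal sketch of the relevant server.properties lines (the values are illustrative; the invariant is that replica.fetch.max.bytes must be at least message.max.bytes):

```properties
# Largest message the broker will accept from producers.
message.max.bytes=10485760
# Must be >= message.max.bytes, otherwise followers cannot fetch
# large messages and replication stalls.
replica.fetch.max.bytes=10485760
```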
Hey Neha,
I actually wasn't proposing the name TopicOffsetPosition, that was just a
typo. I meant TopicPartitionOffset, and I was just referencing what was in
the javadoc. So to restate my proposal without the typo, using just the
existing classes (that naming is a separate question):
long posi
Yes, I meant messages stored in the broker. We have the feeling that some
messages are being lost in the mirrormaker process, and I'd like to make
sure that's not happening. As the deployment is really young, we still
haven't deleted any messages, so the number of messages in FE and BE should
be th
Conceptually, do the position methods only apply to topics you've
subscribed to, or do they apply to all topics in the cluster?
E.g., could I retrieve or set the committed position of any partition?
The positive use case for having access to all partition information would
be to setup an active m
Hi Neha,
6. It seems like #4 can be avoided by using Map<TopicPartition, Long> or a
similar Map as the argument type.
>
> How? lastCommittedOffsets() is independent of positions(). I'm not sure I
> understood your suggestion.
I think of subscription as you're subscribing to a Set of TopicPartitions.
Because the argume
We’ve been using Miniway’s hadoop-consumer in production for over a year
without any problems. It stores offsets in zookeeper rather than HDFS and it
uses the more recent mapreduce api.
https://github.com/miniway/kafka-hadoop-consumer
On Feb 13, 2014, at 11:18 AM, Marcelo Valle wrote:
> Hell
2. It returns a list of results. But how can you use the list? The only way
to use the list is to make a map of tp=>offset and then look up results in
this map (or do a for loop over the list for the partition you want). I
recommend that if this is an in-memory check we just do one at a time. E.g.
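The extra step the list forces on every caller can be shown in a few lines. `OffsetResult` here is a hypothetical stand-in for whatever result type the batched call would return:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical result type standing in for the batched call's return value.
class OffsetResult {
    final String topicPartition;
    final long offset;
    OffsetResult(String topicPartition, long offset) {
        this.topicPartition = topicPartition;
        this.offset = offset;
    }
}

class OffsetResults {
    // Every caller ends up writing this: turn the list into tp => offset
    // before a single partition can be looked up.
    static Map<String, Long> byPartition(List<OffsetResult> results) {
        Map<String, Long> map = new HashMap<>();
        for (OffsetResult r : results) {
            map.put(r.topicPartition, r.offset);
        }
        return map;
    }
}
```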
Yes, I'm interested in the Kafka integration with the Karaf container. I have
found a solution for the Zookeeper server, but cannot see one for the Kafka
broker. Does anybody know about any future plans to implement it?
2014-02-13 22:42 GMT+02:00 Banerjee, Aparup :
> I guess depends on the OSGI conta
Hey guys,
One thing that bugs me is the lack of symmetry in the different position
calls. The way I see it there are two positions we maintain: the fetch
position and the last commit position. There are two things you can do to
these positions: get the current value or change the current value.
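That two-by-two structure (two positions, a get and a set for each) can be sketched like this; the class and the `String` partition keys are illustrative, not proposed signatures:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the symmetry: fetch position and commit position,
// each with a getter and a setter.
class PartitionPositions {
    private final Map<String, Long> fetchPosition = new HashMap<>();
    private final Map<String, Long> commitPosition = new HashMap<>();

    // Fetch position: get and set.
    long position(String tp) { return fetchPosition.getOrDefault(tp, 0L); }
    void seek(String tp, long offset) { fetchPosition.put(tp, offset); }

    // Commit position: get and set.
    long committed(String tp) { return commitPosition.getOrDefault(tp, -1L); }
    void commit(String tp, long offset) { commitPosition.put(tp, offset); }
}
```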
I guess it depends on the OSGi container you have in mind (Virgo, Karaf and
such). I haven't seen any Kafka - OSGi integration yet.
Aparup
On 2/13/14 11:26 AM, "Roman Minko" wrote:
>Hello!
>
>Is there any ability to deploy Kafka into OSGI environment and start Kafka
>Broker in OSGI container?
>
>
Hello!
Is there any ability to deploy Kafka into OSGI environment and start Kafka
Broker in OSGI container?
Thank you,
Roman.
Pradeep -
Thanks for your detailed comments.
1.
subscribe(String topic, int... partitions) and unsubscribe(String topic,
int... partitions) should be subscribe(TopicPartition...
topicPartitions) and unsubscribe(TopicPartition...
topicPartitions)
I think that is reasonable. Overall, I'm in
Hi,
Camus does support raw text messages. If I remember correctly, you just
need to provide your own record decoder and record writer.
We are using Camus to consume messages from Kafka and store them to S3 and
it works quite well.
Maxime
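A rough sketch of what such a decoder can look like. `MessageDecoder` and `CamusWrapper` below are simplified stand-ins for the real classes in com.linkedin.camus.coders, so the exact signatures will differ in your Camus version; check its sources before copying this:

```java
// Simplified stand-ins for the Camus classes (the real ones live in
// com.linkedin.camus.coders and have richer signatures).
class CamusWrapper<R> {
    private final R record;
    CamusWrapper(R record) { this.record = record; }
    R getRecord() { return record; }
}

abstract class MessageDecoder<M, R> {
    abstract CamusWrapper<R> decode(M message);
}

// A raw-text decoder: treat each Kafka message payload as a UTF-8 string.
class StringMessageDecoder extends MessageDecoder<byte[], String> {
    @Override
    CamusWrapper<String> decode(byte[] payload) {
        return new CamusWrapper<>(
            new String(payload, java.nio.charset.StandardCharsets.UTF_8));
    }
}
```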
On Thu, Feb 13, 2014 at 8:18 AM, Marcelo Valle wrote:
Hello,
I've been studying different options for consuming messages from Kafka into
Hadoop (HDFS) and found these three:
Linkedin Camus - https://github.com/linkedin/camus
kafka-hadoop-loader - https://github.com/michal-harish/kafka-hadoop-loader
hadoop-consumer -
https://github.com/apache/kafka/tree/0.8
Yes, I have set it to trace, as that will help me debug things.
Have you found any issue in it?
On Feb 13, 2014 9:12 PM, "Jun Rao" wrote:
> The request log is in trace. Take a look at the log4j property file in
> config/.
>
> Thanks,
>
> Jun
>
>
> On Wed, Feb 12, 2014 at 9:45 PM, Arjun wrote:
Do you mean the # messages stored in the broker (since old messages are
deleted automatically)? We don't have that jmx right now, but it's probably
useful to add one. We do have a jmx on # unconsumed messages for a given
consumer.
Thanks,
Jun
On Thu, Feb 13, 2014 at 5:18 AM, Tomas Nunez wrote:
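Until such a jmx metric exists, the 0.8 ConsumerOffsetChecker tool reports per-partition lag (unconsumed messages) as the log-end offset minus the consumer's committed offset. The command is shown as a comment because it needs a live cluster; the numbers below are made-up examples of its output:

```shell
# With a running cluster (group name and zookeeper host are examples):
#   bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
#       --zkconnect localhost:2181 --group my-group
#
# Lag for one partition is simply:
log_end_offset=45300   # example value from the tool's "logSize" column
committed_offset=44100 # example value from the tool's "Offset" column
echo "unconsumed messages: $((log_end_offset - committed_offset))"
```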
Kat,
The answers to all your questions are similar. Basically, either we will
discover there is a real bug in Kafka or there is an issue in the
application (mis-config, network issue, etc). So far, given the info you
provided, it's a bit hard to draw any conclusion. For issues like this, I
will typ
The request log is in trace. Take a look at the log4j property file in
config/.
Thanks,
Jun
On Wed, Feb 12, 2014 at 9:45 PM, Arjun wrote:
> I am sorry, but I could not locate the offset in the request log. I have
> turned on debug for the logs but couldn't. Do you know any pattern with
> whi
Hi,
I looked around the Internet to find a service for Kafka 8, but I found nothing...
Does anyone know where I could find a service for Kafka 8?
*I'm not sure if calling it a "service" is the right term, but I mean that
I could use the command "service kafka8 start" or something like that!
Thank fo
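As far as I know the 0.8 distribution ships no init script, but a minimal wrapper is easy to sketch. Everything here is an assumption: the /opt/kafka install location, the pid/log file paths, and the script name; adjust them for your system:

```shell
#!/bin/sh
# Hypothetical /etc/init.d/kafka8 sketch; adjust KAFKA_HOME and the paths below.
KAFKA_HOME="${KAFKA_HOME:-/opt/kafka}"
PIDFILE="${PIDFILE:-/tmp/kafka8.pid}"
LOGFILE="${LOGFILE:-/tmp/kafka8.log}"

start() {
    # Launch the broker in the background and remember its pid.
    nohup "$KAFKA_HOME/bin/kafka-server-start.sh" \
        "$KAFKA_HOME/config/server.properties" > "$LOGFILE" 2>&1 &
    echo $! > "$PIDFILE"
}

stop() {
    # Kill the remembered pid, if any, and clean up.
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
}

case "${1:-}" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}" ;;
esac
```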
Or even simpler: is there any way to know the number of messages in a
topic on a server? Is that "kafka.logs.topic -> CurrentOffset"? If not,
what does that "CurrentOffset" mean?
On Tue, Feb 11, 2014 at 7:08 PM, Tomas Nunez wrote:
> Yes, but this counts how many messages went in that topic, r
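One way to count the messages currently in a topic on 0.8 is kafka.tools.GetOffsetShell: the latest offset (--time -1) minus the earliest (--time -2), summed over partitions, is the number of messages still in the log. The commands are commented out because they need a live broker; the numbers are made-up examples:

```shell
# With a running broker (host and topic are examples):
#   bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
#       --broker-list localhost:9092 --topic mytopic --time -1   # latest
#   bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
#       --broker-list localhost:9092 --topic mytopic --time -2   # earliest
#
# For one partition, messages still on disk:
latest=50000    # example output for --time -1
earliest=1200   # example output for --time -2
echo "messages in partition: $((latest - earliest))"
```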
We have almost a week before we get the results back. Meanwhile, I would like
to ask a few questions, or reiterate those questions again.
1) What if we find the same messages again this time, i.e., they were
successfully consumed? What should we conclude?
2) What if we don't find the same messa
The set up has 3 kafka brokers running on 3 different ec2 nodes (I added
the host.name in broker config). I am not committing any messages in my
consumer. The consumer is exact replica of the ConsumerGroupExample.
The test machine (10.60.15.123) is outside these systems security group
but has