You can use the getOffsetBefore api. Take a look at
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
Thanks,
Jun
On Thu, Mar 6, 2014 at 3:09 PM, Hema Bhatia wrote:
> Hi,
>
> Is there a way to get the start offset of messages present at the current
> point in the Kafka broker, given the topic and partition?
Hi,
Is there a way to get the start offset of messages present at the current point
in the Kafka broker, given the topic and partition? I am using the high-level
consumer, and I want to replay messages from a given offset. I am using
replication factor 1 and a single partition.
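The getOffsetsBefore lookup that Jun points to in the reply above returns the starting offset of the newest log segment whose timestamp is at or before a requested time. As a rough, self-contained illustration of that lookup semantics only (a toy model, not the real client; the class name and sample data are hypothetical, and the actual API lives in kafka.javaapi.consumer.SimpleConsumer):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of the getOffsetsBefore lookup: given (timestamp -> first offset
// in segment) entries, return the starting offset of the newest segment whose
// timestamp is at or before the requested time. Hypothetical, for illustration.
public class OffsetLookup {
    private final NavigableMap<Long, Long> segmentStartByTime = new TreeMap<>();

    public void addSegment(long lastModifiedMs, long firstOffset) {
        segmentStartByTime.put(lastModifiedMs, firstOffset);
    }

    /** Offset to replay from, for messages at or before timeMs; -1 if none. */
    public long offsetBefore(long timeMs) {
        Map.Entry<Long, Long> entry = segmentStartByTime.floorEntry(timeMs);
        return entry == null ? -1L : entry.getValue();
    }

    public static void main(String[] args) {
        OffsetLookup lookup = new OffsetLookup();
        lookup.addSegment(1000L, 0L);    // segment starting at offset 0
        lookup.addSegment(2000L, 500L);  // segment starting at offset 500
        System.out.println(lookup.offsetBefore(1500L)); // prints 0
        System.out.println(lookup.offsetBefore(2500L)); // prints 500
    }
}
```

Replaying from a timestamp then just means seeking the consumer to the returned offset.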
The key actually may be used for log compaction: can you read this and
let us know if it makes sense?
http://kafka.apache.org/081/documentation.html#compaction
If you don't want your messages to be compacted, you can explicitly
specify a different key (called partitionKey) in the message. That
wil
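The compaction behaviour described in the linked documentation boils down to "keep only the latest value per key". A minimal sketch of that retention rule (a simulation of the semantics only, not Kafka's actual log cleaner; class and data names are made up):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of log compaction: for each key, only the most recent value
// survives. This simulates the retention rule, not Kafka's real cleaner.
public class CompactionSketch {
    public static List<String> compact(List<String[]> log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] record : log) {
            String key = record[0], value = record[1];
            latest.remove(key);      // re-inserting moves the key to the tail,
            latest.put(key, value);  // mirroring "latest record wins"
        }
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, String> e : latest.entrySet()) {
            out.add(e.getKey() + "=" + e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> log = List.of(
            new String[]{"user1", "a"},
            new String[]{"user2", "b"},
            new String[]{"user1", "c"}); // supersedes user1=a
        System.out.println(compact(log)); // prints [user2=b, user1=c]
    }
}
```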
Now I understand. The key in the messages will no longer be used after the
partition number is specified in the produce request.
Thanks!
Cheers,
Churu
On Mar 6, 2014, at 10:41 AM, Joel Koshy wrote:
> It is done by the producer - it calls the partitioner before creating
> the producer-request.
It is done by the producer - it calls the partitioner before creating
the producer-request.
On Thu, Mar 06, 2014 at 10:17:22AM -0800, Churu Tang wrote:
> Thanks for the reply! If the broker does not make the decision, then where
> and how is the key used to compute the partition number?
>
> On
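Joel's point above, that partitioning happens in the producer before the request is built, can be sketched roughly like this (a simplified stand-in for the producer-side step in Kafka 0.8, where the default behaviour hashes the key when no explicit partition is given; class and method names are illustrative, not the real client API):

```java
// Simplified stand-in for producer-side partitioning in Kafka 0.8: if the
// message carries an explicit partition it is used directly; otherwise the
// key is hashed. The broker never recomputes this -- it just appends to the
// partition named in the produce request. Names are illustrative only.
public class PartitionerSketch {
    static int partitionFor(String key, Integer explicitPartition, int numPartitions) {
        if (explicitPartition != null) {
            return explicitPartition;  // produce request pins the partition
        }
        // key-hash routing, mirroring a modulo-style default partitioner
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("user-42", null, 4);
        int p2 = partitionFor("user-42", null, 4);
        System.out.println(p1 == p2);                // true: same key, same partition
        System.out.println(partitionFor("x", 3, 4)); // 3: explicit partition wins
    }
}
```

This is why, once the producer has resolved a partition number into the request, the key no longer influences routing on the broker side.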
Well, what I'm saying is that the message publisher doesn't know what
consumers are interested in: sometimes they're interested in the type of
action executed (an event message that carries info about which command was
executed), and sometimes just in the state change (which state fields are
updated).
Eg. "act
Thanks for the reply! If the broker does not make the decision, then where
and how is the key used to compute the partition number?
On Mar 5, 2014, at 5:42 PM, Joel Koshy wrote:
>> I have 2 questions about the partition number and key.
>> 1. The produceRequest will explicitly include a partition number
I've seen this behavior when the broker was not functional. Basically, what
you see is that the console producer appears to have sent some messages, but
these messages may not have reached the server. Due to the bad state of the
server, the consumer cannot get these messages and also runs into issues
wh
Ok. This is fixed in the 0.8.1 release, which is being voted on now.
Thanks,
Jun
On Thu, Mar 6, 2014 at 9:34 AM, David Morales de Frías <
dmora...@paradigmatecnologico.com> wrote:
> 0.8, thanks.
>
>
> 2014-03-06 18:27 GMT+01:00 Jun Rao :
>
> > Which version of Kafka are you using?
> >
> > Thanks,
0.8, thanks.
2014-03-06 18:27 GMT+01:00 Jun Rao :
> Which version of Kafka are you using?
>
> Thanks,
>
> Jun
>
>
> On Thu, Mar 6, 2014 at 2:10 AM, David Morales de Frías <
> dmora...@paradigmatecnologico.com> wrote:
>
> > Hi there,
> >
> > If I start a consumer on a non-existent topic (auto-create true) before
> > the producer, the consumer never gets the messages.
Which version of Kafka are you using?
Thanks,
Jun
On Thu, Mar 6, 2014 at 2:10 AM, David Morales de Frías <
dmora...@paradigmatecnologico.com> wrote:
> Hi there,
>
> If I start a consumer on a non-existent topic (auto-create true) before the
> producer, the consumer never gets the messages.
>
>
I think I understand what you are saying. I think you are saying that
perhaps you could break up a given problem into a kind of "system state"
and the actual data rows? This is not something we have had a need to do...
-Jay
On Thu, Mar 6, 2014 at 12:26 AM, Vjeran Marcinko <
vjeran.marci...@email
If you really don't mind some messages being lost during failover, your
simplest option would be to just restart consumers at the latest offset in the
new AZ. Or, if you don't mind messages being duplicated, rewind to an earlier
time t as explained by Jun and Neha.
Another thought: you might be
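For the "just restart at the latest offset" option mentioned above, the 0.8 high-level consumer exposes this through its auto.offset.reset property, which applies whenever no committed offset exists for the group (e.g. a fresh group id in the new AZ). A sketch of the relevant consumer config (the group id and ZooKeeper address shown are placeholders):

```properties
# 0.8 high-level consumer: where to start when no committed offset exists.
# "largest" skips to the log end (losing in-between messages);
# "smallest" rewinds to the beginning (reprocessing duplicates).
auto.offset.reset=largest
group.id=my-new-az-group    # hypothetical group id
zookeeper.connect=zk1:2181  # hypothetical ZooKeeper address
```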
You can certainly have several consumers consuming from the same partition:
just give each a different consumer group ID, and then all the messages from
the partition will be delivered to all of the consumers.
If you want each message to only be processed by one of the consumers, you can
drop t
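The two delivery modes described above (broadcast across consumer groups, queueing within a group) can be simulated like this (a toy model of the group semantics only, not real client code; member selection here is a simple offset round-robin, whereas real Kafka assigns whole partitions to members):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of consumer-group delivery: every group sees every message
// (broadcast across groups), but inside a group each message goes to exactly
// one member. A simulation only -- names and scheme are illustrative.
public class GroupDeliverySketch {
    /** Returns map of "group/member" -> messages that member receives. */
    static Map<String, List<String>> deliver(List<String> messages,
                                             Map<String, Integer> groupSizes) {
        Map<String, List<String>> received = new TreeMap<>();
        for (Map.Entry<String, Integer> g : groupSizes.entrySet()) {
            String group = g.getKey();
            int members = g.getValue();
            for (int offset = 0; offset < messages.size(); offset++) {
                String member = group + "/" + (offset % members);
                received.computeIfAbsent(member, k -> new ArrayList<>())
                        .add(messages.get(offset));
            }
        }
        return received;
    }

    public static void main(String[] args) {
        Map<String, List<String>> out = deliver(
            List.of("m0", "m1", "m2"),
            Map.of("groupA", 1, "groupB", 2));
        // groupA's single member gets all three messages; groupB's two
        // members split them between themselves.
        System.out.println(out);
    }
}
```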
Hi there,
If I start a consumer on a non-existent topic (auto-create true) before the
producer, the consumer never gets the messages.
These are the steps:
1) kafka-console-consumer --topic newTopic (it doesn't exist)
2) kafka-console-producer --topic newTopic
3) Send some messages
4) I can see t
Thanx Jay,
Somewhat related to log compaction
Did you have a need at LinkedIn where an application should publish 2 messages
on separate topics for each action executed: one to an "event data" topic
signaling the executed action, and another one to a "state change"
topic which ultimately is g