I've often wondered what it would take to overwrite a specific offset in a
partition (it could be very useful for transaction rollbacks, message
deletions, etc.). Unfortunately, I don't think that
feature currently exists.
--Tom
On Mon, Apr 1, 2013 at 11:14 PM, Pankaj Misra wrote
Hi,
Is it possible for a consumer to trigger a message dequeue (message deletion)
on the broker after consuming the message?
Thanks & Regards
Pankaj Misra
-----Original Message-----
From: Jason Rosenberg [mailto:j...@squareup.com]
Sent: Tuesday, April 02, 2013 1:30 AM
To: users@kafka.apache.org
If you use the high-level consumer, when the consumer gets an
OffsetOutOfRangeException, it will automatically reset to either the
smallest or the largest available offset, depending on the config
autooffset.reset.
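The reset behavior described above can be sketched in pure Python. The function name and signature here are illustrative only, not the actual client API:

```python
def reset_offset(requested, smallest, largest, policy):
    """Mimic the high-level consumer's handling of an
    OffsetOutOfRangeException: if the requested offset falls outside
    the log's valid range, jump to one end of it, depending on the
    reset policy ("smallest" or "largest")."""
    if smallest <= requested <= largest:
        return requested   # offset is still valid, no reset needed
    if policy == "smallest":
        return smallest    # replay from the oldest retained message
    if policy == "largest":
        return largest     # skip ahead to the newest message
    raise ValueError("unknown reset policy: %r" % policy)
```

For example, `reset_offset(5, 100, 200, "smallest")` returns 100, i.e. the consumer replays from the oldest message still on the broker.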
Thanks,
Jun
On Mon, Apr 1, 2013 at 8:44 PM, Yonghui Zhao wrote:
> A little questio
Sean,
A broker can have multiple topics, each with multiple partitions. Each
partition can be consumed by multiple consumers.
Our high level consumer API doesn't allow you to specify a starting offset.
SimpleConsumer does. If you use SimpleConsumer, you are responsible for
managing the consumption
Currently, a message doesn't have a timestamp header. So you have to encode
the timestamp in the message payload.
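Encoding the timestamp in the payload, as suggested above, can be done with a pair of wrap/unwrap helpers. The 8-byte big-endian millisecond prefix and the helper names are my own convention, not a Kafka format:

```python
import struct
import time

def wrap(payload: bytes) -> bytes:
    # Prefix the payload with a big-endian u64 of milliseconds since epoch.
    return struct.pack(">Q", int(time.time() * 1000)) + payload

def unwrap(message: bytes):
    # Split the message back into (timestamp_ms, original payload).
    (ts_ms,) = struct.unpack(">Q", message[:8])
    return ts_ms, message[8:]
```

The consumer can then read the first 8 bytes of each message and decide, based on the message's age, whether to process it or skip ahead.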
Thanks,
Jun
On Mon, Apr 1, 2013 at 7:10 PM, Suyog Rao wrote:
> Is there a way to get the timestamp for a Kafka message entry on the
> server by the consumer? What I would like to d
Got it,thanks
2013/4/2 Philip O'Toole
> On Mon, Apr 1, 2013 at 8:44 PM, Yonghui Zhao
> wrote:
>
> > A little question: when the old message log is deleted, the start
> > offset changes. If a simple consumer seeks an offset less than the
> > start offset, what will happen? Read all messa
On Mon, Apr 1, 2013 at 8:44 PM, Yonghui Zhao wrote:
> A little question: when the old message log is deleted, the start offset
> changes. If a simple consumer seeks an offset less than the start
> offset, what will happen? Read all messages from the start?
>
Standard Kafka Consumer code will raise an OffsetOutOfRangeException.
A little question: when the old message log is deleted, the start offset
changes. If a simple consumer seeks an offset less than the start
offset, what will happen? Read all messages from the start?
2013/4/2 Jason Rosenberg
> Essentially,
>
> There's a configuration property: log.retention.hours
Hello,
Hopefully I'm sending this question to the right place. I'm currently
trying to set up a consumer that will allow me to specify the offset,
partition, and consumer group ID all at the same time. This obviously
causes a dilemma since neither the low-level nor high-level consumer APIs
seem to
Hello all,
I've been working on updating (i.e., rewriting) my Python client for the
impending 0.8 release. Check it out:
https://github.com/mumrah/kafka-python/tree/0.8
In addition to 0.8 protocol support, the new client supports the
broker-aware request routing required for replication in 0.8
Is there a way to get the timestamp for a Kafka message entry on the server by
the consumer? What I would like to do is check whether an offset is of a
particular age before pulling offset + bytes to my consumer. Otherwise I would
like the messages to continue to be queued in Kafka until th
Hello,
IIRC, no, it does not. Where I work, one team had the same issue and built
some custom code to handle the encryption and decryption of messages at the
producer and consumer. However, you have to take key management into account
as once a message is written to the broker, you can't decr
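The pattern described above (encrypt at the producer, decrypt at the consumer, the broker stores only ciphertext) can be sketched with a toy keystream cipher. This construction is for illustration only and is not safe for production use, since reusing a key across messages with no nonce leaks information; a real deployment should use a vetted cipher such as AES-GCM:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key by hashing a counter.
    # Toy construction for illustration only -- not cryptographically safe.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream; applying the same
    # operation again recovers the plaintext.
    return bytes(a ^ b for a, b in
                 zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are their own inverse
```

The key-management caveat in the email is the hard part: the broker only ever sees ciphertext, so losing the key makes the retained log unreadable.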
Hi,
Does Kafka support encrypting data at rest? During my AJUG presentation
someone asked whether the files could be encrypted to address PII needs.
Thanks,
Chris
Now with Video:
http://vimeo.com/63040812
(I did notice that I misspoke about reading from replicas, sorry).
On Wed, Mar 20, 2013 at 8:11 AM, Chris Curtin wrote:
> Hi,
>
> It went really well last night. Lots of good questions. Here are the
> slides, and hopefully the video will be up in a fe
Essentially,
There's a configuration property: log.retention.hours
This determines the minimum time a message will remain available on the
broker. The default is 7 days.
The kafka broker doesn't keep track of whether the message has been
consumed or not (or how many times it has been consumed).
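For reference, the retention window is set in the broker's server.properties; the value below is the 7-day default mentioned above:

```properties
# Keep log segments for 7 days before they become eligible for deletion,
# regardless of whether any consumer has read them.
log.retention.hours=168
```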
On Mon, Apr 1, 2013 at 12:27 PM, Ankit Jain wrote:
> Hi All,
>
> Once a message is consumed by a consumer, we want it to be deleted from the
> message broker as well.
>
Kafka doesn't work this way. Read the design doc -- it's well written, and
should be read by anyone working with Kafka.
http://kafka.a
Hi All,
Once a message is consumed by a consumer, we want it to be deleted from the
message broker as well.
I was exploring the Kafka configuration, but I am not sure which setting
would fulfill my need.
Guys, I need your help.
Thanks ..
--
Thanks,
Ankit Jain
Any interest in open sourcing it now and picking up contributors?
On Mon, Apr 1, 2013 at 7:54 AM, Jun Rao wrote:
> At LinkedIn, we are also building a native C producer client for 0.8. It
> uses non-blocking socket I/O to improve the producer throughput. We plan to
> open source it when it's fu
At LinkedIn, we are also building a native C producer client for 0.8. It
uses non-blocking socket I/O to improve the producer throughput. We plan to
open source it when it's fully tested, hopefully in a couple of months.
Thanks,
Jun
On Wed, Mar 27, 2013 at 8:48 PM, Matthew Stump wrote:
> Howdy,