Thanks Jun, makes sense.
On Feb 8, 2013 4:00 PM, "Jun Rao" wrote:
> That's right. If you are partitioning by key, that means you insist that a
> message has to go to a certain partition, whether it's available or not.
> So, if a partition is not available, we will drop the messages for that
> partition.
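The behaviour Jun describes can be illustrated with a toy sketch (the function names here are mine, not the real 0.8 producer API; a stable checksum stands in for whatever hash the actual partitioner uses):

```python
import zlib

def choose_partition(key, num_partitions):
    # Deterministic key -> partition mapping (hash mod N). Because the
    # mapping is fixed by the key, a keyed message cannot be rerouted.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

def try_send(key, available_partitions, num_partitions):
    # Sketch of the drop behaviour: the key pins the message to one
    # partition, so if that partition is down the message is dropped.
    p = choose_partition(key, num_partitions)
    if p not in available_partitions:
        return None  # dropped: a keyed message can't go elsewhere
    return p
```

With a non-keyed (random or round-robin) send, the producer is free to pick any live partition instead, which is why only keyed sends hit this drop case.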
On 1/31/13 3:30 PM, Marc Labbe wrote:
Hi,
I am fairly new to Kafka and Scala. I am trying to understand the consumer
re-design changes, proposed and implemented for 0.8 and after, which will
affect other language implementations. There are documentation pages on
the wiki and JIRA issues, but I sti
We are in final testing of Kafka, and so far the fail-over tests have been
pretty encouraging. If we kill (-9) one of two Kafka brokers with replication
factor=2, we see a flurry of activity as the producer fails and retries its
writes (we use a bulk, synchronous send of 1000 messages at a time,
One of our consumers keeps getting an invalid message size exception. I'm
pretty sure that we don't have a message size this big (1.7G). We have two
other consumer groups consuming messages from the same Kafka instance
happily over the last few days.
Since we keep the logs around for a fixed inter
In testing our 0.8 cluster, we started by just using the sample
server.properties file that ships with 0.8 and tweaking it. The replication
factor property did not have an example entry in the file, so we didn't
include it. Naturally, the cluster did not do replication.
After sending some data thr
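For reference, the knob missing from the sample file is the broker-level default applied when topics are auto-created; this is a sketch assuming 0.8-era property names (verify against your release):

```properties
# server.properties: replication factor used for auto-created topics
default.replication.factor=2
```

Topics created explicitly with the admin tools take their replication factor from the creation command instead, so both paths need checking.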
Bob,
In 0.8, if you send a set of messages in sync mode, the producer will throw
an exception if at least one message can't be sent to the broker after
all retries. The client won't know which messages were sent successfully and
which were not. We do plan to improve the producer API after 0.8 t
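The all-or-nothing error reporting Jun describes can be modeled with a toy batch sender (names are mine, not the real producer API): the caller gets one exception for the whole batch and cannot tell which messages made it.

```python
class ProducerSendError(Exception):
    """Raised when at least one message in a sync batch fails after retries."""

def send_batch(messages, broker_send, retries=3):
    # broker_send(msg) -> bool is a stand-in for the real network send.
    failed = False
    for msg in messages:
        ok = False
        for _ in range(retries):
            if broker_send(msg):
                ok = True
                break
        if not ok:
            failed = True  # keep going; later messages may still succeed
    if failed:
        # The caller learns only that the batch failed, not which
        # individual messages were persisted by the broker.
        raise ProducerSendError("one or more messages failed after retries")
```

This is why callers that need exactly-once-style accounting in 0.8 typically make messages idempotent and resend the whole batch on failure.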
Bob,
We do plan to support changing replication factors online in the future.
This will be a post 0.8 feature though.
Thanks,
Jun
On Mon, Feb 11, 2013 at 9:32 AM, Bob Jervis wrote:
> In testing our 0.8 cluster, we started by just using the sample
> server.properties file that ships with 0.8 a
Another way is to figure out a valid offset close to the current offset and
reset the offset in ZK. You can use the tool DumpLogSegment to print out
valid offsets in a log file.
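On an 0.8-style install, the tool Jun mentions is invoked roughly like this (class name and flags taken from 0.8, where it is `kafka.tools.DumpLogSegments`; they may differ on 0.6/0.7, and the log path below is hypothetical):

```shell
# Print valid message offsets from a topic's log segment file
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/mytopic-0/00000000000000000000.log
```

In that era the high-level consumer kept its offsets in ZooKeeper under `/consumers/<group>/offsets/<topic>/<partition>`, which is the node you would edit to reset the offset.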
0.6 is pretty old though. I recommend that you upgrade to 0.7.
Thanks,
Jun
On Mon, Feb 11, 2013 at 9:31 AM, Manish Kh
Howdy,
I just pushed the initial version of a new Ruby client that implements the
new 0.8 wire protocol and includes a producer which uses the topic metadata
API to distribute messages across a cluster.
https://github.com/bpot/poseidon
It's still very alpha, but I hope to put it through its pace