> kafka.logs.eventdata-0
> Attributes
> Name // Name of partition
> Size // Is this the current number of messages?
Size -> in bytes
> NumberofSegments // Don't know what this is
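If you want to read these values programmatically rather than through JConsole, below is a minimal sketch using the standard javax.management API. The JMX port and the exact ObjectName are assumptions (copy the real bean name from your own JConsole tree); only the attribute names come from the output above.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LogMBeanReader {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint: broker started with com.sun.management.jmxremote.port=9999
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Assumed bean name -- verify it against the JConsole tree shown above
            ObjectName log = new ObjectName("kafka:type=kafka.logs.eventdata-0");
            System.out.println("Name: " + conn.getAttribute(log, "Name"));
            System.out.println("Size (bytes): " + conn.getAttribute(log, "Size"));
            System.out.println("Segments: " + conn.getAttribute(log, "NumberofSegments"));
        } finally {
            connector.close();
        }
    }
}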
Clark and all,
I thought a little bit about the serialization question. Here are the
options I see and the pros and cons I can think of. I'd love to hear
people's preferences if you have a strong one.
One important consideration is that however the producer works will also
need to be how the new
Hey Joe,
Metadata: Yes, this is how it works. You give a URL or a few URLs to
bootstrap from. From then on any metadata change will percolate up to all
producers so you should be able to dynamically change the cluster in any
way without needing to restart or reconfigure the producers. So I think y
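The new producer isn't released yet, but the same bootstrap pattern is already visible in the current 0.8 Java producer, so here is a minimal sketch using that API; the broker host names and topic are placeholders:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
// Bootstrap list only -- it does not have to name every broker in the cluster.
// Full cluster metadata is fetched from these brokers and refreshed on change,
// so partitions can move or brokers can be added without reconfiguring producers.
props.put("metadata.broker.list", "broker1:9092,broker2:9092");
props.put("serializer.class", "kafka.serializer.DefaultEncoder"); // byte[] payloads

Producer<byte[], byte[]> producer =
        new Producer<byte[], byte[]>(new ProducerConfig(props));
producer.send(new KeyedMessage<byte[], byte[]>("eventdata", "hello".getBytes()));
producer.close();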
My 2 cents:
Getting the broker metadata via active brokers is the way to go. It allows one
to dynamically rebalance or introduce a whole new set of servers into a cluster
just by adding them and migrating partitions. We use this to
periodically introduce newer Kafka cluster cloud
Yeah I'll fix that name.
Hmm, yeah, I agree that you often want to be able to delay network
connectivity until you have started everything up. But at the same time I
kind of loathe special init() methods because you always forget to call them
and get one round of errors every time. I wonder if in those
Hey Clark,
- Serialization: Yes, I agree with these, though I don't consider the loss of
generics a big issue. I'll try to summarize what I would consider the best
alternative API with raw byte[] (see the sketch after this list).
- Maven: We had this debate a few months back and the consensus was gradle.
Is there a specific issue
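A minimal sketch of what such a raw byte[] API could look like; the interface and method names are hypothetical placeholders, not the actual proposal (RecordSend and Callback are the types discussed elsewhere in this thread):

public interface ByteProducer {
    // The client serializes keys and values itself, so generic codecs stay
    // in client code instead of being loaded reflectively by the producer.
    RecordSend send(String topic, byte[] key, byte[] value, Callback callback);

    void close();
}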
Jay,
Thanks for the explanation. I didn't realize that the broker list was for
bootstrapping and was not required to be a complete list of all brokers
(although I see now that it's clearly stated in the text description of the
parameter). Nonetheless, does it still make sense to make the config
Does libkafka (C++) allow one to make an async producer? If so, how?

Client *client = new Client("127.0.0.1", 9092);
client->sendProduceRequest(new ProduceRequest(correlationId, clientId,
    requiredAcks, timeout, produceTopicArraySize, produceTopicArray, true));

How would I make that non-blocking?
Roger,
These are good questions.
1. The producer since 0.8 is actually ZooKeeper-free, so this is not new to
this client; it is true for the current client as well. Our experience was
that direct ZooKeeper connections from zillions of producers weren't a good
idea for a number of reasons. Our inten
Andrey,
I think this should perform okay. We already create a number of objects per
message sent; one more shouldn't have too much performance impact if it is
just thousands per second.
-Jay
On Fri, Jan 24, 2014 at 2:28 PM, Andrey Yegorov wrote:
> So for each message that I need to send asynch
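As a rough back-of-envelope supporting the point above (the per-object size is an assumption, not a measurement): an anonymous callback capturing a single object reference is on the order of 32 bytes on a 64-bit JVM, so at 5,000 sends/sec that is roughly 160 KB/sec of short-lived garbage. Young-generation collectors routinely sustain allocation rates of hundreds of MB/sec, so the extra callback allocation is noise by comparison.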
+1 all of Clark's points above.
On Fri, Jan 24, 2014 at 3:30 PM, Clark Breyman wrote:
> Jay - Thanks for the call for comments. Here's some initial input:
>
> - Make message serialization a client responsibility (making all messages
> byte[]). Reflection-based loading makes it harder to use generic codecs
Jay - Thanks for the call for comments. Here's some initial input:
- Make message serialization a client responsibility (making all messages
byte[]). Reflection-based loading makes it harder to use generic codecs
(e.g. Envelope) or to build a codec up programmatically (see the sketch below).
Non-default partitioning should
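To make the generic-codec point concrete: with a byte[]-only producer, an envelope codec can be layered on in client code with full generics and no reflection. All names in this sketch are hypothetical:

// Hypothetical codec abstraction, built up programmatically in client code.
public interface Codec<T> {
    byte[] encode(T value);
}

// Wraps a byte[]-only producer (as sketched earlier) with a typed codec.
public final class EnvelopeProducer<T> {
    private final ByteProducer producer;
    private final Codec<T> codec;

    public EnvelopeProducer(ByteProducer producer, Codec<T> codec) {
        this.producer = producer;
        this.codec = codec;
    }

    public RecordSend send(String topic, byte[] key, T value, Callback callback) {
        return producer.send(topic, key, codec.encode(value), callback);
    }
}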
A couple comments:
1) Why does the config use a broker list instead of discovering the brokers
in ZooKeeper? It doesn't match the HighLevelConsumer API.
2) It looks like broker connections are created on demand. I'm wondering
if sometimes you might want to flush out config or network connectivity
I would like to know what these attributes mean in my Kafka 0.7.2 brokers. I
see the following in JConsole for my eventdata topic:
kafka.logs.eventdata-0
Attributes
Name // Name of partition
Size
So for each message that I need to send asynchronously I have to create a
new instance of the callback and hold on to the message?
This looks nice in theory, but at a few thousand requests/sec this
could use up too much extra memory and push too much to the garbage collector,
especially in case con
If I understand your use case I think usage would be something like

producer.send(message, new Callback() {
    public void onCompletion(RecordSend send) {
        // The callback captures `message`, so the failed payload
        // is still available here for logging and later replay.
        if (send.hasError())
            log.write(message);
    }
});

Reasonable?
In other words you can include references to any variable in the enclosing
scope and use them when the callback fires.
I love the callback in send() but I do not see how it helps in case of an
error.
Imagine the use case: I want to write messages to a log so I can replay
them to Kafka later in case the async send failed.
From a brief look at the API I see that I'll get back a RecordSend object
(which is not true al
As mentioned in a previous email we are working on a re-implementation of
the producer. I would like to use this email thread to discuss the details
of the public API and the configuration. I would love for us to be
incredibly picky about this public api now so it is as good as possible and
we don'
Thank you. It worked.
lCassa
On Wed, Jan 22, 2014 at 7:45 PM, Guozhang Wang wrote:
> Hello,
>
> In your case the key's type is String, not byte array, so you need to
> override the following property:
>
> key.serializer.class -> "kafka.serializer.StringEncoder"
>
> Details of the producer config
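For reference, setting those properties in code looks like the following; the broker address and topic are placeholders:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "localhost:9092");
props.put("serializer.class", "kafka.serializer.DefaultEncoder");    // values are byte[]
props.put("key.serializer.class", "kafka.serializer.StringEncoder"); // keys are Strings

Producer<String, byte[]> producer =
        new Producer<String, byte[]>(new ProducerConfig(props));
producer.send(new KeyedMessage<String, byte[]>("mytopic", "some-key", "hello".getBytes()));
producer.close();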
Thanks for finding this out. We probably should disconnect on any exception.
Could you file a jira and perhaps attach a patch?
Thanks,
Jun
On Fri, Jan 24, 2014 at 6:06 AM, Ahmy Yulrizka wrote:
> Hi,
>
> I think I found the problem.
>
> This is part of the stack trace. First I think there is
Hi,
I think I found the problem.
This is part of the stack trace. First I think there is a connection problem,
and when the connection is restored it gets new information from ZooKeeper:
[2014-01-23 23:24:55,391] INFO Opening socket connection to server
host2.provider.com/2.2.2.2:2181 (org.apache.zookeeper.ClientCnxn)