It seems like the client is sending the new OffsetFetchRequest introduced
in 0.8.1. The key for the OffsetFetchRequest is 9. You might want to
upgrade your kafka broker to the latest trunk and see if that works.
Thanks,
Neha
On Tue, Jan 28, 2014 at 8:50 PM, Eric Rini wrote:
> So I am using thi
I don't think so. I forgot to include the ifconfig output. Actually, the
public IP is not one of the IPs configured on the Ethernet interfaces.
Only the local IP is configured on eth0.
Is there any solution to this?
ifconfig output:
eth0 Link encap:Ethernet HWaddr 22:00:0A:C7:1F:57
So I am using this https://github.com/SOHU-Co/kafka-node client library
(possibly the only v0.8 node.js library that supports a high level consumer
and has some basic docs). When it connects to the broker, an error like
this appears in the console.
[2014-01-28 23:42:30,423] ERROR Closing socket fo
Could it be a port conflict?
Thanks,
Jun
On Tue, Jan 28, 2014 at 5:20 PM, Balasubramanian Jayaraman (Contingent) <
balasubramanian.jayara...@autodesk.com> wrote:
> Jun,
>
> Thanks for your help.
> I get the following exception :
> kafka.common.KafkaException: Socket server failed to bind to
>
If compression is turned on, this applies to the size of the compressed
message and the producer knows the size before it writes the compressed
message on the wire.
Thanks,
Neha
On Jan 28, 2014 7:42 AM, "Philip O'Toole" wrote:
> http://kafka.apache.org/07/configuration.html
>
> Hello -- I can lo
Hey Neha,
Can you elaborate on why you prefer using Java's Future? The downside in my
mind is the use of the checked InterruptedException and ExecutionException.
ExecutionException is arguable, but forcing you to catch
InterruptedException, often in code that can't be interrupted, seems
perverse.
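To make that concrete, here is a minimal sketch of what every call site
ends up doing when get() carries those checked exceptions (a plain JDK
executor stands in for a future-returning send(); names are illustrative):

    import java.util.concurrent.*;

    public class CheckedFutureDemo {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            // Stand-in for a future-returning send(): both exceptions on
            // get() are checked, so every caller must handle them even
            // when interruption is meaningless for that code path.
            Future<Long> future = pool.submit(new Callable<Long>() {
                public Long call() { return 42L; }
            });
            try {
                System.out.println("offset: " + future.get());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag
            } catch (ExecutionException e) {
                throw new RuntimeException(e.getCause()); // unwrap the real error
            } finally {
                pool.shutdown();
            }
        }
    }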
Here are more thoughts on the public APIs -
- I suggest we use Java's Future instead of a custom Future, especially
since it is part of the public API
- Serialization: I like the simplicity of the producer APIs with the
absence of serialization, where we just deal with byte arrays for keys and
values
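For illustration only, a rough sketch of the kind of surface that implies
(these names are hypothetical, not the proposed API):

    import java.io.Closeable;
    import java.util.concurrent.Future;

    // Hypothetical sketch: callers hand over raw bytes and do their own
    // serialization outside the client.
    interface Producer extends Closeable {
        Future<RecordMetadata> send(ProducerRecord record);
    }

    final class ProducerRecord {
        final String topic; final byte[] key; final byte[] value;
        ProducerRecord(String topic, byte[] key, byte[] value) {
            this.topic = topic; this.key = key; this.value = value;
        }
    }

    final class RecordMetadata {
        final String topic; final int partition; final long offset;
        RecordMetadata(String topic, int partition, long offset) {
            this.topic = topic; this.partition = partition; this.offset = offset;
        }
    }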
Hey Tom,
That sounds cool. How did you end up handling parallel I/O if you wrap the
individual connections? Don't you need some selector that selects over all
the connections?
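(Roughly the kind of multiplexing I mean, as a java.nio sketch; broker
hostnames are placeholders:)

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SelectorSketch {
        public static void main(String[] args) throws IOException {
            // One selector multiplexes all broker connections in one thread.
            Selector selector = Selector.open();
            for (String host : new String[] {"broker1", "broker2"}) { // placeholders
                SocketChannel ch = SocketChannel.open();
                ch.configureBlocking(false);
                ch.connect(new InetSocketAddress(host, 9092));
                ch.register(selector, SelectionKey.OP_CONNECT);
            }
            while (selector.select() > 0) {
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    SocketChannel ch = (SocketChannel) key.channel();
                    if (key.isConnectable() && ch.finishConnect()) {
                        key.interestOps(SelectionKey.OP_READ);
                    }
                    // ... per-connection read/write state machines go here
                }
            }
        }
    }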
-Jay
On Tue, Jan 28, 2014 at 2:31 PM, Tom Brown wrote:
> I implemented a 0.7 client in pure java, and its API very cl
You can configure consumer.timeout.ms, which makes your consumer iterator
throw a ConsumerTimeoutException when there is no data for that many
milliseconds. You can then catch it and invoke shutdown() on the consumer.
Once the shutdown
returns, the consumer will no longer maintain its connection to zookeeper.
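A minimal sketch with the 0.8 high-level consumer (topic, group, and ZK
values are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.ConsumerTimeoutException;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class IdleShutdown {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder
            props.put("group.id", "my-group");                // placeholder
            props.put("consumer.timeout.ms", "10000"); // throw after 10s idle

            ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            KafkaStream<byte[], byte[]> stream = consumer.createMessageStreams(
                Collections.singletonMap("my-topic", 1)).get("my-topic").get(0);
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            try {
                while (it.hasNext()) {
                    byte[] payload = it.next().message(); // process payload ...
                }
            } catch (ConsumerTimeoutException e) {
                // nothing to consume for consumer.timeout.ms milliseconds
            } finally {
                consumer.shutdown(); // releases the zookeeper connection
            }
        }
    }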
We have a use case where we would want to shut down the consumers if they
do not have anything to process. The process these consumers are a part of
is still alive.
Once the consumers shut down, we see that the ZooKeeper connections are not
closed/cleaned up. Since these consumers will come back u
Jun,
Thanks for your help.
I get the following exception :
kafka.common.KafkaException: Socket server failed to bind to
54.241.44.129:9092: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:188)
at kafka.network.Acceptor.&lt;init&gt;(SocketServer.
That makes sense. Thanks, Jay.
On Tue, Jan 28, 2014 at 4:38 PM, Jay Kreps wrote:
> Hey Roger,
>
> We really can't use ListenableFuture directly though I agree it is nice. We
> have had some previous experience with embedding google collection classes
> in public apis, and it was quite the disa
Hey Roger,
We really can't use ListenableFuture directly though I agree it is nice. We
have had some previous experience with embedding google collection classes
in public apis, and it was quite the disaster. The problem has been that
the google guys regularly go on a refactoring binge for no appa
I implemented a 0.7 client in pure Java, and its API very closely resembled
this. (When multiple people independently engineer the same solution, it's
probably good... right?). However, there were a few architectural
differences with my client:
1. The basic client itself was just an asynchronous l
Hmmm, I would really strongly urge us to not introduce a zk dependency just
for discovery. People who want to implement this can certainly do so by
simply looking up URLs and setting them in the consumer config, but our
experience with doing this at large scale was pretty bad. Hardcoding the
discov
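(Concretely, the zookeeper-free bootstrap in the 0.8 producer is just a
static broker list in the producer config; hosts illustrative:)

    # producer config: bootstrap from a plain list of brokers, no zookeeper
    metadata.broker.list=broker1:9092,broker2:9092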
Hi Guozhang,
thinking out loud... delete then recreate works if it is acceptable to have
a topic-specific downtime during which Kafka can't accept requests for that
topic. This downtime would last for the duration while the topic gets
deleted and then recreated. I am assuming here that a producer
+1 ListenableFuture: If this works similarly to Deferreds in Twisted Python
or Promised IO in JavaScript, I think this is a great pattern for
decoupling your callback logic from the place where the Future is
generated. You can register as many callbacks as you like, each in the
appropriate layer of
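For anyone who hasn't used it, a small sketch of that multi-callback
registration with Guava (note the executor argument to addCallback is
required in newer Guava versions):

    import java.util.concurrent.Callable;
    import java.util.concurrent.Executors;
    import com.google.common.util.concurrent.FutureCallback;
    import com.google.common.util.concurrent.Futures;
    import com.google.common.util.concurrent.ListenableFuture;
    import com.google.common.util.concurrent.ListeningExecutorService;
    import com.google.common.util.concurrent.MoreExecutors;

    public class CallbackDemo {
        public static void main(String[] args) {
            ListeningExecutorService pool = MoreExecutors.listeningDecorator(
                Executors.newSingleThreadExecutor());
            ListenableFuture<String> f = pool.submit(new Callable<String>() {
                public String call() { return "ack"; }
            });
            // Each layer of the stack can register its own callback on the
            // same future, keeping callback logic out of the producing code.
            Futures.addCallback(f, new FutureCallback<String>() {
                public void onSuccess(String r) { System.out.println("ok: " + r); }
                public void onFailure(Throwable t) { t.printStackTrace(); }
            }, MoreExecutors.directExecutor());
            pool.shutdown();
        }
    }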
Hey Ross,
- ListenableFuture: Interesting. That would be an alternative to the direct
callback support we provide. There could be pros to this, let me think
about it.
- We could provide layering, but I feel that the serialization is such a
small thing we should just make a decision and choose one,
>> The producer since 0.8 is actually zookeeper free, so this is not new to
this client it is true for the current client as well. Our experience was
that direct zookeeper connections from zillions of producers wasn't a good
idea for a number of reasons.
The problem with several thousand connectio
+1 to zk bootstrap + close as an option at least
On Tue, Jan 28, 2014 at 10:09 AM, Neha Narkhede wrote:
> >> The producer since 0.8 is actually zookeeper free, so this is not new to
> this client it is true for the current client as well. Our experience was
> that direct zookeeper connections f
If compression is turned on, the check is done on the compressed message
size. The producer knows the compressed message size before it writes it on
the network.
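Concretely, per the 0.7 configuration page linked above, the relevant knobs
look roughly like this (property names per those docs; values illustrative):

    # broker (server.properties): the size limit that is enforced
    max.message.size=1000000

    # producer: with compression on, the post-compression size is checked
    compression.codec=1   # 1 = gzip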
Thanks,
Neha
On Tue, Jan 28, 2014 at 7:42 AM, Philip O'Toole wrote:
> http://kafka.apache.org/07/configuration.html
>
> Hello -- I c
Sorry to tune in a bit late, but here goes.
> 1. The producer since 0.8 is actually zookeeper free, so this is not new to
> this client it is true for the current client as well. Our experience was
> that direct zookeeper connections from zillions of producers wasn't a good
> idea for a number of
http://kafka.apache.org/07/configuration.html
Hello -- I can look at the code too, but how does this setting interact
with compression? After all, a Producer doing compression doesn't know the
size of a "message" on the wire it will send to a Kafka broker until after
it has been compressed. An
You should use the public IP for host.name. What's the error you see
during broker startup?
Thanks,
Jun
On Tue, Jan 28, 2014 at 2:17 AM, Balasubramanian Jayaraman (Contingent) <
balasubramanian.jayara...@autodesk.com> wrote:
> I checked the faq. I did change the host.name in server prop
Hi Jay,
- Just to add some more info/confusion about possibly using Future ...
If Kafka uses a JDK future, it plays nicely with other frameworks as well.
Google Guava has a ListenableFuture that allows callback handling to be
added via the returned future, and allows the callbacks to be passed
All,
We are also working on a C# Client for Kafka, targeting 0.8 only. It is a
port of the C++ one at this stage.
Separate from the C# Client, we are also using Avro internally as our
serialization format and are interested in anyone else's experience with
this. I note Microsoft have also been doing som
I checked the FAQ. I did change the host.name in server properties. After
changing it, I get a ConnectException.
The problem here is that in EC2 the instance has a different public IP
address (55.x.x.x) and local IP address (10.x.x.x).
I set the host.name property to the local IP address which is 10.x.x.x.
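For reference, a sketch of the server.properties that usually resolves this
on EC2 (advertised.host.name requires a broker from 0.8.1 or later; the
addresses below mirror the placeholders above):

    # bind to the address that actually exists on the instance
    host.name=10.x.x.x
    # 0.8.1+: tell clients to connect via the public address instead
    advertised.host.name=55.x.x.x
    advertised.port=9092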