Hi,
I've finally fixed this by closing the connection on timeout and creating a
new connection on the next send.
Thanks,
Gerrit
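The fix described above (drop the connection on a read timeout, reopen lazily on the next send) can be sketched as follows. Transport, connector, and ReconnectingClient are illustrative stand-ins for the real netty channel and client, not an actual API:

```java
import java.net.SocketTimeoutException;
import java.util.function.Supplier;

// Hypothetical transport abstraction standing in for the netty channel.
interface Transport {
    byte[] sendAndRead(byte[] request) throws SocketTimeoutException;
}

class ReconnectingClient {
    private final Supplier<Transport> connector;
    private Transport conn;     // null means "no live connection"
    int connectCount = 0;       // visible for illustration only

    ReconnectingClient(Supplier<Transport> connector) {
        this.connector = connector;
    }

    byte[] send(byte[] request) {
        if (conn == null) {     // create a new connection on the next send
            conn = connector.get();
            connectCount++;
        }
        try {
            return conn.sendAndRead(request);
        } catch (SocketTimeoutException e) {
            conn = null;        // close/discard the connection on timeout
            return null;
        }
    }
}
```

The key point is that the timed-out socket is never reused: a timeout may leave an unread, late-arriving response in the buffer, which would otherwise be misread as the reply to the next request.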
On Tue, Jan 14, 2014 at 10:20 AM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
Hi,
Thanks, I will do this.
On Tue, Jan 14, 2014 at 9:51 AM, Joe Stein wrote:
Hi Gerrit, do you have a ticket already for this issue? Is it possible to
attach code that reproduces it? It would be great if you could run it against
a Kafka VM; you can grab one from this project for 0.8.0,
https://github.com/stealthly/scala-kafka, to launch a Kafka VM and add
whatever you need to it t
Yes, I'm using my own client following:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
Everything works except for this weirdness.
On Tue, Jan 14, 2014 at 5:50 AM, Jun Rao wrote:
So, you implemented your own consumer client using netty?
Thanks,
Jun
On Mon, Jan 13, 2014 at 8:42 AM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
I'm using netty with async writes and reads.
For reads I use a timeout such that if I do not see anything on the read
channel, my read function times out and returns null.
I do not see any error on the socket, and the same socket is used
throughout all of the fetches.
I'm using the console producer and
I can't seem to find the log trace for the timed out fetch request (every
fetch request seems to have a corresponding completed entry). For the timed
out fetch request, is it that the broker never completed the request or is
it that it just took longer than the socket timeout to finish processing
t
What are the offsets used in the fetch requests in steps g and i that both
returned offsets 10 and 11?
Thanks,
Jun
On Sat, Jan 11, 2014 at 3:19 AM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
Do you have the request log turned on? If so, what's total time taken for
the corresponding fetch request?
Thanks,
Jun
On Sat, Jan 11, 2014 at 4:38 AM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
I'm also seeing the following.
I consume the data in the queue.
Then, after 10 seconds, I send another fetch request (with the incremented
offset) and never receive a response from the broker; my code eventually
times out (after 30 seconds).
The broker writes Expiring fetch request Name: FetchReques
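A plausible interaction here, assuming the 0.8 fetch semantics described in the protocol guide: the broker may hold a fetch request until MaxWaitTime elapses or MinBytes of data is available, and logs "Expiring fetch request" when such a parked request times out. If the client's socket read timeout is shorter than MaxWaitTime, a quiet topic makes a healthy long-poll look like a dead connection. FetchSettings below is an illustrative holder for this invariant, not part of any real client API:

```java
// Sketch: the client-side read timeout must exceed the fetch MaxWaitTime,
// otherwise the client gives up on fetches the broker is legitimately holding.
class FetchSettings {
    final int maxWaitMs;       // FetchRequest MaxWaitTime
    final int minBytes;        // FetchRequest MinBytes
    final int socketTimeoutMs; // client-side read timeout

    FetchSettings(int maxWaitMs, int minBytes, int socketTimeoutMs) {
        if (socketTimeoutMs <= maxWaitMs) {
            throw new IllegalArgumentException(
                "socket read timeout (" + socketTimeoutMs
                + " ms) must exceed fetch MaxWaitTime (" + maxWaitMs + " ms)");
        }
        this.maxWaitMs = maxWaitMs;
        this.minBytes = minBytes;
        this.socketTimeoutMs = socketTimeoutMs;
    }
}
```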
Hi,
No, the offsets are not the same. I've printed out the values to see this,
and it's not the case.
On Fri, Jan 10, 2014 at 5:02 PM, Jun Rao wrote:
Are the offsets used in the two fetch requests the same? If so, you will get
the same messages twice. Your consumer is responsible for advancing the
offsets after consumption.
Thanks,
Jun
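The offset bookkeeping Jun describes can be sketched like this: after consuming a batch, advance the fetch offset one past the last consumed message, otherwise the next fetch with an unchanged offset returns the same messages again. ConsumedMessage and OffsetTracker are illustrative names, not real Kafka client types:

```java
import java.util.List;

// Minimal stand-in for a fetched message; only the offset matters here.
class ConsumedMessage {
    final long offset;
    ConsumedMessage(long offset) { this.offset = offset; }
}

class OffsetTracker {
    private long nextFetchOffset;

    OffsetTracker(long startOffset) { this.nextFetchOffset = startOffset; }

    long nextFetchOffset() { return nextFetchOffset; }

    void markConsumed(List<ConsumedMessage> batch) {
        for (ConsumedMessage m : batch) {
            // the next fetch starts one past the highest offset consumed
            nextFetchOffset = Math.max(nextFetchOffset, m.offset + 1);
        }
    }
}
```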
On Thu, Jan 9, 2014 at 1:00 PM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
Thanks, I will definitely put this in.
Does the console producer send compressed messages by default? I haven't
specified compression for it, so I assumed that it would send plain text.
On Thu, Jan 9, 2014 at 10:14 PM, Chris Curtin wrote:
If you look at the example simple consumer:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
You'll see:
    if (currentOffset < readOffset) {
        System.out.println("Found an old offset: " + currentOffset
            + " Expecting: " + readOffset);
        continue;
    }
an
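The reason that check exists, as I understand it: with compressed message sets, a fetch from offset N can decompress to messages whose offsets are less than N, so the consumer must skip anything older than the offset it asked for. A sketch of the same condition using bare offsets instead of real message objects (OldOffsetFilter is a made-up name):

```java
import java.util.ArrayList;
import java.util.List;

class OldOffsetFilter {
    // Keeps only offsets >= readOffset; mirrors the
    // "currentOffset < readOffset -> continue" check above.
    static List<Long> skipOld(long readOffset, List<Long> fetched) {
        List<Long> fresh = new ArrayList<>();
        for (long offset : fetched) {
            if (offset >= readOffset) {
                fresh.add(offset);
            }
        }
        return fresh;
    }
}
```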
Hi,
I'm writing a custom consumer for Kafka 0.8.
Everything works except for the following:
a. connect, send fetch, read all results
b. send fetch
c. send fetch
d. send fetch
e. via the console producer, publish 2 messages
f. send fetch :corr-id 1
g. read 2 messages published :offsets [10 11] :c