Hi Jun, Hi Guozhang,
hm, yeah, if the subscribe/unsubscribe is a smart and lightweight
operation this might work. But if it needs to do any additional calls to
fetch metadata during a subscribe/unsubscribe call, the overhead could get
quite problematic. The main issue I still see here is that an
Hi there,
Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT
Kafka version: kafka_2.8.0-0.8.1.1
I have the following architecture/configuration
staging2.mtl.shopmedia.com (broker.id=1)
zookeeper:9092
kafka:2181
staging3.mtl.shopmedia.com(broker.id=2)
zookeeper:9092
kafka:2181
cent
Is that in 0.8.0.0?
- Rob
Thanks for the reply.
Please let me know if we can use trunk as 0.8.2 is not yet released.
Balaji
From: Neha Narkhede [neha.narkh...@gmail.com]
Sent: Wednesday, September 24, 2014 6:32 PM
To: users@kafka.apache.org
Subject: Re: BadVersion state in Kafka Logs
Thanks, Joel. That's exactly what I need.
On Wed, Sep 24, 2014 at 7:01 PM, Joel Koshy wrote:
> The consumer iterator returns MessageAndMetadata which includes the
> offset. You would need logic in your application to check the offset
> if it needs to stop processing after a certain offset.
>
> O
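The offset check Joel describes is plain application logic around the iterator. A minimal sketch, with `MessageAndMetadata` mocked as a bare stand-in class so it runs without a broker (the names `Msg`, `consumeUpTo`, and `stopOffset` are illustrative, not Kafka API):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class OffsetBoundedConsume {
    // Minimal stand-in for kafka.message.MessageAndMetadata: offset plus payload.
    static final class Msg {
        final long offset;
        final byte[] payload;
        Msg(long offset, byte[] payload) { this.offset = offset; this.payload = payload; }
    }

    // Process messages until one past stopOffset is seen; returns how many were processed.
    static int consumeUpTo(Iterator<Msg> it, long stopOffset) {
        int processed = 0;
        while (it.hasNext()) {
            Msg m = it.next();
            if (m.offset > stopOffset) {
                break; // application-level stop condition, as Joel suggests
            }
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        List<Msg> stream = Arrays.asList(
                new Msg(0, new byte[0]), new Msg(1, new byte[0]),
                new Msg(2, new byte[0]), new Msg(3, new byte[0]));
        // Stop once we pass offset 1: processes offsets 0 and 1.
        System.out.println(consumeUpTo(stream.iterator(), 1)); // prints 2
    }
}
```

With the real high-level consumer you would iterate `MessageAndMetadata` from the stream and compare `mm.offset()` the same way.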
Hi,
My requirement is to read a specific number of messages from a Kafka topic
which contains data in JSON format, and after reading that number of messages, I
need to write them to a file and then stop. How can I count the number of
messages read by my consumer code (either SimpleConsumer or high level)?
Ple
@Neha When you say this is an expected exception, does that imply there is
no way of getting rid of those exceptions?
Thanks.
—
Aniket Kulkarni
Using high level consumer and assuming you already created an iterator:
while (msgCount < maxMessages && it.hasNext()) {
    bytes = it.next().message();
    eventList.add(bytes);
    msgCount++;
}
(See a complete example here:
https://github.com/apache/flume/blob/trunk/flume-ng-sources/flume-kafka-source/src/main/jav
Aniket,
Could you provide more context to this email? The previous conversation on
the exception is missing so I'm not sure which exception you are referring
to.
Thanks,
Neha
On Thu, Sep 25, 2014 at 8:52 AM, Aniket Kulkarni <
kulkarnianiket...@gmail.com> wrote:
> @Neha When you say this is an e
Thank You. I will try this out.
On Thu, Sep 25, 2014 at 10:01 PM, Gwen Shapira
wrote:
> Using high level consumer and assuming you already created an iterator:
>
> while (msgCount < maxMessages && it.hasNext()) {
>     bytes = it.next().message();
>     eventList.add(bytes);
>     msgCount++;
> }
>
> (See a complete ex
0.8.1.1
On Wed, Sep 24, 2014 at 5:43 PM, Neha Narkhede
wrote:
> That is odd. Which version of Kafka are you using?
>
> On Wed, Sep 24, 2014 at 5:37 PM, Jinder Aujla
> wrote:
>
> > Hi
> >
> > I'm trying to get some JMX stats from a running Kafka instance, I can
> > connect using jconsole and I c
No. But there is a good chance that it will be available in 0.8.2
On Thu, Sep 25, 2014 at 8:08 AM, Robert Withers
wrote:
> Is that in 0.8.0.0?
>
> - Rob
Hi Neha,
Do you know when you guys are releasing 0.8.2?
Thanks,
Balaji
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Thursday, September 25, 2014 9:41 AM
To: users@kafka.apache.org
Subject: RE: BadVersion state in Kafka Logs
Thanks for the reply.
Hello Neha,
I am trying to run some tests which use Kafka 0.8.1.1. The tests do not
fail but give out warning messages which I am trying to get rid of, such
as:
2014-09-25 11:43:03,572 [kafka-processor-56598-1] ERROR
kafka.network.Processor - Closing socket for /127.0.0.1 because of error
jav
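If the noise comes from clients disconnecting (a condition the 0.8.1 broker logs at ERROR via `kafka.network.Processor`), one common workaround is to raise that logger's level in the log4j.properties used by the process embedding the broker. A sketch, assuming the stock log4j-based logging that ships with 0.8.1:

```properties
# Suppress "Closing socket for ... because of error" messages emitted
# when clients disconnect, by raising kafka.network.Processor above ERROR.
log4j.logger.kafka.network.Processor=FATAL
```

This silences the whole class, so genuine network-processor errors are hidden too; it is a blunt instrument for test logs rather than a production recommendation.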
We are close to the release. I'd probably expect 0.8.2 sometime in October.
On Thu, Sep 25, 2014 at 10:37 AM, Seshadri, Balaji wrote:
> Hi Neha,
>
> Do you know when are you guys releasing 0.8.2 ?.
>
> Thanks,
>
> Balaji
>
> -Original Message-
> From: Seshadri, Balaji [mailto:balaji.sesh
Thanks, Neha. Sorry for the confusion.
Best,
Rob
> On Sep 25, 2014, at 11:24 AM, Neha Narkhede wrote:
>
> No. But there is a good chance that it will be available in 0.8.2
>
> On Thu, Sep 25, 2014 at 8:08 AM, Robert Withers
> wrote:
>
>> Is that in 0.8.0.0?
>>
>> - Rob
I have set up my Kafka broker with a single producer and consumer. When I
am plotting the graph for all-topic bytes in/out per sec, I can see that
the value of BytesOutPerSec is more than BytesInPerSec.
Is this correct? I confirmed that my consumer is consuming the messages
only once. What could be
I couldn't see your graph, but your replication factor is 2, so replication
traffic can be the explanation. Basically, BytesOut will be 2x BytesIn.
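The 2x figure falls out of simple arithmetic: every byte produced is fetched once by each follower replica (replication goes through the same fetch path the metric counts) and once per consumer read. A sketch of that accounting; the helper name and fan-out model are illustrative assumptions, not a Kafka metric formula:

```java
public class BytesOutRatio {
    // Each byte in is fetched (replicationFactor - 1) times by follower
    // replicas, plus once per consumer read of the data.
    static int bytesOutPerByteIn(int replicationFactor, int consumerReads) {
        return (replicationFactor - 1) + consumerReads;
    }

    public static void main(String[] args) {
        // Replication factor 2, consumed once => BytesOut is about 2x BytesIn.
        System.out.println(bytesOutPerByteIn(2, 1)); // prints 2
    }
}
```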
On Thu, Sep 25, 2014 at 6:19 PM, ravi singh wrote:
> I have set up my kafka broker with as single producer and consumer. When I
> am plotting the
Slight off-topic, but is it also possible to replay a specific number of
messages? For example, using the simple consumer, can I go back/reset the
offset so that I always go read the last 10 messages assuming the size of
each individual message could be different. All I found in the simple
consumer
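Since 0.8, offsets are logical (one per message), so "the last 10 messages" is plain offset arithmetic even when individual message sizes differ: fetch the earliest and latest offsets for the partition (the simple consumer exposes these via an offset request), then start fetching from `latest - 10`, clamped to `earliest`. A sketch of just the arithmetic; `replayStartOffset` is an illustrative helper name, not Kafka API:

```java
public class ReplayOffset {
    // Start offset for replaying the last n messages of a partition.
    // Clamp to the earliest available offset in case fewer than n remain
    // (e.g. after log retention has deleted older segments).
    static long replayStartOffset(long earliest, long latest, long n) {
        return Math.max(earliest, latest - n);
    }

    public static void main(String[] args) {
        System.out.println(replayStartOffset(0L, 100L, 10L));  // prints 90
        System.out.println(replayStartOffset(95L, 100L, 10L)); // prints 95
    }
}
```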
Thanks Steven. That answers the difference in bytes in and bytes out per
sec. But I was wondering why (and how) BytesOutPerSec is calculated based
on the number of partitions even though each message is consumed only once?
*Regards,*
*Ravi*
On Thu, Sep 25, 2014 at 9:55 PM, Steven Wu wrote:
> couldn't see y
Hi,
Just got a lovely email saying a bunch of our EC2 instances will be rebooted in a
few days. Some of them run our Kafka 0.8.1 brokers with a few hundred GBs
of data on them. Last time the Kafka brokers didn't shut down cleanly, it took
them many hours to recover.
I just found https://issues.apache.or
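For clean restarts like this, 0.8.1 supports controlled shutdown, which migrates partition leadership off a broker before it exits so followers don't need a lengthy recovery. A sketch of the relevant server.properties settings, assuming 0.8.1 (the feature is off by default in this release):

```properties
# Migrate partition leadership away before the broker process exits,
# so a restart does not trigger long log recovery on the followers.
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```

This only helps when the broker gets a clean SIGTERM before the instance goes down; a hard power-off still forces recovery.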