High level consumer stops reading data

2013-10-29 Thread Hanish Bansal
Hi All,

We are running Kafka 0.8. If a Kafka node machine's network is restarted or
lost for some time, then the high-level consumer stops reading data from
Kafka even after the network is restored and working.

-- 
*Thanks & Regards*
*Hanish Bansal*


0.8 HEAD: Simple consumer not receiving messages

2013-10-29 Thread Shafaq
Hi,
   I see the following scenario:

1. I send messages under some topic X and can see the log folder in the Kafka
broker with the name X-0 (zeroth partition), containing xxx.log and xxx.index
files under it. So I guess this part is fine.

2. Then I fire up the consumer for topic X; it is able to find two streams
(mapping to the two partitions I have defined).


import java.util.Iterator;
import java.util.List;
import java.util.Map;
import kafka.consumer.KafkaStream;

Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
        consumer.createMessageStreams(topicCountMap);
List<KafkaStream<byte[], byte[]>> streams = null;
Iterator<String> it = topicCountMap.keySet().iterator();
int threadNumber = 0;
while (it.hasNext()) {
    String topic = it.next();
    streams = consumerMap.get(topic);
    for (KafkaStream<byte[], byte[]> stream : streams) {
        System.out.println("threadNo = " + threadNumber + " for topic = " + topic);
        // The original message had a stray ")" here and never started the
        // Runnable; wrapping it in a Thread is assumed to be the intent.
        new Thread(new ConsumerThreadRunnable(stream, threadNumber, topic)).start();
        threadNumber++;
    }
}

However, I don't get any messages in the ConsumerThreadRunnable here:

ConsumerIterator<byte[], byte[]> it = stream.iterator();

while (it.hasNext()) {
    byte[] nextMessageByteArray = it.next().message();
}

If I start the consumer first and then restart the producer thread, sending
the messages for topic X, then the consumer is able to receive the messages.

From the Kafka docs, the high-level consumer thread does long polling until a
message is available.
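
One guess, not confirmed in this thread: in 0.8 the high-level consumer's
auto.offset.reset defaults to "largest", so a consumer group with no
previously committed offsets starts at the tail of the log and only sees
messages produced after it connects, which would explain why starting the
consumer first works. A minimal sketch of a config that starts from the
earliest offset instead (connection values are illustrative):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181"); // illustrative
props.put("group.id", "test-group");              // illustrative
// With no committed offsets, "smallest" starts from the earliest message
// instead of the default "largest" (the tail of the log).
props.put("auto.offset.reset", "smallest");

ConsumerConnector consumer =
        Consumer.createJavaConsumerConnector(new ConsumerConfig(props));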

What am I doing wrong? Any ideas on how to get around the problem?

thanks!

-- 
Kind Regards,
Shafaq


Re: Ganglia Metrics Reporter

2013-10-29 Thread Andrew Otto
Hi Maxime,

I'm using this at the Wikimedia Foundation to send Kafka Broker metrics to 
Ganglia.  However, we use Ganglia in multicast mode.  This mostly seems to work 
with your code, but the TTL on the multicast packets gets set to 1.  We 
sometimes have multiple levels of Ganglia aggregators, and with ttl=1, the 
multicast packets don't make it to the proper aggregators.

I'm looking into either forking or rewriting this library using Codahale 
Metrics v 3.0.1, and supporting multicast more explicitly.  Is this something 
you could do better/faster than me, or should I proceed? :)

-Andrew Otto

(Thanks for writing this, btw!)


On Aug 22, 2013, at 11:42 AM, Maxime Brugidou  wrote:

> Hi all,
> 
> Since I couldn't find any other way to publish kafka metrics to ganglia
> from kafka 0.8 (beta), I just published on github a super-simple ganglia
> metrics reporter for Kafka. It is configurable through the kafka config
> file and you can use it on the broker side and on your consumers/producers.
> There is also a feature to exclude some metrics with a regex (useful if you
> have many topics/partitions).
> 
> Here it is on github: https://github.com/criteo/kafka-ganglia
> 
> Let me know if you have issues/questions. This is using metrics-ganglia
> 2.2.0 directly so it is not a ganglia plugin but rather a kafka "add-on".
> 
> I don't know the proper way to distribute this yet for installation on
> brokers; it could also be part of some contrib Kafka code.
> 
> Cheers,
> Maxime


Re: Ganglia Metrics Reporter

2013-10-29 Thread Maxime Brugidou
Hi Andrew, how do you plan to use Metrics 3.0.1? The current Kafka 0.8
version uses 2.2 AFAIK, so this is going to require a new Kafka version.

I don't really use multicast TTL myself, but you are right that this should
be configurable, and I'll definitely accept pull requests going in that
direction. It's a tiny piece of code, so it shouldn't be hard to dig into.

Thanks for using this small plugin and let me know what you plan to do.

Cheers
On Oct 29, 2013 2:00 PM, "Andrew Otto"  wrote:

> Hi Maxime,
>
> I'm using this at the Wikimedia Foundation to send Kafka Broker metrics to
> Ganglia.  However, we use Ganglia in multicast mode.  This mostly seems to
> work with your code, but the TTL on the multicast packets gets set to 1.
> We sometimes have multiple levels of Ganglia aggregators, and with
> ttl=1, the multicast packets don't make it to the proper aggregators.
>
> I'm looking into either forking or rewriting this library using Codahale
> Metrics v 3.0.1, and supporting multicast more explicitly.  Is this
> something you could do better/faster than me, or should I proceed? :)
>
> -Andrew Otto
>
> (Thanks for writing this, btw!)
>
> [...]


Re: Controlled shutdown failure, retry settings

2013-10-29 Thread Jason Rosenberg
Here's another exception I see during controlled shutdown (this time there
was not an unclean shutdown problem).  Should I be concerned about this
exception? Is any data loss possible with this?  This one happened after
the first "Retrying controlled shutdown after the previous attempt
failed..." message.  The controlled shutdown subsequently succeeded without
another retry (but with a few more of these exceptions).

Again, there was no "Remaining partitions to move..." message before the
retrying message, so I assume the retry happens after an IOException (which
is not logged in KafkaServer.controlledShutdown).

2013-10-29 20:03:31,883  INFO [kafka-request-handler-4] controller.ReplicaStateMachine - [Replica state machine on controller 10]: Invoking state change to OfflineReplica for replicas PartitionAndReplica(mytopic,0,10)
2013-10-29 20:03:31,883 ERROR [kafka-request-handler-4] change.logger - Controller 10 epoch 190 initiated state change of replica 10 for partition [mytopic,0] to OfflineReplica failed
java.lang.AssertionError: assertion failed: Replica 10 for partition [mytopic,0] should be in the NewReplica,OnlineReplica states before moving to OfflineReplica state. Instead it is in OfflineReplica state
        at scala.Predef$.assert(Predef.scala:91)
        at kafka.controller.ReplicaStateMachine.assertValidPreviousStates(ReplicaStateMachine.scala:209)
        at kafka.controller.ReplicaStateMachine.handleStateChange(ReplicaStateMachine.scala:167)
        at kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:89)
        at kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:89)
        at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
        at kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:89)
        at kafka.controller.KafkaController$$anonfun$shutdownBroker$4$$anonfun$apply$2.apply(KafkaController.scala:199)
        at kafka.controller.KafkaController$$anonfun$shutdownBroker$4$$anonfun$apply$2.apply(KafkaController.scala:184)
        at scala.Option.foreach(Option.scala:121)
        at kafka.controller.KafkaController$$anonfun$shutdownBroker$4.apply(KafkaController.scala:184)
        at kafka.controller.KafkaController$$anonfun$shutdownBroker$4.apply(KafkaController.scala:180)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
        at kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:180)
        at kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:133)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:72)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
        at java.lang.Thread.run(Thread.java:662)

Jason


On Fri, Oct 25, 2013 at 11:51 PM, Jason Rosenberg  wrote:

>
>
> On Fri, Oct 25, 2013 at 9:16 PM, Joel Koshy  wrote:
>
>>
>> Unclean shutdown could result in data loss - since you are moving
>> leadership to a replica that has fallen out of ISR, i.e., its log end
>> offset is behind the last committed message to this partition.
>>
>>
> But if data is written with 'request.required.acks=-1', no data should be
> lost, no?  Or will partitions be truncated wholesale after an unclean
> shutdown?
>
>
>
>>
>> Take a look at the packaged log4j.properties file. The controller's
>> partition/replica state machines and its channel manager which
>> sends/receives leaderandisr requests/responses to brokers uses a
>> stateChangeLogger. The replica managers on all brokers also use this
>> logger.
>
>
> Ah... so it looks like most things logged with the stateChangeLogger are
> logged at the TRACE log level... and that's the default setting in the
> log4j.properties file.  Needless to say, my contained KafkaServer is not
> currently using that log4j.properties (we are just using a rootLogger with
> level = INFO by default).  I can probably add a special rule to use TRACE
> for the state.change.logger category.  However, I'm not sure I can make it
> so that this logging all goes to its own separate log file.
>
>>
>> Our logging can improve - e.g., it looks like on controller movement
>> we could retry without saying why.
>>
>
> I can file a jira for this, but I'm not sure what it should say!
>


Re: Controlled shutdown failure, retry settings

2013-10-29 Thread Jason Rosenberg
I've filed: https://issues.apache.org/jira/browse/KAFKA-1108


On Tue, Oct 29, 2013 at 4:29 PM, Jason Rosenberg  wrote:

> Here's another exception I see during controlled shutdown (this time there
> was not an unclean shutdown problem).  Should I be concerned about this
> exception? Is any data loss possible with this?  This one happened after
> the first "Retrying controlled shutdown after the previous attempt
> failed..." message.  The controlled shutdown subsequently succeeded without
> another retry (but with a few more of these exceptions).
> [...]


Re: Recovering High Level Consumer from exceptions

2013-10-29 Thread Joseph Lawson
Thanks. I ended up catching any exception in the consumer thread and retrying
the iteration unless I specifically toggled it to dump and quit on error.
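
A rough sketch of that catch-and-retry pattern (class and flag names are
illustrative, not the actual code from this thread):

public class ConsumerThreadRunnable implements Runnable {
    private final kafka.consumer.KafkaStream<byte[], byte[]> stream;
    private final boolean dumpAndQuitOnError; // the toggle described above

    public ConsumerThreadRunnable(kafka.consumer.KafkaStream<byte[], byte[]> stream,
                                  boolean dumpAndQuitOnError) {
        this.stream = stream;
        this.dumpAndQuitOnError = dumpAndQuitOnError;
    }

    public void run() {
        while (true) {
            try {
                kafka.consumer.ConsumerIterator<byte[], byte[]> it = stream.iterator();
                while (it.hasNext()) {
                    byte[] message = it.next().message();
                    // process(message) ...
                }
            } catch (Exception e) {
                e.printStackTrace();
                if (dumpAndQuitOnError) {
                    return; // dump and quit on error
                }
                // otherwise loop around and retry the iteration
            }
        }
    }
}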

Sent from my Droid Charge on Verizon 4G LTE

Jun Rao wrote:
Normally, consumers don't time out unless you have configured a timeout. You
just need to make sure you don't kill the consumer thread if it hits an
application error.

Thanks,

Jun


On Mon, Oct 28, 2013 at 5:21 PM, Joseph Lawson  wrote:

> Hi everyone,
>
> how should a high-level consumer implementation handle consumer
> exceptions?  Using the high-level consumer wiki example, the primary thread
> calls an example.run() and then waits for a thread.sleep and then shuts
> down.  If each ConsumerTest is a thread, what is the best way for the
> parent thread to handle consumer exceptions such as a consumer timeout or a
> rebalance max retries exception?
>
> I've run into cases where my high level consumer just keeps spinning while
> my consumer threads are crashed.  Any suggestions as to how I should detect
> the exceptions from the consumer threads in the high level consumer logic?
>
> Thanks!
>
> -Joe Lawson
>


Robust of cassandra

2013-10-29 Thread Jiang Jacky
Hi, Everyone
For now, I have a Kafka cluster, and there is only 1 partition on each
node. I also produce lots of messages to 1 topic only.
Then I have a problem: after using it for a while, the topic crashed with the
error message "NoAvailableLeader", but other topics that I do not often use
are fine. So I have to clean up the ZooKeeper cache and restart it again, and
then that topic comes back to normal.
Can someone tell me how to fix that? I don't want to have to clean ZooKeeper
every so often just to make the topic available. It seems unstable.
Thanks


Re: Robust of cassandra

2013-10-29 Thread Neha Narkhede
What is the replication factor, and have you checked if there are leader
elections around the time you see the NoAvailableLeader exception? It is
important to figure out the root cause of NoAvailableLeader, but it is
transient and should fix itself. Are you using 0.8 HEAD?


On Tue, Oct 29, 2013 at 5:57 PM, Jiang Jacky  wrote:

> Hi, Everyone
> For now, I have a Kafka cluster, and there is only 1 partition on each
> node. I also produce lots of messages to 1 topic only.
> Then I have a problem: after using it for a while, the topic crashed with
> the error message "NoAvailableLeader", but other topics that I do not often
> use are fine. So I have to clean up the ZooKeeper cache and restart it
> again, and then that topic comes back to normal.
> Can someone tell me how to fix that? I don't want to have to clean ZooKeeper
> every so often just to make the topic available. It seems unstable.
> Thanks
>


Plan for Scala 2.10+ support

2013-10-29 Thread Abhinav Anand
Hi,
  We are building pub-sub applications over Kafka in our company. Some of
our packages have been built on Scala 2.10+, though only Kafka_2.9.2 is
available in the Maven repository. Kafka_2.9.2 uses some deprecated Scala
classes (viz. ClassManifest), which causes runtime exceptions for our
application.

We are currently maintaining our own repository of Kafka built against
Scala 2.10. However, we want to open-source our system, so we wanted to
check: is there any plan for Kafka_2.10 to be pushed to the Maven or Apache
repository?

-- 
Abhinav Anand


Re: Plan for Scala 2.10+ support

2013-10-29 Thread Aniket Bhatnagar
The latest 0.8 branch has support for Scala 2.10; we use it in our
projects. Once 0.8 is released, I believe you should be able to see 2.10
artifacts in the Maven repository. Are you using 0.7 or 0.8?


On 30 October 2013 08:58, Abhinav Anand  wrote:

> Hi,
>   We are building pub-sub applications over Kafka in our company. Some of
> our packages have been built on Scala 2.10+, though only Kafka_2.9.2 is
> available in the Maven repository. Kafka_2.9.2 uses some deprecated Scala
> classes (viz. ClassManifest), which causes runtime exceptions for our
> application.
>
> We are currently maintaining our own repository of Kafka built against
> Scala 2.10. However, we want to open-source our system, so we wanted to
> check: is there any plan for Kafka_2.10 to be pushed to the Maven or Apache
> repository?
>
> --
> Abhinav Anand
>


Re: Plan for Scala 2.10+ support

2013-10-29 Thread chetan conikee
Are there any public Maven repos hosting 0.8 with 2.10+ support?

Sent from my iPhone

> On Oct 29, 2013, at 8:32 PM, Aniket Bhatnagar  
> wrote:
> 
> The latest 0.8 branch has support for Scala 2.10; we use it in our
> projects. Once 0.8 is released, I believe you should be able to see 2.10
> artifacts in the Maven repository. Are you using 0.7 or 0.8?
> 
>> [...]


Re: Plan for Scala 2.10+ support

2013-10-29 Thread Kane Kane
I think there was the plan to make kafka producer and consumer pure in
Java, so scala version wouldn't matter. And I think that's mostly why
people want certain scala version, not because of kafka itself, but they
just need producer/consumer libraries.


On Tue, Oct 29, 2013 at 8:32 PM, Aniket Bhatnagar <
aniket.bhatna...@gmail.com> wrote:

> The latest 0.8 branch has support for Scala 2.10; we use it in our
> projects. Once 0.8 is released, I believe you should be able to see 2.10
> artifacts in the Maven repository. Are you using 0.7 or 0.8?
>
> [...]


Re: Plan for Scala 2.10+ support

2013-10-29 Thread Aniket Bhatnagar
I haven't been able to find one, which is why we had to build Kafka 0.8 from
source. It's not super hard to build Kafka, though.


On 30 October 2013 09:07, chetan conikee  wrote:

> Are there any public Maven repos hosting 0.8 with 2.10+ support?
>
> Sent from my iPhone
>
> > [...]


Re: Plan for Scala 2.10+ support

2013-10-29 Thread Joe Stein
For now, from the 0.8 branch you can build what your producer/consumer
needs and publish to a local repository:

./sbt "++2.10.2 publish-local"

The 0.8.0 release vote is being held up by an error uploading/posting
artifacts (https://issues.apache.org/jira/browse/INFRA-6927); once that is
resolved, we can continue towards having 0.8.0 in Maven Central with
supported builds.
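
(For reference, a detail of sbt rather than of Kafka: publish-local installs
the artifacts into the local Ivy repository under ~/.ivy2/local, so other sbt
builds on the same machine can resolve them with no extra configuration;
Maven builds would need the jars installed into ~/.m2 separately.)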

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/


On Tue, Oct 29, 2013 at 11:43 PM, Aniket Bhatnagar <
aniket.bhatna...@gmail.com> wrote:

> I haven't been able to find one, which is why we had to build Kafka 0.8
> from source. It's not super hard to build Kafka, though.
>
> [...]


Re: Plan for Scala 2.10+ support

2013-10-29 Thread chetan conikee
Thanks Joe. 
We have been maintaining an 0.8+2.10 release in our private repo for the past 
few months. 


> On Oct 29, 2013, at 8:46 PM, Joe Stein  wrote:
> 
> [...]


Re: High level consumer stops reading data

2013-10-29 Thread Jun Rao
Any exception/error from the consumer?

Thanks,

Jun


On Tue, Oct 29, 2013 at 4:50 AM, Hanish Bansal <
hanish.bansal.agar...@gmail.com> wrote:

> Hi All,
>
> We are running Kafka 0.8. If a Kafka node machine's network is restarted or
> lost for some time, then the high-level consumer stops reading data from
> Kafka even after the network is restored and working.
>
> --
> *Thanks & Regards*
> *Hanish Bansal*
>


Re: Re: leader:none question

2013-10-29 Thread Neha Narkhede
We forgot to delete that script from the 0.8 beta distro. Since we don't
have a way to delete the topic, the cleanest way to come out of this would
be to wipe out the cluster and start fresh. If that's too much overhead,
you can try bringing the cluster down, deleting the /brokers/topics/<topic>
data from ZooKeeper, deleting the topic's local directory from the brokers,
and restarting the cluster. But we haven't tried the latter approach, so I'm
not sure if it works.
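
For the ZooKeeper step, a minimal sketch using the I0Itec ZkClient that Kafka
already bundles (connection string and timeouts are illustrative, and the
topic name is taken from the listing below); run it only while the cluster is
down:

import org.I0Itec.zkclient.ZkClient;

public class DeleteTopicZkPath {
    public static void main(String[] args) {
        // Connect to the same ZooKeeper ensemble the brokers use.
        ZkClient zkClient = new ZkClient("localhost:2181", 30000, 30000);
        try {
            // Removes the topic's metadata from ZooKeeper; the topic's log
            // directories on the brokers still have to be deleted by hand.
            zkClient.deleteRecursive("/brokers/topics/mqttmsglog");
        } finally {
            zkClient.close();
        }
    }
}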

Thanks,
Neha


On Mon, Oct 28, 2013 at 7:03 PM, linghongbo008 wrote:

> Hi, Neha!
> I use Kafka 0.8, which offered a function to delete the topic.
> I can delete the topic by running the shell script
> "bin/kafka-delete-topic.sh"
> and create the topic by running the script "kafka-create-topic.sh".
> Now, how do I fix this error so that the topic's leader isn't "leader: none"?
>
> Thanks.
>
>
>
>
> From: Neha Narkhede
> Date: 2013-10-28 22:36
> To: users
> Subject: Re: leader:none question
> There is no way to delete topics in Kafka yet, so trying to delete topics
> like this might cause failures and leave the topics in a bad state.
>
> Thanks,
> Neha
> On Oct 28, 2013 4:46 AM, "linghongbo008"  wrote:
>
> > Hi All
> > I encountered an error. I deleted a certain topic and recreated it, but
> > when I run the shell script "kafka-list-topic.sh", all partitions of the
> > topic show "leader: none".
> > I have no idea! Please tell me what happened.
> > Thanks.
> >
> >
> > topic: mqttmsglog   partition: 0    leader: none    replicas: 11,13,14    isr:
> > topic: mqttmsglog   partition: 1    leader: none    replicas: 12,14,11    isr:
> > topic: mqttmsglog   partition: 2    leader: none    replicas: 13,11,12    isr:
> > topic: mqttmsglog   partition: 3    leader: none    replicas: 14,12,13    isr:
> > topic: mqttmsglog   partition: 4    leader: none    replicas: 11,14,12    isr:
> > topic: mqttmsglog   partition: 5    leader: none    replicas: 12,11,13    isr:
> > topic: mqttmsglog   partition: 6    leader: none    replicas: 13,12,14    isr:
> > topic: mqttmsglog   partition: 7    leader: none    replicas: 14,13,11    isr:
> > topic: mqttmsglog   partition: 8    leader: none    replicas: 11,12,13    isr:
> > topic: mqttmsglog   partition: 9    leader: none    replicas: 12,13,14    isr:
> > topic: mqttmsglog   partition: 10   leader: none    replicas: 13,14,11    isr:
>


Re: Cannot start Kafka 0.8, exception in leader election

2013-10-29 Thread Neha Narkhede
Guozhang,

In this case, it seems like the controller is trying to talk to itself, as
the controller establishes a channel with every broker in the cluster.
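
(One thing worth checking, inferred from the log below rather than confirmed
here: the broker registered itself as coffeemachine:9092, and that is the
address the controller uses to connect back to it. If "coffeemachine" does
not resolve to the local machine, the connection is refused; the host.name
setting in server.properties controls what address gets registered.)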

Thanks,
Neha


On Mon, Oct 28, 2013 at 4:26 PM, Guozhang Wang  wrote:

> Hello Nicholas,
>
> The log shows the controller cannot connect to one of the brokers due to
> "java.net.ConnectException: Connection refused". Are all your brokers in
> the same cluster?
>
> Guozhang
>
>
> On Mon, Oct 28, 2013 at 3:37 PM, Nicholas Tietz  wrote:
>
> > Hi everyone,
> >
> > I'm having some problems getting Kafka 0.8 running. (I'm running this on
> > Mac OS X 10.8 with the latest updates; Java version 1.6.0_51.) I followed
> > the instructions from the quickstart for 0.8, both from-source and
> > from-binary (cleaned up all related files each time). However, I am
> > getting an exception when I try to start the Kafka server.
> >
> > Below is the relevant portion of the console output; I cut the beginning
> > and the end. The "INFO 0 successfully elected as leader" and "ERROR while
> > electing [...]" messages and the exception recur until I terminate the
> > process.
> >
> > Any help on this issue would be appreciated.
> >
> > Thanks,
> > Nicholas Tietz
> >
> > Relevant console output:
> > [...]
> > [2013-10-28 18:35:03,055] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
> > [2013-10-28 18:35:03,093] INFO Registered broker 0 at path /brokers/ids/0 with address coffeemachine:9092. (kafka.utils.ZkUtils$)
> > [2013-10-28 18:35:03,094] INFO [Kafka Server 0], Connecting to ZK: localhost:2181 (kafka.server.KafkaServer)
> > [2013-10-28 18:35:03,145] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
> > [2013-10-28 18:35:03,151] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
> > [2013-10-28 18:35:03,263] ERROR Error while electing or becoming leader on broker 0 (kafka.server.ZookeeperLeaderElector)
> > java.net.ConnectException: Connection refused
> > at sun.nio.ch.Net.connect(Native Method)
> > at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:532)
> > at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
> > at kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:84)
> > at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
> > at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
> > at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
> > at kafka.controller.ControllerChannelManager.<init>(ControllerChannelManager.scala:35)
> > at kafka.controller.KafkaController.startChannelManager(KafkaController.scala:503)
> > at kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:467)
> > at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:215)
> > at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:89)
> > at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:53)
> > at kafka.server.ZookeeperLeaderElector.startup(ZookeeperLeaderElector.scala:43)
> > at kafka.controller.KafkaController.startup(KafkaController.scala:396)
> > at kafka.server.KafkaServer.startup(KafkaServer.scala:96)
> > at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> > at kafka.Kafka$.main(Kafka.scala:46)
> > at kafka.Kafka.main(Kafka.scala)
> > [2013-10-28 18:35:03,267] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
> > [2013-10-28 18:35:03,273] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
> > [2013-10-28 18:35:03,276] INFO [Kafka Server 0], Started (kafka.server.KafkaServer)
> > [2013-10-28 18:35:03,294] ERROR Error while electing or becoming leader on broker 0 (kafka.server.ZookeeperLeaderElector)
> > java.net.ConnectException: Connection refused
> > [...]
> >
>
>
>
> --
> -- Guozhang
>


Re: High level consumer stops reading data

2013-10-29 Thread Hanish Bansal
No, it doesn't throw any exception. The consumer goes into a halted state,
and after being restarted it starts consuming the data again.
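
One way to make the stall visible instead of silent (a sketch assuming the
0.8 high-level consumer API; the property value is illustrative): set
consumer.timeout.ms so that the stream's iterator throws
ConsumerTimeoutException when no message arrives in time, then tear down and
recreate the connector:

props.put("consumer.timeout.ms", "60000"); // without this, hasNext() blocks forever
...
try {
    kafka.consumer.ConsumerIterator<byte[], byte[]> it = stream.iterator();
    while (it.hasNext()) {
        byte[] message = it.next().message();
        // process(message) ...
    }
} catch (kafka.consumer.ConsumerTimeoutException e) {
    // No data for 60s: shut down this connector and create a new one here.
}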


On Wed, Oct 30, 2013 at 9:37 AM, Jun Rao  wrote:

> Any exception/error from the consumer?
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 29, 2013 at 4:50 AM, Hanish Bansal <
> hanish.bansal.agar...@gmail.com> wrote:
>
> > Hi All,
> >
> > We are running Kafka 0.8. If a Kafka node machine's network is restarted
> > or lost for some time, then the high-level consumer stops reading data
> > from Kafka even after the network is restored and working.
> >
> > --
> > *Thanks & Regards*
> > *Hanish Bansal*
> >
>



-- 
*Thanks & Regards*
*Hanish Bansal*