Re: Review Request 28481: Patch for KAFKA-1792

2014-12-19 Thread Dmitry Pekar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28481/
---

(Updated Dec. 19, 2014, 2:48 p.m.)


Review request for kafka.


Bugs: KAFKA-1792
https://issues.apache.org/jira/browse/KAFKA-1792


Repository: kafka


Description (updated)
---

KAFKA-1792: CR


KAFKA-1792: CR2


KAFKA-1792: merge of KAFKA-1753


Diffs (updated)
-

  core/src/main/scala/kafka/admin/AdminUtils.scala 
28b12c7b89a56c113b665fbde1b95f873f8624a3 
  core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala 
979992b68af3723cd229845faff81c641123bb88 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
e28979827110dfbbb92fe5b152e7f1cc973de400 
  topics.json ff011ed381e781b9a177036001d44dca3eac586f 

Diff: https://reviews.apache.org/r/28481/diff/


Testing
---


Thanks,

Dmitry Pekar



[jira] [Commented] (KAFKA-1792) change behavior of --generate to produce assignment config with fair replica distribution and minimal number of reassignments

2014-12-19 Thread Dmitry Pekar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14253483#comment-14253483
 ] 

Dmitry Pekar commented on KAFKA-1792:
-

Updated reviewboard https://reviews.apache.org/r/28481/diff/
 against branch origin/trunk

> change behavior of --generate to produce assignment config with fair replica 
> distribution and minimal number of reassignments
> -
>
> Key: KAFKA-1792
> URL: https://issues.apache.org/jira/browse/KAFKA-1792
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1792.patch, KAFKA-1792_2014-12-03_19:24:56.patch, 
> KAFKA-1792_2014-12-08_13:42:43.patch, KAFKA-1792_2014-12-19_16:48:12.patch
>
>
> The current implementation produces a fair replica distribution between the 
> specified list of brokers. Unfortunately, it doesn't take the current replica 
> assignment into account.
> So if we have, for instance, 3 brokers id=[0..2] and are going to add a fourth 
> broker id=3, --generate will create an assignment config that redistributes 
> replicas fairly across brokers [0..3] as if those partitions had been created 
> from scratch. It will not take the current replica assignment into 
> consideration and accordingly will not try to minimize the number of replica 
> moves between brokers.
> As proposed by [~charmalloc], this should be improved. The new output of the 
> improved --generate algorithm should suit the following requirements:
> - fairness of replica distribution - every broker will have R or R+1 replicas 
> assigned;
> - minimum of reassignments - the number of replica moves between brokers will 
> be minimal.
> Example.
> Consider the following replica distribution across brokers [0..3] (we just 
> added brokers 2 and 3):
> - broker - 0, 1, 2, 3 
> - replicas - 7, 6, 0, 0
> The new algorithm will produce the following assignment:
> - broker - 0, 1, 2, 3 
> - replicas - 4, 3, 3, 3
> - moves - -3, -3, +3, +3
> It will be fair and the number of moves will be 6, which is minimal for the 
> specified initial distribution.
> The scope of this issue is:
> - design an algorithm matching the above requirements;
> - implement this algorithm and unit tests;
> - test it manually using different initial assignments;
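
A minimal Scala sketch of the per-broker counting logic described above 
(illustrative only, with assumed names; the actual patch operates on full 
partition assignments, not just replica counts per broker):

{code:scala}
// Given current replica counts per broker, compute the fair target
// (R or R+1 replicas each) and the minimal number of replica moves.
def fairTargets(current: Map[Int, Int], brokers: Seq[Int]): Map[Int, Int] = {
  val total = current.values.sum            // e.g. 13 replicas in the example
  val base  = total / brokers.size          // R = 3
  val extra = total % brokers.size          // this many brokers get R + 1
  // Hand the +1 slots to the brokers that already hold the most replicas,
  // which keeps the number of moves minimal.
  val ordered = brokers.sortBy(b => -current.getOrElse(b, 0))
  ordered.zipWithIndex.map { case (b, i) =>
    b -> (if (i < extra) base + 1 else base)
  }.toMap
}

def minimalMoves(current: Map[Int, Int], target: Map[Int, Int]): Int =
  target.map { case (b, t) => math.max(t - current.getOrElse(b, 0), 0) }.sum

// Example from the description: brokers 0..3 currently hold 7, 6, 0, 0 replicas.
val current = Map(0 -> 7, 1 -> 6, 2 -> 0, 3 -> 0)
val target  = fairTargets(current, Seq(0, 1, 2, 3)) // Map(0 -> 4, 1 -> 3, 2 -> 3, 3 -> 3)
val moves   = minimalMoves(current, target)         // 6
{code}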



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1792) change behavior of --generate to produce assignment config with fair replica distribution and minimal number of reassignments

2014-12-19 Thread Dmitry Pekar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pekar updated KAFKA-1792:

Attachment: KAFKA-1792_2014-12-19_16:48:12.patch

> change behavior of --generate to produce assignment config with fair replica 
> distribution and minimal number of reassignments
> -
>
> Key: KAFKA-1792
> URL: https://issues.apache.org/jira/browse/KAFKA-1792
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1792.patch, KAFKA-1792_2014-12-03_19:24:56.patch, 
> KAFKA-1792_2014-12-08_13:42:43.patch, KAFKA-1792_2014-12-19_16:48:12.patch
>
>
> The current implementation produces a fair replica distribution between the 
> specified list of brokers. Unfortunately, it doesn't take the current replica 
> assignment into account.
> So if we have, for instance, 3 brokers id=[0..2] and are going to add a fourth 
> broker id=3, --generate will create an assignment config that redistributes 
> replicas fairly across brokers [0..3] as if those partitions had been created 
> from scratch. It will not take the current replica assignment into 
> consideration and accordingly will not try to minimize the number of replica 
> moves between brokers.
> As proposed by [~charmalloc], this should be improved. The new output of the 
> improved --generate algorithm should suit the following requirements:
> - fairness of replica distribution - every broker will have R or R+1 replicas 
> assigned;
> - minimum of reassignments - the number of replica moves between brokers will 
> be minimal.
> Example.
> Consider the following replica distribution across brokers [0..3] (we just 
> added brokers 2 and 3):
> - broker - 0, 1, 2, 3 
> - replicas - 7, 6, 0, 0
> The new algorithm will produce the following assignment:
> - broker - 0, 1, 2, 3 
> - replicas - 4, 3, 3, 3
> - moves - -3, -3, +3, +3
> It will be fair and the number of moves will be 6, which is minimal for the 
> specified initial distribution.
> The scope of this issue is:
> - design an algorithm matching the above requirements;
> - implement this algorithm and unit tests;
> - test it manually using different initial assignments;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1792) change behavior of --generate to produce assignment config with fair replica distribution and minimal number of reassignments

2014-12-19 Thread Dmitry Pekar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14253484#comment-14253484
 ] 

Dmitry Pekar commented on KAFKA-1792:
-

Merged the patch from KAFKA-1753 into this one so it is a single patch.
Now the --decommission-broker implementation also uses the modified algorithm
for fair partition reassignment.

> change behavior of --generate to produce assignment config with fair replica 
> distribution and minimal number of reassignments
> -
>
> Key: KAFKA-1792
> URL: https://issues.apache.org/jira/browse/KAFKA-1792
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1792.patch, KAFKA-1792_2014-12-03_19:24:56.patch, 
> KAFKA-1792_2014-12-08_13:42:43.patch, KAFKA-1792_2014-12-19_16:48:12.patch
>
>
> The current implementation produces a fair replica distribution between the 
> specified list of brokers. Unfortunately, it doesn't take the current replica 
> assignment into account.
> So if we have, for instance, 3 brokers id=[0..2] and are going to add a fourth 
> broker id=3, --generate will create an assignment config that redistributes 
> replicas fairly across brokers [0..3] as if those partitions had been created 
> from scratch. It will not take the current replica assignment into 
> consideration and accordingly will not try to minimize the number of replica 
> moves between brokers.
> As proposed by [~charmalloc], this should be improved. The new output of the 
> improved --generate algorithm should suit the following requirements:
> - fairness of replica distribution - every broker will have R or R+1 replicas 
> assigned;
> - minimum of reassignments - the number of replica moves between brokers will 
> be minimal.
> Example.
> Consider the following replica distribution across brokers [0..3] (we just 
> added brokers 2 and 3):
> - broker - 0, 1, 2, 3 
> - replicas - 7, 6, 0, 0
> The new algorithm will produce the following assignment:
> - broker - 0, 1, 2, 3 
> - replicas - 4, 3, 3, 3
> - moves - -3, -3, +3, +3
> It will be fair and the number of moves will be 6, which is minimal for the 
> specified initial distribution.
> The scope of this issue is:
> - design an algorithm matching the above requirements;
> - implement this algorithm and unit tests;
> - test it manually using different initial assignments;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29231: Patch for KAFKA-1824

2014-12-19 Thread Eric Olander


> On Dec. 19, 2014, 2:36 a.m., Eric Olander wrote:
> > core/src/main/scala/kafka/tools/ConsoleProducer.scala, line 269
> > 
> >
> > remove() returns the value assigned to the key being removed, so you 
> > could simply do:
> > 
> > topic = props.remove("topic")
> > 
> > instead of the getProperty() and remove()
> 
> Gwen Shapira wrote:
> Will do. Thanks for the tip, Eric :)

So, I forgot that the underlying implementation of Properties is stupid - it is 
HashMap<Object, Object> instead of the HashMap<String, String> its API covers 
for. So, the remove() is going to return an Object rather than a String, so 
you'd have to add a cast to handle that.  Anyway, that might be more convoluted 
than what you have.  And I'm sure moving away from using Properties is a far 
more invasive change.  So, sorry to have sent you down that rabbit hole.
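
For context, a minimal Scala sketch of the two approaches being compared 
(assumed variable names, not the actual ConsoleProducer code):

{code:scala}
import java.util.Properties

val props = new Properties()
props.setProperty("topic", "test")

// Approach in the current patch: read the value (already typed as String),
// then remove the key.
val viaGetThenRemove: String = props.getProperty("topic")
props.remove("topic")

// Suggested shortcut: remove() returns the previous value, but typed as
// Object/AnyRef, so a cast is needed to get a String back.
props.setProperty("topic", "test")
val viaRemove: String = props.remove("topic").asInstanceOf[String]
{code}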


- Eric


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29231/#review65582
---


On Dec. 19, 2014, 1:56 a.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29231/
> ---
> 
> (Updated Dec. 19, 2014, 1:56 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1824
> https://issues.apache.org/jira/browse/KAFKA-1824
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> fixing accidental return of "WARN Property topic is not valid"
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/ConsoleProducer.scala 
> 1061cc74fac69693836f1e75add06b09d459a764 
> 
> Diff: https://reviews.apache.org/r/29231/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Gwen Shapira
> 
>



[jira] [Created] (KAFKA-1825) leadership election state is stale and never recovers without all brokers restarting

2014-12-19 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1825:


 Summary: leadership election state is stale and never recovers 
without all brokers restarting
 Key: KAFKA-1825
 URL: https://issues.apache.org/jira/browse/KAFKA-1825
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1, 0.8.2
Reporter: Joe Stein
Priority: Critical
 Fix For: 0.8.2


I am not sure what the cause is here, but I can succinctly and repeatedly 
reproduce this issue. I tried 0.8.1.1 and 0.8.2-beta and both behave in 
the same manner.

The code to reproduce this is here: 
https://github.com/stealthly/go_kafka_client/tree/wipAsyncSaramaProducer/producers

Scenario: 3 brokers, 1 ZooKeeper, 1 client (each an AWS c3.2xlarge instance).

Create a topic.
The producer client sends in 380,000 messages/sec (attached executable).

Everything is fine until you kill -SIGTERM broker #2.

At that point the state goes bad for that topic. Even trying to use the 
console producer (with the Sarama producer off) doesn't work.

Doing a describe, the yoyoma topic looks fine; ran preferred leadership election, 
lots of issues... still can't produce... the only resolution is bouncing all 
brokers :(

root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# bin/kafka-topics.sh --zookeeper 
10.218.189.234:2181 --describe
Topic:yoyoma   PartitionCount:36   ReplicationFactor:3 Configs:
Topic: yoyoma   Partition: 0Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 1Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 2Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 3Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 4Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 5Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 6Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 7Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 8Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 9Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 10   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 11   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 12   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 13   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 14   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 15   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 16   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 17   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 18   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 19   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 20   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 21   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 22   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 23   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 24   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 25   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 26   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 27   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 28   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 29   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 30   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 31   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 32   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 33   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 34   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 35   Leader: 1   Replicas: 3,2,1 Isr: 1,3
root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# 
bin/kafka-preferred-replica-election.sh --zookeeper 10.218.189.234:2181
Successfully started preferred replica election for partitions Set([yoyoma,29], 
[yoyoma,14], [yoyoma,22], [yoyoma,15], [yoyoma,3], [yoyoma,11], [yoyoma,32], 
[yoyoma,23], [yoyoma,18], [yoyoma,25], [yoyoma,26], [yoyoma,1], [yoyoma,9], 
[yoyoma,33], [yoyoma,5], [yoyoma,12], [yoyoma,20], [yoyoma,4], [yoyoma,7], 
[yoyoma,24], [yoyoma,35], [yoyoma,10], [yoyoma,8], [yoyoma,2], [yoyoma,21], 
[yoyoma,31], [yoyoma,28], [yoyoma,19], [yoyoma,16], [yoyoma,13], [yoyoma,34], 
[yoyoma,0], [test-1210,0], [yoyoma,30],

[jira] [Updated] (KAFKA-1825) leadership election state is stale and never recovers without all brokers restarting

2014-12-19 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1825:
-
Attachment: KAFKA-1825.executable.tgz

Attached a build of the code to reproduce; run ./producer on Ubuntu.

> leadership election state is stale and never recovers without all brokers 
> restarting
> 
>
> Key: KAFKA-1825
> URL: https://issues.apache.org/jira/browse/KAFKA-1825
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1, 0.8.2
>Reporter: Joe Stein
>Priority: Critical
> Fix For: 0.8.2
>
> Attachments: KAFKA-1825.executable.tgz
>
>
> I am not sure what the cause is here, but I can succinctly and repeatedly 
> reproduce this issue. I tried 0.8.1.1 and 0.8.2-beta and both behave in 
> the same manner.
> The code to reproduce this is here: 
> https://github.com/stealthly/go_kafka_client/tree/wipAsyncSaramaProducer/producers
> Scenario: 3 brokers, 1 ZooKeeper, 1 client (each an AWS c3.2xlarge instance).
> Create a topic; the producer client sends in 380,000 messages/sec (attached 
> executable).
> Everything is fine until you kill -SIGTERM broker #2.
> At that point the state goes bad for that topic. Even trying to use the 
> console producer (with the Sarama producer off) doesn't work.
> Doing a describe, the yoyoma topic looks fine; ran preferred leadership 
> election, lots of issues... still can't produce... the only resolution is 
> bouncing all brokers :(
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# bin/kafka-topics.sh 
> --zookeeper 10.218.189.234:2181 --describe
> Topic:yoyoma  PartitionCount:36   ReplicationFactor:3 Configs:
>   Topic: yoyoma   Partition: 0Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 1Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 2Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 3Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 4Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 5Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 6Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 7Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 8Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 9Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 10   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 11   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 12   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 13   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 14   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 15   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 16   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 17   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 18   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 19   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 20   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 21   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 22   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 23   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 24   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 25   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 26   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 27   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 28   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 29   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 30   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 31   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 32   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 33   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 34   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 35   Leader: 1   Replicas: 3,2,1 Isr: 1,3
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# 
> bin/kafka-preferred-replica-election.sh --zookeeper 10.218.189.234:2181
> Successfully started preferred replica election for partitions 
> Set([yoyoma,29], [yoyoma,14], [yoyoma,22], [yoyoma,15

[jira] [Commented] (KAFKA-1806) broker can still expose uncommitted data to a consumer

2014-12-19 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14253786#comment-14253786
 ] 

Joe Stein commented on KAFKA-1806:
--

I am not sure if this is directly related, but perhaps it is in some way, so I 
wanted to bring it up. I just created 
https://issues.apache.org/jira/browse/KAFKA-1825 which is a case where the 
Sarama client is putting Kafka in a bad state.  I suspect this might be the 
same type of scenario too.

[~lokeshbirla] is there some chance of getting code to reproduce your issue 
succinctly? (Please see my KAFKA-1825 sample code to reproduce, and even a 
binary for folks to try out.) 

<< sometimes this issue goes away however I see other problem of leadership 
changes very often even when all brokers are running.
This is another issue I see in production with the Sarama client. I am 
working on hunting down the root cause, but right now the thinking is that it is 
related to https://issues.apache.org/jira/browse/KAFKA-766 and 
https://github.com/Shopify/sarama/issues/236, with 
https://github.com/Shopify/sarama/commit/03ad601663634fd75eb357fee6782653f5a9a5ed
 being a fix for it.  

> broker can still expose uncommitted data to a consumer
> --
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: lokesh Birla
>Assignee: Neha Narkhede
>
> Although the following issue: https://issues.apache.org/jira/browse/KAFKA-727
> is marked fixed, I still see this issue in 0.8.1.1. I am able to 
> reproduce the issue consistently. 
> [2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [mmetopic4,2] offset 1940029 from consumer with 
> correlation id 21 (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (1818353) less than the start offset (1940029).
> at kafka.log.LogSegment.read(LogSegment.scala:136)
> at kafka.log.Log.read(Log.scala:386)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.map(Map.scala:107)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1826) add command to delete all consumer group information for a topic in zookeeper

2014-12-19 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-1826:
---

 Summary: add command to delete all consumer group information for 
a topic in zookeeper
 Key: KAFKA-1826
 URL: https://issues.apache.org/jira/browse/KAFKA-1826
 Project: Kafka
  Issue Type: Bug
Reporter: Onur Karaman
Assignee: Onur Karaman






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1826) add command to delete all consumer group information for a topic in zookeeper

2014-12-19 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254211#comment-14254211
 ] 

Neha Narkhede commented on KAFKA-1826:
--

This is a good idea. It probably makes sense to fold this under KAFKA-1476, 
which is meant to be a generic consumer tool.

> add command to delete all consumer group information for a topic in zookeeper
> -
>
> Key: KAFKA-1826
> URL: https://issues.apache.org/jira/browse/KAFKA-1826
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1826) add command to delete all consumer group information for a topic in zookeeper

2014-12-19 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-1826:

Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-1476

> add command to delete all consumer group information for a topic in zookeeper
> -
>
> Key: KAFKA-1826
> URL: https://issues.apache.org/jira/browse/KAFKA-1826
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 29277: add command to delete all consumer group information for a topic in zookeeper

2014-12-19 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29277/
---

Review request for kafka.


Bugs: KAFKA-1826
https://issues.apache.org/jira/browse/KAFKA-1826


Repository: kafka


Description
---

add command to delete all consumer group information for a topic in zookeeper
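
A rough illustration of what such a command could do (hypothetical helper name 
and 0.8-era ZooKeeper consumer paths assumed; see the diff below for the real 
implementation):

{code:scala}
import org.I0Itec.zkclient.ZkClient
import scala.collection.JavaConverters._

// Delete a topic's offsets and ownership entries from every consumer group.
def deleteAllConsumerGroupInfoForTopic(zkClient: ZkClient, topic: String): Unit = {
  val groups = zkClient.getChildren("/consumers").asScala
  for (group <- groups) {
    zkClient.deleteRecursive(s"/consumers/$group/offsets/$topic")
    zkClient.deleteRecursive(s"/consumers/$group/owners/$topic")
  }
}
{code}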


Diffs
-

  core/src/main/scala/kafka/admin/AdminUtils.scala 
28b12c7b89a56c113b665fbde1b95f873f8624a3 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
285c0333ff43543d3e46444c1cd9374bb883bb59 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 
  
core/src/test/scala/unit/kafka/admin/DeleteAllConsumerGroupInfoForTopicInZKTest.scala
 PRE-CREATION 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
94d0028d8c4907e747aa8a74a13d719b974c97bf 

Diff: https://reviews.apache.org/r/29277/diff/


Testing
---


Thanks,

Onur Karaman



[jira] [Commented] (KAFKA-1826) add command to delete all consumer group information for a topic in zookeeper

2014-12-19 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254219#comment-14254219
 ] 

Onur Karaman commented on KAFKA-1826:
-

Created reviewboard https://reviews.apache.org/r/29277/diff/
 against branch origin/trunk

> add command to delete all consumer group information for a topic in zookeeper
> -
>
> Key: KAFKA-1826
> URL: https://issues.apache.org/jira/browse/KAFKA-1826
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Attachments: KAFKA-1826.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1826) add command to delete all consumer group information for a topic in zookeeper

2014-12-19 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-1826:

Attachment: KAFKA-1826.patch

> add command to delete all consumer group information for a topic in zookeeper
> -
>
> Key: KAFKA-1826
> URL: https://issues.apache.org/jira/browse/KAFKA-1826
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Attachments: KAFKA-1826.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1826) add command to delete all consumer group information for a topic in zookeeper

2014-12-19 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-1826:

Status: Patch Available  (was: Open)

> add command to delete all consumer group information for a topic in zookeeper
> -
>
> Key: KAFKA-1826
> URL: https://issues.apache.org/jira/browse/KAFKA-1826
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Attachments: KAFKA-1826.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1816) Topic configs reset after partition increase

2014-12-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-1816.
-
Resolution: Duplicate

> Topic configs reset after partition increase
> 
>
> Key: KAFKA-1816
> URL: https://issues.apache.org/jira/browse/KAFKA-1816
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Andrew Jorgensen
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
>
> If you alter a topic to increase the number of partitions, the existing 
> configs for that topic are erased. This can be 
> reproduced by doing the following:
> {code:none}
> $ bin/kafka-topics.sh --create --zookeeper localhost --topic test_topic 
> --partitions 5 --config retention.ms=3600
> $ bin/kafka-topics.sh --describe --zookeeper localhost --topic test_topic
> > Topic:test_topic   PartitionCount:5   ReplicationFactor:1 
> > Configs:retention.ms=3600
> $ bin/kafka-topics.sh --alter --zookeeper localhost --topic test_topic 
> --partitions 10
> $ bin/kafka-topics.sh --describe --zookeeper localhost --topic test_topic
> > Topic:test_topic   PartitionCount:10   ReplicationFactor:1 
> > Configs:
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1806) broker can still expose uncommitted data to a consumer

2014-12-19 Thread lokesh Birla (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254541#comment-14254541
 ] 

lokesh Birla commented on KAFKA-1806:
-

Hi Joe,

Your problem described in https://issues.apache.org/jira/browse/KAFKA-1825 is 
very similar. I get leadership changes quite often even with just 3 partitions. 
I also found that using the Sarama fix, where I restrict the batch size to 
1000, does NOT resolve the problem. I tried to slow down the producer by using 
batchsize=1000. Even at 72k messages/sec (msg size 150 bytes), I still see the 
leadership change issue and broker offset errors. 

I have only filed the Sarama issue: https://github.com/Shopify/sarama/issues/236. 

--Lokesh

> broker can still expose uncommitted data to a consumer
> --
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: lokesh Birla
>Assignee: Neha Narkhede
>
> Although the following issue: https://issues.apache.org/jira/browse/KAFKA-727
> is marked fixed, I still see this issue in 0.8.1.1. I am able to 
> reproduce the issue consistently. 
> [2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [mmetopic4,2] offset 1940029 from consumer with 
> correlation id 21 (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (1818353) less than the start offset (1940029).
> at kafka.log.LogSegment.read(LogSegment.scala:136)
> at kafka.log.Log.read(Log.scala:386)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.map(Map.scala:107)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1806) broker can still expose uncommitted data to a consumer

2014-12-19 Thread lokesh Birla (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254551#comment-14254551
 ] 

lokesh Birla commented on KAFKA-1806:
-

Hello Neha,

I did further debugging on this by turning trace on and found the following.

1. Broker 1 and Broker 3 have different offsets for partition 0 of topic 
mmetopic1. Broker 1 has a higher offset than Broker 3.
2. Due to a Kafka leadership change, Broker 3, which has the lower offset, 
became the leader, and when Broker 1 sends a fetch request with the higher 
offset, an error comes from Broker 3 since it does NOT have that higher offset.
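
The gist of the check that produces the error in the traces below, as a 
simplified Scala sketch (assumed names; not the actual kafka.log.Log code):

{code:scala}
// A follower asking for an offset beyond the new leader's log end is rejected,
// and then resets its fetch offset to the leader's latest offset.
case class LeaderLog(logStartOffset: Long, logEndOffset: Long) {
  def read(fetchOffset: Long): Unit =
    if (fetchOffset > logEndOffset || fetchOffset < logStartOffset)
      throw new RuntimeException(
        s"Request for offset $fetchOffset but we only have log segments " +
        s"in the range $logStartOffset to $logEndOffset.")
}

// Broker 3 (new leader) ends at 1329827; Broker 1 (old leader) fetches 1330329.
// LeaderLog(0L, 1329827L).read(1330329L)  // would throw, matching the trace
{code}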

Here is the important trace information.

Broker 1 (log)

[2014-09-02 06:53:55,466] DEBUG Partition [mmetopic1,0] on broker 1: Old hw for 
partition [mmetopic1,0] is 1330329. New hw is 1330329. All leo's are 
1371212,1330329,1331850 (kafka.cluster.Partition)
[2014-09-02 06:53:55,537] INFO Truncating log mmetopic1-0 to offset 1329827. (kafka.log.Log)
[2014-09-02 06:53:55,477] INFO [ReplicaFetcherManager on broker 1] Added 
fetcher for partitions ArrayBuffer, [[mmetopic1,0], initOffset 1330329 to 
broker id:3,host:10.1.130.1,port:9092] ) (kafka.server.ReplicaFetcherManager
[2014-09-02 06:53:55,479] TRACE [KafkaApi-1] Handling request: 
Name:UpdateMetadataRequest;Version:0;Controller:2;ControllerEpoch:2;CorrelationId:5;ClientId:id_2-host_null-port_9092;AliveBrokers:id:3,host:10.1.130.1,port:9092,id:2,host:10.1.129.1,port:9092,id:1,host:10.1.128.1,port:9092;PartitionState:[mmetopic1,0]
 -> 
(LeaderAndIsrInfo:(Leader:3,ISR:3,2,LeaderEpoch:2,ControllerEpoch:2),ReplicationFactor:3),AllReplicas:1,2,3)
 from client: /10.1.128.1:59805 (kafka.server.KafkaApis)
[2014-09-02 06:53:55,490] TRACE [ReplicaFetcherThread-0-3], issuing to broker 3 
of fetch request Name: FetchRequest; Version: 0; CorrelationId: 3687; ClientId: 
ReplicaFetcherThread-0-3; ReplicaId: 1; MaxWait: 500 ms; MinBytes: 1 bytes; 
RequestInfo: [mmetopic1,0] -> PartitionFetchInfo(1330329,2097152) 
(kafka.server.ReplicaFetcherThread)
[2014-09-02 06:53:55,543] WARN [ReplicaFetcherThread-0-3], Replica 1 for 
partition [mmetopic1,0] reset its fetch offset to current leader 3's latest 
offset 1329827 (kafka.server.ReplicaFetcherThread)
[2014-09-02 06:53:55,543] ERROR [ReplicaFetcherThread-0-3], Current offset 
1330329 for partition [mmetopic1,0] out of range; reset offset to 1329827 
(kafka.server.ReplicaFetcherThread)

Broker 3 (log)
[2014-09-02 06:53:06,525] TRACE Setting log end offset for replica 2 for 
partition [mmetopic1,0] to 1330329 (kafka.cluster.Replica)
[2014-09-02 06:53:06,526] DEBUG Partition [mmetopic1,0] on broker 3: Old hw for 
partition [mmetopic1,0] is 1329827. New hw is 1329827. All leo's are 
1329827,1330329 (kafka.cluster.Partition)




=

[2014-09-02 06:53:06,530] ERROR [KafkaApi-3] Error when processing fetch 
request for partition [mmetopic1,0] offset 1330329 from follower with 
correlation id 3686 (kafka.server.KafkaApis)
kafka.common.OffsetOutOfRangeException: Request for offset 1330329 but we only 
have log segments in the range 0 to 1329827.
at kafka.log.Log.read(Log.scala:380)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
at scala.collection.immutable.Map$Map3.foreach(Map.scala:164)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
at scala.collection.immutable.Map$Map3.map(Map.scala:144)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:437)
at kafka.server.KafkaApis.handle(KafkaApis.scala:186)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
at java.lang.Thread.run(Thread.java:745)

==



> broker can still expose uncommitted data to a consumer
> --
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: lokesh Birla
>Assignee: Neha Narkhede
>
> Although following issue: https://issues.apache.org/jira/browse/KAFKA-727
> is marked fixed but I still se

[jira] [Updated] (KAFKA-1806) broker fetch request uses old leader offset which is higher than current leader offset causes error

2014-12-19 Thread lokesh Birla (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lokesh Birla updated KAFKA-1806:

Summary: broker fetch request uses old leader offset which is higher than 
current leader offset causes error  (was: broker can still expose uncommitted 
data to a consumer)

> broker fetch request uses old leader offset which is higher than current 
> leader offset causes error
> ---
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: lokesh Birla
>Assignee: Neha Narkhede
>
> Although the following issue: https://issues.apache.org/jira/browse/KAFKA-727
> is marked fixed, I still see this issue in 0.8.1.1. I am able to 
> reproduce the issue consistently. 
> [2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [mmetopic4,2] offset 1940029 from consumer with 
> correlation id 21 (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (1818353) less than the start offset (1940029).
> at kafka.log.LogSegment.read(LogSegment.scala:136)
> at kafka.log.Log.read(Log.scala:386)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.map(Map.scala:107)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)