[jira] [Commented] (KAFKA-813) Minor cleanup in Controller

2013-03-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607697#comment-13607697
 ] 

Jun Rao commented on KAFKA-813:
---

Thanks for patch v2. Some more comments.

20. PartitionStateMachine.initializeLeaderAndIsrForPartition(): why is the 
following line removed?
  partitionState.put(topicAndPartition, OnlinePartition)

21. PartitionNoReplicaOnlineException seems long. Is it better to use 
NoReplicaOnlineException?

22. KafkaController: To compute the gauge OfflinePartitionsCount, we need to 
synchronize on controller lock.
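The pattern in comment 22 — computing a metrics gauge only while holding the controller lock — could be sketched as below. This is an illustrative sketch, not Kafka's actual code: ControllerGaugeSketch, controllerLock, and the state strings are hypothetical stand-ins.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of comment 22: the gauge's value is computed while
// holding the controller lock, so the reported count is consistent with any
// concurrent partition state transitions. All names are illustrative.
class ControllerGaugeSketch {
    private final Object controllerLock = new Object();
    private final Map<String, String> partitionState = new HashMap<>();

    // A metrics library would poll this supplier whenever the gauge is read.
    Supplier<Long> offlinePartitionsCountGauge() {
        return () -> {
            synchronized (controllerLock) {
                return partitionState.values().stream()
                        .filter(s -> s.equals("OfflinePartition"))
                        .count();
            }
        };
    }

    void setState(String partition, String state) {
        synchronized (controllerLock) {
            partitionState.put(partition, state);
        }
    }
}
```

Without the synchronized block, the gauge could observe the state map mid-transition and report a count that never actually existed.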



> Minor cleanup in Controller
> ---
>
> Key: KAFKA-813
> URL: https://issues.apache.org/jira/browse/KAFKA-813
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Swapnil Ghike
>Priority: Blocker
>  Labels: kafka-0.8
> Fix For: 0.8
>
> Attachments: kafka-813-v1.patch, kafka-813-v2.patch
>
>
> Before starting work on delete topic support, uploading a patch first to 
> address some minor hiccups that touch a bunch of files:
> 1. Change PartitionOfflineException to PartitionUnavailableException because 
> in the partition state machine we mark a partition offline when its leader is 
> down, whereas the PartitionOfflineException is thrown when all the assigned 
> replicas of the partition are down.
> 2. Change PartitionOfflineRate to UnavailablePartitionRate
> 3. Remove default leader selector from partition state machine's 
> handleStateChange. We can specify null as default when we don't need to use a 
> leader selector.
> 4. Include controller info in the client id of LeaderAndIsrRequest.
> 5. Rename controllerContext.allleaders to something more meaningful - 
> partitionLeadershipInfo.
> 6. We don't need to put the partition in the OnlinePartition state in the 
> partition state machine's initializeLeaderAndIsrForPartition; the state 
> change occurs in handleStateChange.
> 7. Add todo in handleStateChanges
> 8. Left a comment above ReassignedPartitionLeaderSelector that reassigned 
> replicas are already in the ISR (this is not true for other leader 
> selectors), renamed the vals in the selector.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-813) Minor cleanup in Controller

2013-03-20 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607705#comment-13607705
 ] 

Neha Narkhede commented on KAFKA-813:
-

Thanks for patch v2 -

1. KafkaController
1.1 The change to OfflinePartitionsCount looks good. However, there is a 
distinction between a partition whose leader is not alive while other 
replicas are (so leader election will happen), and a partition for which all 
replicas are dead. In the latter case, there can be no leader for that 
partition, which is a much more dangerous state for a partition to be in. I 
suggest two metrics: OfflinePartitionsCount to indicate the former and 
UnavailablePartitionsCount to indicate the latter.
1.2 I wonder why ActiveControllerCount and OfflinePartitionsCount are not 
part of ControllerStats?
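The distinction drawn in 1.1 can be sketched as follows. The types and names here (Partition, liveBrokers, the two count methods) are hypothetical stand-ins, not the actual controller data structures: a partition counts as "offline" when its leader is down but some assigned replica is alive, and "unavailable" when every assigned replica is down.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the two proposed metrics. "Offline": leader dead
// but at least one assigned replica alive, so leader election can succeed.
// "Unavailable": every assigned replica dead, so no leader is possible.
class PartitionHealthSketch {
    record Partition(int leader, List<Integer> assignedReplicas) {}

    static long offlinePartitionsCount(List<Partition> partitions,
                                       Set<Integer> liveBrokers) {
        return partitions.stream()
                .filter(p -> !liveBrokers.contains(p.leader()))
                .filter(p -> p.assignedReplicas().stream()
                              .anyMatch(liveBrokers::contains))
                .count();
    }

    static long unavailablePartitionsCount(List<Partition> partitions,
                                           Set<Integer> liveBrokers) {
        return partitions.stream()
                .filter(p -> p.assignedReplicas().stream()
                              .noneMatch(liveBrokers::contains))
                .count();
    }
}
```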

2. NoOpLeaderSelector
Minor code style suggestion - return is not required here.

3. PartitionStateMachine
We don't have to define the noOpLeaderSelector in the controller since it is 
used only in PartitionStateMachine.handleStateChanges(). Let's move it there. 
The reason offlineLeaderSelector lives in the controller is that both the 
controller and the partition state machine access it.

Nit pick - Can we change PartitionNoReplicaOnlineException to 
NoReplicaForPartitionException ? :)



[jira] [Comment Edited] (KAFKA-813) Minor cleanup in Controller

2013-03-20 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607705#comment-13607705
 ] 

Neha Narkhede edited comment on KAFKA-813 at 3/20/13 3:22 PM:
--

Thanks for patch v2 -

1. KafkaController
1.1 The change to OfflinePartitionsCount looks good. However, there is a 
distinction between a partition whose leader is not alive while other 
replicas are (so leader election will happen), and a partition for which all 
replicas are dead. In the latter case, there can be no leader for that 
partition, which is a much more dangerous state for a partition to be in. I 
suggest two metrics: OfflinePartitionsCount to indicate the former and 
UnavailablePartitionsCount to indicate the latter.
1.2 I wonder why ActiveControllerCount and OfflinePartitionsCount are not 
part of ControllerStats?

2. NoOpLeaderSelector
Minor code style suggestion - return is not required here.

3. PartitionStateMachine
We don't have to define the noOpLeaderSelector in the controller since it is 
used only in PartitionStateMachine.handleStateChanges(). Let's move it there. 
The reason offlineLeaderSelector lives in the controller is that both the 
controller and the partition state machine access it.

Nit pick - Can we change PartitionNoReplicaOnlineException to 
NoReplicaOnlineForPartitionException or simply NoReplicaOnlineException ? :)

  was (Author: nehanarkhede):
Thanks for patch v2 -

1. KafkaController
1.1 The change to OfflinePartitionsCount looks good. However, there is a 
distinction between a partition whose leader is not alive but other replicas 
are so leader election will happen and a partition for which all replicas are 
dead. In the latter case, there can be no leader for that partition which is a 
much more dangerous state for a partition to be in. I suggest two metrics, 
OfflinePartitionsCount to indicate the former and UnavailablePartitionsCount to 
indicate the latter
1.2 I wonder why ActiveControllerCount and OfflinePartitionsCount are not part 
of ControllerStats ?

2. NoOpLeaderSelector
Minor code style suggestion - return is not required here.

3. PartitionStateMachine
We don't have to define the noOpLeaderSelector in the controller since it is 
used only in PartitionStateMachine.handleStateChanges(). Let's move it there. 
The reason offlineLeaderSelector is there since both the controller and the 
partition state machine access it.

Nit pick - Can we change PartitionNoReplicaOnlineException to 
NoReplicaForPartitionException ? :)
  


[jira] [Commented] (KAFKA-155) Support graceful Decommissioning of Broker

2013-03-20 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607766#comment-13607766
 ] 

Neha Narkhede commented on KAFKA-155:
-

The ShutdownBroker admin command does not completely achieve this. It merely 
moves the leaders off that broker and then shuts the broker down. But then 
some of the partitions could be under-replicated. What really solves the 
problem is the preferred replica election admin command. This allows you to 
add new brokers to an existing cluster, move some partitions off the brokers 
to be decommissioned, and then shut down those brokers by killing them.

> Support graceful Decommissioning of Broker
> --
>
> Key: KAFKA-155
> URL: https://issues.apache.org/jira/browse/KAFKA-155
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sharad Agarwal
> Fix For: 0.8
>
>
> There should be a graceful way of decommissioning a broker so that there is 
> absolutely zero data loss. Decommissioning is not necessarily related to 
> replication (KAFKA-50).
> There should be a way to take the broker out of the cluster only on the 
> produce side. Consumers should be able to continue pulling data. When the 
> administrator is sure that all data has been consumed, the broker node can 
> be removed permanently.
> Same would be useful for rolling upgrades without any message loss.



[jira] [Updated] (KAFKA-155) Support graceful Decommissioning of Broker

2013-03-20 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-155:


Affects Version/s: 0.7



[jira] [Commented] (KAFKA-155) Support graceful Decommissioning of Broker

2013-03-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607798#comment-13607798
 ] 

Jun Rao commented on KAFKA-155:
---

You mean the partition reassignment tool, not preferred replication election 
tool, right?



[jira] [Commented] (KAFKA-155) Support graceful Decommissioning of Broker

2013-03-20 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607807#comment-13607807
 ] 

Neha Narkhede commented on KAFKA-155:
-

Oh, right, I meant the partition reassignment tool :)



[jira] [Commented] (KAFKA-820) Topic metadata request handling fails to return all metadata about replicas

2013-03-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607829#comment-13607829
 ] 

Jun Rao commented on KAFKA-820:
---

Thanks for the patch. Not sure that I understand the changes in 
AdminUtils.getBrokerInfoFromCache(). It seems to me that with or without the 
patch, the method will throw an exception if at least one of the items in 
brokerIds can't be converted to a Broker object, in which case the return 
value is irrelevant.

> Topic metadata request handling fails to return all metadata about replicas
> ---
>
> Key: KAFKA-820
> URL: https://issues.apache.org/jira/browse/KAFKA-820
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>Priority: Blocker
>  Labels: kafka-0.8
> Attachments: kafka-820-v1.patch
>
>
> The admin utility that fetches topic metadata needs improvement in error 
> handling. While fetching replica and isr broker information, if one of the 
> replicas is offline, it fails to fetch the replica and isr info for the rest 
> of them. This creates confusion on the client since it seems to the client 
> the rest of the brokers are offline as well.



[jira] [Commented] (KAFKA-811) Fix clientId in migration tool

2013-03-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607838#comment-13607838
 ] 

Jun Rao commented on KAFKA-811:
---

Committed v2 to 0.8.

> Fix clientId in migration tool
> --
>
> Key: KAFKA-811
> URL: https://issues.apache.org/jira/browse/KAFKA-811
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Swapnil Ghike
>Priority: Blocker
>  Labels: kafka-0.8
> Fix For: 0.8
>
> Attachments: kafka-811.patch, kafka-811-v2.patch
>
>
> Append producer threadId to the clientId passed by the user.



[jira] [Commented] (KAFKA-820) Topic metadata request handling fails to return all metadata about replicas

2013-03-20 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607868#comment-13607868
 ] 

Neha Narkhede commented on KAFKA-820:
-

The return value is not irrelevant. As I explained, even if one broker is 
down, it aborts sending the broker data for the rest of the brokers. The 
impression this gives the client is that all the brokers are dead. The reason 
it throws the exception at the end is that we need to send the appropriate 
error code to the client if at least one broker is down.
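The behavior under discussion could be sketched like this. BrokerInfoSketch, its Broker and Result types, and the cache map are hypothetical stand-ins, not Kafka's actual API: rather than aborting on the first broker ID that cannot be resolved, the lookup collects info for every broker it can resolve and separately records that a failure happened, so the caller can still send an error code alongside the partial result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the fix discussed for getBrokerInfoFromCache():
// resolve as many brokers as possible instead of failing on the first miss,
// then report the failure alongside the partial result. Names are
// illustrative, not Kafka's actual code.
class BrokerInfoSketch {
    record Broker(int id, String host) {}
    record Result(List<Broker> brokers, boolean hadFailures) {}

    static Result brokerInfoFromCache(Map<Integer, Broker> cache,
                                      List<Integer> brokerIds) {
        List<Broker> found = new ArrayList<>();
        boolean failures = false;
        for (int id : brokerIds) {
            Broker b = cache.get(id);
            if (b != null) found.add(b);  // keep every broker we can resolve
            else failures = true;         // remember at least one lookup failed
        }
        return new Result(found, failures);
    }
}
```

The key design point in the thread: throwing before returning anything makes the live brokers look dead to the client; returning the partial list plus an error flag does not.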



[jira] [Commented] (KAFKA-820) Topic metadata request handling fails to return all metadata about replicas

2013-03-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607877#comment-13607877
 ] 

Jun Rao commented on KAFKA-820:
---

When an exception is thrown in getBrokerInfoFromCache(), no value is returned, 
right?



[jira] [Updated] (KAFKA-820) Topic metadata request handling fails to return all metadata about replicas

2013-03-20 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-820:


Attachment: kafka-820-v2.patch

You are right, attached v2 patch to fix the issue



[jira] [Commented] (KAFKA-813) Minor cleanup in Controller

2013-03-20 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13608130#comment-13608130
 ] 

Swapnil Ghike commented on KAFKA-813:
-

A couple of comments/questions before uploading the next patch:

[~junrao]: 
20. Because initializeLeaderAndIsrForPartition() is called in only one place, 
handleStateChange(), and handleStateChange() itself puts the partition in the 
Online state.
21. Ok, renaming to NoReplicaOnlineException.
22. Ok, also please read below.

[~nehanarkhede]: 
1.1 I agree that having all the replicas assigned to a partition down is a 
dangerous situation. I guess that will be reflected if OfflinePartitionsCount 
is non-zero for a while, and then we can dig into ZK. An 
OfflinePartitionsCount that remains non-zero for a while could also indicate 
another dangerous situation, where the replicas are up but a leader is not 
being assigned. So I think the clues provided by an OfflinePartitionsCount 
that remains non-zero for a few minutes subsume the clues given by 
UnavailablePartitionsCount, and we may not specifically need the latter. 
Thoughts?

2. Argh, yes, thanks.

3. I agree that Controller does not need to know about a useless leader 
selector. Moving NoOpLeaderSelector object to partition state machine. 

[~junrao], [~nehanarkhede]: 1.2 I believe the reason was to initialize the 
gauges when the controller object is created. However, we can move the gauges 
to ControllerStats and force their initialization like server.registerStats(). 
So perhaps it would be good if we could decide which JMX beans need to be 
force-initialized and at which places in the code; accordingly, I can make the 
relevant changes.




[jira] [Commented] (KAFKA-813) Minor cleanup in Controller

2013-03-20 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13608462#comment-13608462
 ] 

Neha Narkhede commented on KAFKA-813:
-

1.1 Makes sense, we can just live with the existing counter for now.
Also, let's register the controller stats at startup. That way, they at least 
show a value of 0 instead of NaN.
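The eager-registration idea could be sketched as follows. The registry type, gauge names, and methods here are illustrative stand-ins, not the actual metrics library API: registering the gauges when the controller starts means JMX shows 0 from the beginning, rather than nothing (or NaN) until first use.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of "register the controller stats at startup": gauges
// are created eagerly with a default of 0, so a monitoring system sees 0
// rather than NaN before the controller has done any work. Illustrative only.
class ControllerStatsSketch {
    private final Map<String, Supplier<Long>> gauges = new LinkedHashMap<>();

    // Called once at startup, before any state changes happen.
    void registerStats() {
        gauges.putIfAbsent("ActiveControllerCount", () -> 0L);
        gauges.putIfAbsent("OfflinePartitionsCount", () -> 0L);
    }

    // Later, the real supplier replaces the zero default.
    void replaceGauge(String name, Supplier<Long> supplier) {
        gauges.put(name, supplier);
    }

    long read(String name) {
        return gauges.getOrDefault(name, () -> 0L).get();
    }
}
```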



[jira] [Updated] (KAFKA-813) Minor cleanup in Controller

2013-03-20 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-813:


Attachment: kafka-813-v3.patch

Patch v3 takes care of the comments made above.



[jira] [Comment Edited] (KAFKA-330) Add delete topic support

2013-03-20 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13608564#comment-13608564
 ] 

Swapnil Ghike edited comment on KAFKA-330 at 3/21/13 2:47 AM:
--

Delete topic admin path schema updated at 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper

  was (Author: swapnilghike):
Delete topic admin patch schema updated at 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper
  
> Add delete topic support 
> -
>
> Key: KAFKA-330
> URL: https://issues.apache.org/jira/browse/KAFKA-330
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Swapnil Ghike
>Priority: Blocker
>  Labels: features, kafka-0.8, p2, project
>
> One proposal of this API is here - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+replication+detailed+design+V2#KafkareplicationdetaileddesignV2-Deletetopic



[jira] [Commented] (KAFKA-330) Add delete topic support

2013-03-20 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13608564#comment-13608564
 ] 

Swapnil Ghike commented on KAFKA-330:
-

Delete topic admin patch schema updated at 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper
