junrao commented on code in PR #21005:
URL: https://github.com/apache/kafka/pull/21005#discussion_r2875416148
##########
core/src/main/scala/kafka/server/ConfigAdminManager.scala:
##########
@@ -112,48 +112,33 @@ class ConfigAdminManager(nodeId: Int,
})
request.resources().forEach(resource => {
if (!results.containsKey(resource)) {
-        val resourceType = ConfigResource.Type.forId(resource.resourceType())
-        val configResource = new ConfigResource(resourceType, resource.resourceName())
-        try {
-          if (containsDuplicates(resource.configs().asScala.map(_.name()))) {
-            throw new InvalidRequestException("Error due to duplicate config keys")
-          }
-          val nullUpdates = new util.ArrayList[String]()
-          resource.configs().forEach { config =>
-            if (config.configOperation() != AlterConfigOp.OpType.DELETE.id() &&
-              config.value() == null) {
-              nullUpdates.add(config.name())
+        processConfigResource(
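The validation being factored out of this method, rejecting duplicate config keys and rejecting null values on anything other than a DELETE operation, can be sketched as below. This is a minimal Java sketch: the `ConfigEntry` record and the `DELETE_OP` constant are simplified stand-ins for Kafka's internal request classes, not the real API.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConfigValidationSketch {
    // Simplified stand-in for AlterConfigOp.OpType.DELETE.id()
    static final byte DELETE_OP = 1;

    // Simplified stand-in for one config entry in the alter-configs request
    record ConfigEntry(String name, String value, byte op) {}

    // Rejects a resource whose request lists the same config key twice
    static void checkDuplicates(List<ConfigEntry> configs) {
        Set<String> seen = new HashSet<>();
        for (ConfigEntry c : configs) {
            if (!seen.add(c.name())) {
                throw new IllegalArgumentException("Error due to duplicate config keys");
            }
        }
    }

    // Collects configs that supply a null value for a non-DELETE operation
    static List<String> nullUpdates(List<ConfigEntry> configs) {
        List<String> bad = new ArrayList<>();
        for (ConfigEntry c : configs) {
            if (c.op() != DELETE_OP && c.value() == null) {
                bad.add(c.name());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        List<ConfigEntry> configs = List.of(
            new ConfigEntry("retention.ms", null, (byte) 0),   // SET with null value: invalid
            new ConfigEntry("cleanup.policy", null, DELETE_OP) // DELETE with null value: fine
        );
        checkDuplicates(configs);                 // passes: no duplicate keys
        System.out.println(nullUpdates(configs)); // prints [retention.ms]
    }
}
```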
Review Comment:
```
1. Cordon the target folder via the Admin API.
2. Move out all replicas.
3. Shut down the broker.
4. Update log.dirs in the local configuration file.
5. Uncordon the log directory via the Admin API by connecting to the controller.
6. Restart the broker.
```
Hmm, I guess we still can't do the same validation in the controller.
Otherwise, step 5 will fail.
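The six-step migration above can be sketched as a runnable outline. This is purely illustrative: the `AdminStub` class and its `cordonLogDir`/`moveReplicasOff`/`uncordonLogDir` methods are hypothetical stand-ins for whatever cordoning API lands in the Admin client; they only record the order of operations.

```java
import java.util.ArrayList;
import java.util.List;

public class LogDirMigrationSketch {
    // Hypothetical stand-in for the Admin client; the cordon/uncordon calls
    // are assumptions sketching the proposed API, not real Kafka methods.
    static class AdminStub {
        final List<String> actions = new ArrayList<>();
        void cordonLogDir(String dir)    { actions.add("cordon " + dir); }
        void moveReplicasOff(String dir) { actions.add("drain " + dir); }
        void uncordonLogDir(String dir)  { actions.add("uncordon " + dir); }
    }

    // Executes the six steps in order and returns the recorded actions
    static List<String> migrate(AdminStub admin, String oldDir) {
        admin.cordonLogDir(oldDir);           // 1. cordon the target folder
        admin.moveReplicasOff(oldDir);        // 2. move out all replicas
        admin.actions.add("broker shutdown"); // 3. shut down the broker
        admin.actions.add("edit log.dirs");   // 4. update log.dirs locally
        admin.uncordonLogDir(oldDir);         // 5. uncordon via the controller
        admin.actions.add("broker restart");  // 6. restart the broker
        return admin.actions;
    }

    public static void main(String[] args) {
        System.out.println(migrate(new AdminStub(), "/data/old"));
    }
}
```

Step 5 is the one under discussion: if the controller validated cordoned directories the same way the broker does, the uncordon request would be rejected before the broker restarts.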