Thanks for the insights, Greg. It would be great if the CRUSH rule for an EC pool could be changed dynamically… but if that's not the case, the troubleshooting doc also suggests adding more OSDs, and we have another 8 OSDs (one per node) we could move into the default root. However, just to clarify the point of adding OSDs: the current EC profile has a failure domain of 'host', so will adding more OSDs still improve the odds of CRUSH finding a good mapping before it gives up?
BTW, I'm a little concerned about moving all 8 OSDs at once, as we're skinny on RAM and the EC pools seem to need more RAM than replicated pools do. Given the RAM constraint, is adding 2-4 OSDs at a time the recommendation (other than adding more RAM)?

-- Paul Evans

This looks like the standard risk of using a pseudo-random algorithm: you need to "randomly" map 8 pieces into 8 slots. Sometimes the CRUSH calculation returns the same 7 slots so many times in a row that it simply fails to find all 8 of them within the retry limits that are currently set. If you look through the list archives you'll see we've discussed this a few times, especially Loïc in the context of erasure coding. See http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon for the fix. But I think that doc is wrong, and you can change the CRUSH rule in use without creating a new pool, right, Loïc?
-Greg
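For reference, the fix that troubleshooting doc describes amounts to raising the rule's retry budget by editing the decompiled CRUSH map, roughly as follows. The file names, rule id, tries value, and test range below are illustrative, so adjust them for your cluster:

  # dump and decompile the current CRUSH map
  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

  # in the erasure-coded rule, add a line such as
  #   step set_choose_tries 100
  # before the "step take default" line, then recompile
  crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new

  # check that the rule can now find 8 OSDs across a range of test inputs
  crushtool -i /tmp/crushmap.new --test --show-bad-mappings --rule 1 --num-rep 8 --min-x 1 --max-x 1024

  # inject the updated map
  ceph osd setcrushmap -i /tmp/crushmap.new

If --show-bad-mappings prints nothing for the tested range, the rule is able to place all 8 chunks.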
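On the other two points: moving the spare OSDs under the default root can be done a couple at a time, letting recovery settle in between, and an existing pool can be repointed at a different CRUSH rule without recreating it, which is presumably what Greg means, assuming the release in question accepts that for an EC pool. A rough sketch, where the OSD id, weight, host name, pool name, and ruleset id are all placeholders:

  # relocate one spare OSD into the default root under its host bucket,
  # then wait for "ceph -s" to report the cluster healthy before moving the next
  ceph osd crush create-or-move osd.24 1.0 root=default host=node1

  # point the existing EC pool at a different rule
  # (older releases use "crush_ruleset <id>"; newer ones use "crush_rule <name>")
  ceph osd pool set ecpool crush_ruleset 1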