My understanding is that the exact same objects would move back to the OSD if the weight went 1 -> 0 -> 1, given the same cluster state and the same object names. CRUSH is deterministic, so that is the almost certain result.
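If you want to convince yourself, something along these lines should show it (a rough sketch only; "rbd" and "test-object" are placeholder pool/object names, and it assumes osd.10's original CRUSH weight was 1.0 -- substitute your own values):

    # Record where a sample object maps today (note the acting set, e.g. [10,3,7])
    ceph osd map rbd test-object

    # Drain the OSD, wait for recovery to finish, then restore the weight
    ceph osd crush reweight osd.10 0      # data moves off osd.10
    ceph -s                               # repeat until all PGs are active+clean
    ceph osd crush reweight osd.10 1.0    # assumes the original weight was 1.0

    # With the same CRUSH map, weights and object names, CRUSH should
    # compute the same placement as before
    ceph osd map rbd test-object

So you would end up with essentially the same data on osd.10, not a better spread.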
On Mon, Jan 15, 2018 at 2:46 PM, lists <li...@merit.unu.edu> wrote:
> Hi Wes,
>
> On 15-1-2018 20:32, Wes Dillingham wrote:
>> I don't hear a lot of people discuss using xfs_fsr on OSDs, and going over
>> the mailing list history it seems to have been brought up very infrequently
>> and never as a suggestion for regular maintenance. Perhaps it's not needed.
>
> True, it's just something we've always done on all our xfs filesystems, to
> keep them speedy and snappy. I've disabled it, and then it doesn't happen.
>
> Perhaps I'll keep it disabled.
>
> But on this last question, about data distribution across OSDs:
>
>> In that case, how about reweighting that osd.10 to "0", waiting until
>> all data has moved off osd.10, and then setting it back to "1"?
>> Would this result in *exactly* the same situation as before, or
>> would it at least cause the data to spread better across
>> the other OSDs?
>
> Would it work like that? Or would setting it back to "1" give me again the
> same data on this OSD that we started with?
>
> Thanks for your comments,
>
> MJ
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Respectfully,
Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Senior CyberInfrastructure Storage Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 204