Create a CRUSH rule that only chooses non-SSD drives, then run
ceph osd pool set <perf-pool-name> crush_rule YourNewRuleName
and the pool's data will move over to the non-SSD OSDs.
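
For example, assuming your spinning drives carry the usual "hdd" device
class (verify with "ceph osd crush class ls" or "ceph osd tree"), a
minimal sketch; "hdd-only" is just a placeholder rule name, and "default"
and "host" are the typical CRUSH root and failure domain:

# Replicated rule restricted to OSDs with the "hdd" device class:
ceph osd crush rule create-replicated hdd-only default host hdd
# Point the pool at the new rule; Ceph backfills the PGs automatically:
ceph osd pool set <perf-pool-name> crush_rule hdd-only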

Den fre 28 maj 2021 kl 02:18 skrev Jeremy Hansen <jer...@skidrow.la>:
>
>
> I’m very new to Ceph so if this question makes no sense, I apologize.  
> Continuing to study but I thought an answer to this question would help me 
> understand Ceph a bit more.
>
> Using cephadm, I set up a cluster.  Cephadm automatically creates a pool for 
> Ceph metrics.  It looks like one of my SSD OSDs was allocated for the PG.  
> I’d like to understand how to remap this PG so it’s not using the SSD OSDs.
>
> ceph pg map 1.0
> osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]
>
> OSD 28 is the SSD.
>
> Is this possible?  Does this make any sense?  I’d like to reserve the SSDs 
> for their own pool.
>
> Thank you!
> -jeremy



-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
