>
> One thing that popped up in my mind, if I do this, the remaining unused
> space, e.g., /dev/sdX2 in both disks, will be *partitions*, and not
> *disks*. Does Ceph have a problem with using partitions for OSDs?

Nope, that should be fine. In fact, back in the Filestore days you
generally had two partitions.
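If it helps, a rough sketch of what that looks like, reusing your /dev/sdX2
placeholder (substitute the real partition):

    # stand up a BlueStore OSD directly on the partition
    ceph-volume lvm create --data /dev/sdX2

    # or, if the cluster is cephadm-managed, roughly:
    # (cephadm has historically been pickier about partitions vs. whole
    #  disks, so check the behavior on your release)
    ceph orch daemon add osd <host>:/dev/sdX2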
>
>
>> I think you’d not have a good experience, see my compromise suggestion
>> above. Remind me how many chassis total? At a certain chassis count,
>> the importance of mirrored boot (and network bonding) diminishes.
>
> We have 13x chassis. I am already convinced to not have OSDs on VDs :-)

Having been there, I like to help others avoid the torment ;)

>
>
>> Does it show that the BBU is a supercap vs a battery?
>
> It looks like supercap:

Groovy. Those are usually rated for at least 3 years.

There's a handy script here
https://github.com/prometheus-community/node-exporter-textfile-collector-scripts
for getting StorCLI info into Prometheus. If you have a fleet-wide
Prometheus you might add that to your node_exporter deployment (rough
sketch of the wiring further down). It should be possible to give the
bundled, Ceph-managed node_exporter additional config to pick this up,
but I haven't done that myself.

>
> Cachevault_Info :
> ===============
>
> --------------------
> Property     Value
> --------------------
> Type         CVPM05
> Temperature  24 C
> State        Optimal
> --------------------
>
> ...
>
> GasGaugeStatus :
> ==============
>
> ------------------------------
> Property                 Value
> ------------------------------
> Pack Energy              136 J
> Capacitance              100 %
> Remaining Reserve Space  0
> ------------------------------
>
>
>> When I see RAID HBAs I either just set passthrough and ignore the RoC
>
> I think I'll do just that and just use MD for the small RAID volume for
> the OS.

Given your constraints, I think that's the best strategy. The only thing
MD for the OS won't give you is automatic handling of the loss of the
first boot drive across a power cycle, but I personally haven't found
that to be critical.
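For the MD mirror itself, it's the usual mdadm routine, something along
these lines (a sketch only -- the partition names are placeholders, and
the mdadm.conf path varies by distro):

    # mirror the two small OS partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # persist the array definition so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf

Most installers will also lay the OS down on an md mirror for you at
install time, which avoids doing this by hand.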
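And the Prometheus wiring I mentioned above is roughly: a cron job that
writes the collector script's output into a .prom file, plus node_exporter
pointed at that directory. The paths and script name here are just an
example -- check the repo for the exact invocation:

    # /etc/cron.d/storcli -- refresh controller/BBU metrics every 5 minutes
    # (ideally write to a temp file and mv it into place so scrapes never
    #  see a half-written file)
    */5 * * * * root /usr/local/bin/storcli.py > /var/lib/node_exporter/textfile/storcli.prom

    # node_exporter then needs the textfile collector enabled:
    node_exporter --collector.textfile.directory=/var/lib/node_exporter/textfile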
>
> Thank you,
> Gustavo

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]