On 03/07/18 13:46, John Spray wrote:
> To directly address that warning rather than silencing it, you'd
> increase the number of PGs in your primary data pool.

Since the number of PGs per OSD is limited (or at least has a recommended
limit), I would rather invest them in my data pools. Since my data pools
use erasure coding, the PG explosion is significant (my EC profile is
8+2). Wasting PGs on an empty data pool (the empty root data pool does
not use EC; it uses size 3), when the increase is irreversible, is... a
waste. My resources are scarce.
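To make the "PG explosion" concrete, here is a rough back-of-the-envelope
sketch (my own illustrative pool names, pg_num values and OSD count, not
real figures from my cluster): each PG of an 8+2 EC pool occupies ten
OSDs, so it consumes the per-OSD PG budget more than three times faster
than a size:3 pool with the same pg_num.

    # Illustrative estimate of PG shards per OSD. All numbers below are
    # made-up assumptions for the sake of the example.
    NUM_OSDS = 50  # hypothetical cluster size

    # Each PG places one copy per replica (size) or one chunk per k+m
    # shard (EC), so an 8+2 EC pool costs 10 placements per PG, while a
    # size:3 pool costs 3.
    pools = {
        "cephfs_data_ec":   {"pg_num": 512, "shards_per_pg": 8 + 2},  # EC 8+2
        "cephfs_data_root": {"pg_num": 64,  "shards_per_pg": 3},      # size 3
        "cephfs_metadata":  {"pg_num": 64,  "shards_per_pg": 3},      # size 3
    }

    total_shards = sum(p["pg_num"] * p["shards_per_pg"] for p in pools.values())

    for name, p in pools.items():
        share = p["pg_num"] * p["shards_per_pg"] / NUM_OSDS
        print(f"{name:18s} -> ~{share:.1f} PG placements per OSD")

    print(f"total: ~{total_shards / NUM_OSDS:.1f} PG placements per OSD")

With numbers like these, the empty replicated pool is a small but
permanent bite out of a budget I would rather spend on the EC data pool.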

> There's a conflict here between pools with lots of data (where the MB
> per PG might be the main concern, not the object size), vs.
> metadata-ish pools (where the object counter per PG is the main
> concern).  Maybe it doesn't really make sense to group them all
> together when calculating the average object-per-pg count that's used
> in this health warning -- I'll bring that up over on ceph-devel in a
> moment.

This is a good point.

Some weeks ago I asked what is worse for an OSD: the number of PGs or
the number of objects stored. As a programmer I would say the number of
objects is worse (each object has to be tracked), so it would be better
to have 1000 PGs with 100 objects than 10 PGs with a million objects.
Nevertheless, the answer on the list was "objects don't matter, PGs do".
I still don't understand the reasoning behind that.

-- 
Jesús Cea Avión                         _/_/      _/_/_/        _/_/_/
j...@jcea.es - http://www.jcea.es/     _/_/    _/_/  _/_/    _/_/  _/_/
Twitter: @jcea                        _/_/    _/_/          _/_/_/_/_/
jabber / xmpp:j...@jabber.org  _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
