You can also just remove the caching from the pool, increase the pgs, then
set it back up as a cache pool. It'll require downtime if it's in front of
an EC rbd pool or EC cephfs on Jewel or Hammer, but it won't take long as
all of the objects will be gone.
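For reference, the remove/resize/re-add sequence described above would look roughly like the sketch below. This is not a command listing from the thread; "cold-pool" and "hot-pool" are placeholder names for the backing EC pool and the cache pool, and the PG count is an example value. The flush step is why the cache ends up empty and the re-add is quick:

    # stop new writes landing in the cache and flush/evict everything
    ceph osd tier cache-mode hot-pool forward   # newer releases want --yes-i-really-mean-it here
    rados -p hot-pool cache-flush-evict-all
    # detach the (now empty) cache tier from the backing pool
    ceph osd tier remove-overlay cold-pool
    ceph osd tier remove cold-pool hot-pool
    # grow the PG count while the pool is not acting as a cache
    ceph osd pool set hot-pool pg_num 256
    ceph osd pool set hot-pool pgp_num 256
    # re-attach it as a writeback cache tier
    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool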
Why do you need to increase the PG count?
What are your pg numbers for each pool? Your % used in each pool? And
number of OSDs?
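In case it helps anyone following along, those three numbers can be pulled with standard commands (no thread-specific pool names assumed):

    ceph df                   # per-pool USED / %USED and global raw usage
    ceph osd pool ls detail   # pg_num / pgp_num (and cache tier settings) per pool
    ceph osd stat             # total number of OSDs, and how many are up/in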
On Sun, May 28, 2017, 10:30 AM Konstantin Shalygin wrote:
>
> > You can also just remove the caching from the pool, increase the pgs,
> > then set it back up as a cache pool. It'll require downtime if it's
> > in front of an EC rbd pool or EC cephfs on Jewel or Hammer, but it
> > won't take long as all of the objects will be gone.
AFAIK, you only have 2 networks for Ceph. The first is the private cluster
network, which carries internal traffic between the OSDs; only servers
running OSD daemons need access to this vlan/subnet. The other is the
public network. The following things need access to this subnet/vlan:
1) Anything that accesses data like rbds, cephfs, or usin
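For context, those two networks map to the standard ceph.conf options; a minimal sketch (the subnets here are made-up placeholders) would be:

    [global]
        # client <-> mon/osd traffic; everything listed above must reach this
        public network = 192.168.1.0/24
        # osd <-> osd replication/recovery traffic; OSD hosts only
        cluster network = 192.168.2.0/24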
Thanks David.
>>Every single one of the above needs to be able to access all of the mons and
>>osds. I don't think you can have multiple subnets for this,
Yes, that's why I asked this multi-tenancy question.
>>but you can do this via routing. Say your private osd network is
>>xxx.xxx.10.0, your public
Hi Jake,
200 MB/s is pretty low load across 5 servers. I wouldn't expect the
tp_osd_tp threads to be so heavily loaded that they stop responding for
60s. Sounds like a bug. Can you reproduce it? It might be worth
trying it with debug bluestore = 20.
Mark
On 05/27/2017 05:02 AM, Jake Gri
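For anyone wanting to follow Mark's suggestion, the debug level can be raised either persistently in ceph.conf or injected at runtime; a rough sketch:

    # persistent, in ceph.conf on the affected OSD hosts
    [osd]
        debug bluestore = 20

    # or temporarily, without restarting the daemons
    ceph tell osd.* injectargs '--debug-bluestore 20'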
On 05/28/2017 09:43 PM, David Turner wrote:
> What are your pg numbers for each pool? Your % used in each pool? And
> number of OSDs?
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    89380G      74755G      14625G           16.36
POOLS:
    NAME        ID      USED      %USED
If you aren't increasing your target max bytes and target full ratio, I
wouldn't bother increasing your pgs on the cache pool. It will not gain
any increased size at all as its size is dictated by those settings and not
the total size of the cluster. It will remain as redundant as always.
If you
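For clarity, the settings David refers to are per-pool values on the cache pool; a hedged example of raising them ("hot-pool" and the sizes are placeholders, not values from the thread) would be:

    ceph osd pool set hot-pool target_max_bytes 1099511627776   # 1 TiB ceiling for the cache
    ceph osd pool set hot-pool cache_target_full_ratio 0.8      # evict when 80% of that is reached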
On 05/29/2017 10:08 AM, David Turner wrote:
> If you aren't increasing your target max bytes and target full ratio,
> I wouldn't bother increasing your pgs on the cache pool. It will not
> gain any increased size at all as its size is dictated by those
> settings and not the total size of the cluster.
Never. I would only consider increasing it if you were increasing your
target max bytes or target full ratio.
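To check what those are currently set to on a given cache pool (pool name is a placeholder), something like this should show the values:

    ceph osd pool get hot-pool target_max_bytes
    ceph osd pool get hot-pool cache_target_full_ratio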
On Sun, May 28, 2017, 11:14 PM Konstantin Shalygin wrote:
>
> On 05/29/2017 10:08 AM, David Turner wrote:
>
> If you aren't increasing your target max bytes and target full ratio, I
> wouldn't bother increasing your pgs on the cache pool.