Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-06 Thread Jesus Cea
On 06/07/18 14:56, Steffen Winther Sørensen wrote: >> Really stated where, anyone? > Right here, too bad would have been nice. It will come... eventually. I hope. In the meantime, if yo

[ceph-users] PG explosion with erasure codes, power of two and "x pools have many more objects per pg than average"

2018-05-25 Thread Jesus Cea
Hi there. I have configured a pool with an 8+2 erasure code. My target, by space usage and OSD configuration, would be 128 PGs, but since each configured PG will be using 10 actual "PGs", I have created the pool with only 8 PGs (80 real PGs). Since I can increase PGs but not decrease them, this decision
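(For context, a minimal sketch of the kind of setup being described; the profile and pool names are hypothetical, not taken from the post, and the failure domain is an assumption:)

"""
# 8 data chunks + 2 coding chunks; crush-failure-domain=host is an assumption
ceph osd erasure-code-profile set ec82 k=8 m=2 crush-failure-domain=host
# pg_num kept at 8 because each PG places k+m = 10 shards on OSDs
ceph osd pool create ecpool 8 8 erasure ec82
"""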

Re: [ceph-users] Increasing number of PGs by not a factor of two?

2018-05-25 Thread Jesus Cea
On 17/05/18 20:36, David Turner wrote: > By sticking with PG numbers as a base 2 number (1024, 16384, etc) all of > your PGs will be the same size and easier to balance and manage.  What > happens when you have a non base 2 number is something like this.  Say > you have 4 PGs that are all 2GB in si

Re: [ceph-users] Increasing number of PGs by not a factor of two?

2018-05-25 Thread Jesus Cea
OK, I am writing this so you don't waste your time correcting me. I beg your pardon. On 25/05/18 18:28, Jesus Cea wrote: > So, if I understand correctly, ceph tries to do the minimum splits. If > you increase PG from 8 to 12, it will split 4 PGs and leave the other 4 > PGs alon

Re: [ceph-users] Increasing number of PGs by not a factor of two?

2018-05-25 Thread Jesus Cea
On 25/05/18 20:21, David Turner wrote: > If you start your pool with 12 PGs, 4 of them will have double the size > of the other 8.  It is 100% based on a power of 2 and has absolutely > nothing to do with the number you start with vs the number you increase > to.  If your PG count is not a power of
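(A sketch of the operation this thread is discussing, with a hypothetical pool name; note that pgp_num has to follow pg_num before data actually moves:)

"""
ceph osd pool set mypool pg_num 16
ceph osd pool set mypool pgp_num 16
"""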

Re: [ceph-users] PG explosion with erasure codes, power of two and "x pools have many more objects per pg than average"

2018-05-25 Thread Jesus Cea
On 25/05/18 20:26, Paul Emmerich wrote: > Answers inline. > >> 2018-05-25 17:57 GMT+02:00 Jesus Cea <j...@jcea.es>: recommendation. Would be nice to know too if >> being "close" to a power of two is better than being far away and if it >>

[ceph-users] Rebalancing an Erasure coded pool seems to move far more data that necessary

2018-05-25 Thread Jesus Cea
I have an Erasure Coded 8+2 pool with 8 PGs. Each PG is spread over 10 OSDs using Reed-Solomon (the erasure code). When I rebalance the cluster I see two PGs moving: "active+remapped+backfilling". A "pg dump" shows this: """ root@jcea:/srv# ceph --id jcea pg dump|grep backf dumped all 75.5 25
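(To see which PGs are backfilling and where their shards are headed, something like this works; 75.5 is the PG id visible in the dump above:)

"""
ceph --id jcea pg dump pgs_brief | grep -E 'backfill|remapped'
ceph pg 75.5 query    # compare the "up" and "acting" OSD sets for the moving PG
"""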

[ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
Hi there. I have an issue with cephfs and multiple datapools inside. I have like SIX datapools inside the cephfs; I control where files are stored using xattrs on the directories. The "root" directory only contains directories with "xattrs" requesting new objects to be stored in different pools.
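(For readers unfamiliar with the mechanism: directory-to-pool placement is set up roughly like this; the pool name and mount path below are made up for illustration:)

"""
# attach an extra data pool to the filesystem
ceph fs add_data_pool cephfs cephfs_ssd
# new files created under this directory will be stored in that pool
setfattr -n ceph.dir.layout.pool -v cephfs_ssd /mnt/cephfs/ssd
getfattr -n ceph.dir.layout /mnt/cephfs/ssd
"""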

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
On 03/07/18 13:08, John Spray wrote: > Right: as you've noticed, they're not spurious, they're where we keep > a "backtrace" xattr for a file. > > Backtraces are lazily updated paths that enable CephFS to map an > inode number to a file's metadata, which is needed when resolving hard > links or N
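(What those "empty" objects look like from rados, as a rough sketch; the pool and object names below are only examples, and the backtrace John describes sits in the object's "parent" xattr:)

"""
rados -p cephfs_data ls | head
# the zero-size objects still carry an xattr named "parent" (the encoded backtrace)
rados -p cephfs_data listxattr 10000000001.00000000
"""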

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
On 03/07/18 13:46, John Spray wrote: > To directly address that warning rather than silencing it, you'd > increase the number of PGs in your primary data pool. Since the number of PGs per OSD is limited (or, at least, there is a recommended limit), I would rather invest them in my datapools. Since I am
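(The option John suggests, sketched with a hypothetical pool name and pg count; presumably the warning in question is the "more objects per pg than average" one from the other thread:)

"""
ceph osd pool set cephfs_data pg_num 64
ceph osd pool set cephfs_data pgp_num 64
# the skew warning threshold itself is mon_pg_warn_max_object_skew (default 10)
"""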

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
On 03/07/18 15:09, Steffen Winther Sørensen wrote: >> On 3 Jul 2018, at 12.53, Jesus Cea wrote: >> Hi there. >> I have an issue with cephfs and multiple datapools inside. I have like >> SIX datapools inside the cephfs, I control whe