[ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Kári Bertilsson
Hello. I previously enabled upmap and used automatic balancing with "ceph balancer on". I got very good results and the OSDs ended up with perfectly distributed PGs. Now, after adding several new OSDs, auto balancing no longer seems to be working. OSDs have 30-50% usage where previously all h…
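For context, enabling upmap balancing on Luminous normally involves the sequence below (a minimal sketch of the standard steps, not necessarily exactly what was run here):

    # upmap needs every client to speak at least the luminous protocol
    ceph osd set-require-min-compat-client luminous
    # put the balancer into upmap mode and turn it on
    ceph balancer mode upmap
    ceph balancer on
    # inspect what the balancer thinks it is doing
    ceph balancer status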

Re: [ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Kári Bertilsson
ceph version 12.2.8-pve1 on Proxmox. ceph osd df tree @ https://pastebin.com/e68fJ5fM. I added `debug mgr = 4/5` to the [global] section in ceph.conf on the active mgr and restarted the mgr service. Is this correct? I noticed some config settings in the mgr logs. Changed config to use "mgr/balancer/…
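For what it's worth, the log level can also be changed at runtime instead of editing ceph.conf, and on Luminous the balancer's own settings live in the config-key store under the mgr/balancer/ prefix, which is presumably what the truncated key above refers to. A sketch (<name> is a placeholder for the active mgr's id, and the max_misplaced value is only an example):

    # on the active mgr host: raise verbosity without a restart
    ceph daemon mgr.<name> config set debug_mgr 4/5
    # balancer settings are stored as config-keys on luminous
    ceph config-key set mgr/balancer/max_misplaced 0.05
    ceph config-key dump | grep balancer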

Re: [ceph-users] Need help with upmap feature on luminous

2019-02-05 Thread Kári Bertilsson
I am testing by manually running `ceph osd pg-upmap-items 41.1 106 125`. Nothing shows up in the logs of either OSD 106 or 125, and nothing happens.
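One way to confirm whether the mapping was actually recorded in the osdmap (a sketch, using the same pg id as above):

    # pg_upmap_items entries are stored in the osdmap itself
    ceph osd dump | grep pg_upmap_items
    # show the current up/acting sets for the pg
    ceph pg map 41.1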

[ceph-users] ceph osd pg-upmap-items not working

2019-02-27 Thread Kári Bertilsson
Hello. I am trying to diagnose why upmap stopped working where it was previously working fine. Trying to move pg 41.1 to OSD 123 has no effect and seems to be ignored.

# ceph osd pg-upmap-items 41.1 23 123
set 41.1 pg_upmap_items mapping to [23->123]

No rebalancing happens, and if I run it again it sh…
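Upmap entries are only honoured when every connected client is luminous-capable, so that is one thing worth ruling out here (a sketch; output formats vary by version):

    # what minimum client release does the osdmap require?
    ceph osd dump | grep require_min_compat_client
    # what do connected clients actually report?
    ceph features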

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-02-28 Thread Kári Bertilsson
> …put two PGs in the same rack if the crush rule doesn't allow it.
>
> Where are OSDs 23 and 123 in your cluster? What is the relevant crush rule?
>
> -- dan
>
> On Wed, Feb 27, 2019 at 9:17 PM Kári Bertilsson wrote:
> > Hello…
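To answer those questions, something along these lines shows where the two OSDs live and which rule the pool uses (a sketch; which pool is pool 41 depends on this cluster):

    # locate the two OSDs in the crush tree
    ceph osd find 23
    ceph osd find 123
    # which crush_rule each pool uses
    ceph osd pool ls detail
    # and the rule definitions themselves
    ceph osd crush rule dump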

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-03-09 Thread Kári Bertilsson
> …https://github.com/ceph/ceph/pull/26127 if you are eager to get out of
> the trap right now.
>
> https://github.com/ceph/ceph/pull/26179 …

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-03-18 Thread Kári Bertilsson
> …a kick. When my cluster isn't balancing when it's supposed to, I just
> run `ceph mgr fail {active mgr}` and within a minute or so the cluster
> is moving PGs around.
>
> On Sat, Mar 9, 2019 at 8:05 PM Kári Bertilsson wrote:
> > Thanks
> >
> > I did a…
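For completeness, finding the active mgr before failing it could look like this (a sketch; <active-mgr-name> is a placeholder for whatever the first command prints):

    # name of the currently active mgr
    ceph mgr dump | grep active_name
    # fail it over to a standby, which restarts the balancer module
    ceph mgr fail <active-mgr-name>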

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-04-04 Thread Kári Bertilsson
Yeah, I agree... the auto balancer is definitely doing a poor job for me. I have been experimenting with this for weeks, and I can do much better optimization than the balancer by looking at "ceph osd df tree" and manually running various ceph upmap commands. Too bad this is tedious work and tend…
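The offline optimizer in osdmaptool can take some of the tedium out of this; a rough sketch, assuming a Luminous osdmaptool and with <pool> standing in for the pool name:

    # grab the current osdmap
    ceph osd getmap -o /tmp/osdmap
    # compute up to 20 pg-upmap-items entries for one pool
    osdmaptool /tmp/osdmap --upmap /tmp/upmap.sh --upmap-pool <pool> --upmap-max 20
    # review the generated commands, then apply them
    cat /tmp/upmap.sh
    source /tmp/upmap.sh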

[ceph-users] How to enable TRIM on dmcrypt bluestore ssd devices

2019-04-26 Thread Kári Bertilsson
Hello. I am using "ceph-deploy osd create --dmcrypt --bluestore" to create the OSDs. I know there is some security concern when enabling TRIM/discard on encrypted devices, but I would rather get the performance increase. I am wondering how to enable TRIM in this scenario?
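I am not aware of a ceph-deploy flag for this, so what follows is only a sketch of the general dm-crypt side, assuming a cryptsetup new enough to have the refresh action; whether bluestore itself issues discards depends on the bdev_enable_discard option, which may not exist in all 12.2.x builds:

    # find the dm-crypt mapping backing the OSD
    dmsetup ls --target crypt
    # re-open it with discards allowed (<mapping-name> is a placeholder)
    cryptsetup --allow-discards refresh <mapping-name>

    # ceph.conf, [osd] section -- ask bluestore to issue discards at all,
    # if the option exists in your build:
    #   bdev_enable_discard = true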

[ceph-users] Getting "No space left on device" when reading from cephfs

2019-05-09 Thread Kári Bertilsson
Hello. I am running CephFS with 8/2 erasure coding. I had about 40 TB usable free (110 TB raw); one small disk crashed, and I added 2x 10 TB disks. Now it's backfilling & recovering with 0 B free, and I can't read a single file from the file system... This happened with max-backfilling 4, but I have increa…
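When OSDs hit the full threshold, CephFS client I/O can fail with ENOSPC like this; one hedged way to regain access while the backfill completes is raising the full ratio, at the cost of the remaining safety margin (the 0.96 value is only an example, and this is risky on genuinely full disks):

    # see which OSDs are at or over the full ratios
    ceph health detail
    ceph osd df tree
    # temporarily raise the full ratio
    ceph osd set-full-ratio 0.96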

Re: [ceph-users] Getting "No space left on device" when reading from cephfs

2019-05-09 Thread Kári Bertilsson
> …croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Thu, May 9, 2019 at 2:08 PM Kári Bertilsson wrote:
> > Hello
> >
> > I am running cephfs with 8/2 erasure coding. I had about 40tb usable
> > free(110tb ra…