croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Thu, May 9, 2019 at 2:08 PM Kári Bertilsson
> wrote:
Hello
I am running cephfs with 8/2 erasure coding. I had about 40tb usable
free (110tb raw), one small disk crashed and I added 2x10tb disks. Now it's
backfilling & recovering with 0B free and I can't read a single file from
the file system...
This happened with max-backfills at 4, but I have increa
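The knobs that usually matter when a cluster reports 0B free mid-backfill are the full/backfillfull ratios and the backfill throttle. A rough sketch of what to check first (the ratio values below are only examples, not recommendations):

# ceph osd dump | grep ratio                          (current full / backfillfull / nearfull ratios)
# ceph osd set-backfillfull-ratio 0.92                (temporarily let backfill proceed onto fuller OSDs)
# ceph osd set-nearfull-ratio 0.88
# ceph tell osd.* injectargs '--osd-max-backfills 1'  (throttle backfill while the new disks fill up)

Re-tightening the ratios and raising osd-max-backfills again once the new disks have absorbed data would be the follow-up.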
Hello
I am using "ceph-deploy osd create --dmcrypt --bluestore" to create the
OSDs.
I know there is some security concern with enabling TRIM/discard on
encrypted devices, but I would rather have the performance increase.
Wondering how to enable TRIM in this scenario?
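Not an authoritative answer, but discard usually has to be allowed at two layers, dm-crypt and BlueStore. A sketch, assuming the release in use ships the bdev_enable_discard option (the device path and mapper name below are placeholders):

# cryptsetup --allow-discards luksOpen /dev/sdX osd-data-crypt    (or the 'discard' option in /etc/crypttab)

and in ceph.conf under [osd]:

bdev_enable_discard = true

Since ceph-deploy/ceph-volume manage the dmcrypt mapping themselves, the cryptsetup line is illustrative only; the important part is that discards are allowed on the dm-crypt device and enabled in BlueStore.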
Yeah, I agree... the auto balancer is definitely doing a poor job for me.
I have been experimenting with this for weeks and I can get a much better
distribution than the balancer produces by looking at "ceph osd df tree" and
manually running various ceph upmap commands.
Too bad this is tedious work, and tend
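For what it's worth, essentially the same upmap optimization the balancer module runs can be driven by hand with osdmaptool against an offline copy of the osdmap, which removes some of the tedium (the pool name and limit below are placeholders):

# ceph osd getmap -o om
# osdmaptool om --upmap out.txt --upmap-pool <poolname> --upmap-max 20
# less out.txt      (review the proposed pg-upmap-items commands)
# source out.txt    (apply them)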
> a kick. When my cluster isn't
> balancing when it's supposed to, I just run `ceph mgr fail {active mgr}`
> and within a minute or so the cluster is moving PGs around.
>
> On Sat, Mar 9, 2019 at 8:05 PM Kári Bertilsson
> wrote:
>
>> Thanks
>>
>> I did a
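For anyone wanting to try the mgr kick suggested above: the active mgr name is shown in the status output, so something like this should do it (node names are just examples):

# ceph -s | grep mgr        (shows e.g. "mgr: node1(active), standbys: node2")
# ceph mgr fail node1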
> https://github.com/ceph/ceph/pull/26127 if you are eager to get out of
> the trap right now.
>
> <https://github.com/ceph/ceph/pull/26179>
> put two PGs in the same rack if the crush
> rule doesn't allow it.
>
> Where are OSDs 23 and 123 in your cluster? What is the relevant crush rule?
>
> -- dan
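For reference, those two questions can be answered with something like this (pool and rule names are placeholders):

# ceph osd find 23                         (crush location of OSD 23)
# ceph osd find 123
# ceph osd pool get <poolname> crush_rule
# ceph osd crush rule dump <rulename>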
>
>
> On Wed, Feb 27, 2019 at 9:17 PM Kári Bertilsson
> wrote:
Hello
I am trying to diagnose why upmap stopped working where it was previously
working fine.
Trying to remap PG 41.1 from OSD 23 to OSD 123 has no effect and seems to be ignored.
# ceph osd pg-upmap-items 41.1 23 123
set 41.1 pg_upmap_items mapping to [23->123]
No rebalancing happens, and if I run it again it sh
I am testing this by manually running `ceph osd pg-upmap-items 41.1 106 125`.
Nothing shows up in the logs on either OSD 106 or 125, and nothing happens.
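A few checks that may help when pg-upmap-items appears to be silently ignored; the mapping can be stored but then discarded again, e.g. if it conflicts with the crush rule or if old clients hold the cluster below the luminous feature level:

# ceph osd dump | grep upmap                        (was the mapping actually stored?)
# ceph osd dump | grep require_min_compat_client    (needs to be luminous for upmap)
# ceph features                                     (any pre-luminous clients still connected?)
# ceph pg map 41.1                                  (current up/acting sets for this PG)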
ceph version 12.2.8-pve1 on proxmox
ceph osd df tree @ https://pastebin.com/e68fJ5fM
I added `debug mgr = 4/5` to the [global] section in ceph.conf on the active
mgr and restarted the mgr service. Is this correct?
I noticed some config settings in the mgr logs. Changed the config to use
"mgr/balancer/
Hello
I previously enabled upmap and used automatic balancing with "ceph balancer
on". I got very good results and the OSDs ended up with perfectly distributed
PGs.
Now, after adding several new OSDs, auto balancing does not seem to be
working anymore. OSDs have 30-50% usage where previously all h
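When the balancer goes quiet after new OSDs are added, the first things worth checking are whether it is still enabled and what it thinks of the current distribution; a minimal sequence (the plan name is arbitrary):

# ceph balancer status
# ceph balancer eval                 (current distribution score, lower is better)
# ceph balancer optimize newplan
# ceph balancer show newplan         (inspect before executing)
# ceph balancer execute newplan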